Artificial Intelligence and the Limits of Human Judgment

Assessment

Presentation

English

9th - 12th Grade

Practice Problem

Medium

Created by

Darren Walshe

11 Slides • 25 Questions

1

Artificial Intelligence and the Limits of Human Judgment

2

To what extent should machines be trusted to make important decisions that affect human lives?

3

Have you ever followed a recommendation from technology that you later questioned?

4

Is efficiency always more important than human judgment?

Can you think of situations where speed might actually cause problems?

5

How might long-term dependence on technology change the way people think or solve problems?

6

Paragraph 1

Artificial intelligence has moved rapidly from a speculative concept to an embedded feature of everyday life. Algorithms now determine which news articles individuals read, assist doctors in diagnosing illness, and influence financial decisions once made exclusively by human experts. Proponents argue that these systems enhance efficiency and reduce human error by processing vast quantities of data at speeds no individual could match. Yet as reliance on artificial intelligence increases, questions have emerged about the extent to which human judgment is being reshaped—or quietly displaced—by automated decision-making systems.

7

Paragraph 2

One reason artificial intelligence appears so compelling is its perceived objectivity. Human judgment is vulnerable to fatigue, emotion, and cognitive bias, whereas machines operate according to predefined parameters. In fields such as medical imaging and credit risk assessment, AI systems have demonstrated impressive accuracy rates. However, these outcomes often obscure a critical limitation: algorithms are only as reliable as the data used to train them. When historical data reflects social inequalities or flawed assumptions, artificial intelligence may amplify rather than correct systemic distortions.

8

Paragraph 3

Beyond technical concerns, the psychological impact of automated systems deserves closer scrutiny. Research indicates that individuals tend to defer to algorithmic recommendations, even when contradictory evidence is available. This phenomenon, sometimes described as automation bias, can erode critical thinking and personal accountability. Over time, professionals may lose confidence in their own expertise, relying instead on systems they do not fully understand. Such dependence raises ethical questions about responsibility, particularly in high-stakes contexts where errors carry serious consequences.

9

Paragraph 4

Despite these risks, artificial intelligence is often presented as an inevitable progression rather than a negotiable choice, a framing that subtly discourages sustained public debate about how much authority machines should possess (A). While technological innovation has historically driven economic and social advancement, it has also required deliberate regulation to prevent misuse and unintended consequences, particularly when efficiency begins to outweigh ethical deliberation (B). The challenge, therefore, lies not in rejecting artificial intelligence outright but in cultivating systems that complement human judgment rather than override it, preserving accountability in complex decision-making environments (C). Without clear institutional safeguards, automated processes may prioritize speed and convenience over transparency, leaving critical moral and social questions insufficiently examined (D). As a result, societies must confront the growing tension between technological convenience and democratic control in an increasingly automated world.

10

Paragraph 5

Ultimately, the integration of artificial intelligence into decision-making processes forces a reconsideration of what it means to exercise judgment responsibly. While machines can optimize outcomes based on measurable criteria, they cannot account for moral ambiguity or contextual nuance. Education systems, therefore, face the task of preparing individuals not merely to use intelligent systems, but to question them. The future of artificial intelligence depends less on technological sophistication than on humanity’s willingness to define the boundaries within which such tools should operate.

11

Multiple Choice

Q1. The word “displaced” in paragraph 1 is closest in meaning to:

1

improved

2

replaced

3

examined

4

intensified

12

Multiple Choice

Q2. According to Paragraph 2, why can artificial intelligence produce misleading outcomes?

1

It processes information too rapidly

2

It lacks sufficient computational power

3

It reflects limitations present in its training data

4

It operates independently of human design

13

Multiple Choice

Q3. Why does the author mention medical imaging and credit risk assessment in Paragraph 2?

1

To provide examples of AI’s practical applications

2

To argue these fields should rely exclusively on AI

3

To show areas where accuracy is less important

4

To criticize professional resistance to automation

14

Multiple Choice

Q4. In Paragraph 3, why does the author discuss professionals’ confidence?

1

To suggest expertise becomes unnecessary

2

To compare novice and experienced workers

3

To argue for complete reliance on algorithms

4

To highlight a psychological consequence of automation

15

Multiple Choice

Q5. Inference Question (Paragraph 3)

What can be inferred about long-term reliance on automated systems?

1

It encourages ethical independence

2

It eliminates responsibility for errors

3


It may weaken individual decision-making authority

4

It improves transparency in complex situations

16

Multiple Choice

Q6. Vocabulary (Paragraph 4)

The word “safeguards” in paragraph 4 is closest in meaning to:

1

protections

2

incentives

3

limitations

4

alternatives

17

Multiple Choice

Q7. Why does the author mention historical regulation in Paragraph 4?

1

To show innovation has always been unrestricted

2

To argue regulation prevents progress

3

To demonstrate that oversight has accompanied past advances

4

To criticize governments for slow responses

18

Multiple Choice

Q8. Contrast Question (Paragraph 4)

Paragraph 4 contrasts the appeal of technological efficiency with concerns about:

1

economic inequality

2

institutional accountability

3

scientific accuracy

4

user accessibility

19

Multiple Choice

Q9. Where would the following sentence best fit in Paragraph 4?

“This perspective risks minimizing the role of human choice in shaping technological systems.”

1

A

2

B

3

C

4

D

20

Multiple Select

Q10. Summary Question

Which THREE of the following statements best express the main ideas of the passage?

1

Artificial intelligence is increasingly embedded in daily decision-making

2

Algorithms eliminate the need for ethical judgment

3

Psychological dependence on AI can affect human responsibility

4

Careful oversight is necessary when integrating automated systems

5

Technological progress should proceed without restriction

21

Multiple Choice

22

Multiple Choice

Q1. The word ambiguous means:

1

deliberately misleading

2

open to more than one interpretation

3

emotionally neutral

4

legally uncertain

23

Multiple Choice

Q2. An untenable position is one that is:

1

unpopular

2

unstable

3

impossible to continue

4

difficult to explain

24

Multiple Choice

Q3. To mitigate damage is to:

1

reduce its seriousness

2

deny responsibility for it

3

prevent it completely

4

shift it elsewhere

25

Multiple Choice

Q4. A meticulous person is especially careful about:

1

authority

2

efficiency

3

details

4

tradition

26

Multiple Choice

Q5. Something inherent is:

1

naturally part of something

2

recently added

3

artificially created

4

rarely observed

27

Multiple Choice

Q6. To advocate a policy is to:

1

enforce it

2

publicly support it

3

quietly question it

4

reluctantly accept it

28

Multiple Choice

Q7. A discrepancy refers to a:

1

contradiction

2

consequence

3

difference

4

misinterpretation

29

Multiple Choice

Q8. A pragmatic decision is based on:

1

theory

2

emotion

3

tradition

4

practical considerations

30

Multiple Choice

Q9. Something counterproductive is likely to:

1

delay results

2

produce the opposite effect

3

require cooperation

4

improve efficiency

31

Multiple Choice

Q10. A prevalent problem is one that is:

1

hidden

2

controversial

3

widespread

4

temporary

32

Multiple Choice

Q11. To underscore a point is to:

1

emphasize it

2

question it

3

simplify it

4

weaken it

33

Multiple Choice

Q12. A feasible plan is one that is:

1

ambitious

2

theoretical

3

morally acceptable

4

realistically achievable

34

Multiple Choice

Q13. Someone who is reluctant feels:

1

confused

2

hesitant

3

motivated

4

exhausted

35

Multiple Choice

Q14. The word subsequent refers to something that:

1

occurs earlier

2

happens unexpectedly

3

happens later

4

happens repeatedly

36

Multiple Choice

Q15. To scrutinize data is to:

1

accept it quickly

2

organize it efficiently

3

examine it carefully

4

dismiss it entirely
