The danger of automation

Reading time: 7 minutes

Ever since computers and IT systems became part of daily life, a new cognitive error has been affecting human decision making. Automation bias describes the phenomenon that leads an individual to neglect, or to avoid looking for, information that contradicts a computer-generated solution assumed to be correct and optimal. At a conference on intelligent systems organized by the American Institute of Aeronautics and Astronautics in 2004, Mary Louise Cummings, one of the first female fighter pilots of the United States Navy and later a professor, presented the risks associated with this bias in situations where a decision must be made in critically limited time[1].

When developing intelligent decision support systems, it is fundamental to define how automation can help a human decision maker and what level of autonomy to assign to the machine. The Fitts list, named after the American psychologist Paul Fitts, describes the respective strengths of humans and computers in decision making[2].

Humans excel at perceiving patterns, improvising and using flexible procedures, recalling relevant facts at the appropriate time, reasoning inductively and exercising judgment. Computers, by contrast, are better at responding quickly to control tasks, performing repetitive and routine tasks, reasoning deductively and handling many complex tasks simultaneously.

The levels of automation

The levels of automation describe the degree of autonomy left to the system, on a scale from 1 to 10. At the lowest level the computer offers no support and the person makes every decision. Moving up, the machine offers a complete set of alternatives, narrowing down to a single suggestion at level 4. At the fifth level the system executes the action only if the person confirms it, and at the sixth it gives the operator a limited time window to veto before automatic execution. At level 7 the computer carries out the action and then informs the human, at the eighth it informs them only if asked, and at the ninth only if it decides to. At the highest level of automation, the system takes every decision autonomously, ignoring the human.
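To make the scale concrete, here is a minimal Python sketch of the ten levels as described above. Level 3 (the computer narrows the set of alternatives) is filled in from Sheridan's standard scale, and names such as `AutomationLevel` and `human_can_override` are illustrative, not taken from the paper.

```python
# A minimal sketch of the 10-level automation scale discussed above.
# Class and function names are illustrative assumptions, not from the source.
from enum import IntEnum


class AutomationLevel(IntEnum):
    """Degree of autonomy granted to the machine, from 1 (none) to 10 (full)."""
    HUMAN_ONLY = 1             # computer offers no support; human decides everything
    ALL_ALTERNATIVES = 2       # computer proposes a complete set of alternatives
    NARROWED_ALTERNATIVES = 3  # computer narrows the selection (standard Sheridan scale)
    SINGLE_SUGGESTION = 4      # computer proposes a single action
    EXECUTE_ON_APPROVAL = 5    # executes only if the human confirms
    VETO_WINDOW = 6            # executes automatically unless the human vetoes in time
    INFORM_AFTER_ACTION = 7    # acts, then informs the human
    INFORM_IF_ASKED = 8        # acts, informs only if the operator asks
    INFORM_IF_IT_DECIDES = 9   # acts, informs only if it decides to
    FULL_AUTONOMY = 10         # acts autonomously, ignoring the human


def human_can_override(level: AutomationLevel) -> bool:
    """Up to level 5 the human holds the decision; at level 6 they can still veto."""
    return level <= AutomationLevel.VETO_WINDOW


if __name__ == "__main__":
    for level in AutomationLevel:
        print(f"{level.value:>2} {level.name:<22} human override: {human_can_override(level)}")
```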


Human beings, Cummings explains, tend to make mistakes under stress because of several factors that underlie automation bias. Among them are other cognitive distortions, such as confirmation bias, which leads the operator to look only for information that supports their own thesis, and assimilation bias, the tendency to process new contradictory information so that it fits coherently into the previously formed mental model, effectively ignoring evidence that discredits their own opinion[3][4]. In addition, automation bias acts as a form of cognitive conservation: the less energy the brain has to spend, the more a particular decisional pathway is preferred. Leaving as many decisions as possible to the computer is, therefore, an excellent way to save energy.

Case histories

According to a 1994 study, 40% of commercial pilots turned out to be over-reliant on systems that, at an automation level of 4, suggested flight plans that were significantly sub-optimal[5]. As long as the problem is merely inefficient plans, the consequences are relative. Sadly, automation bias has also claimed victims in the world of aviation.

In 1972, Eastern Air Lines Flight 401 crashed in the Florida Everglades, killing 101 passengers and crew members. The accident was most likely caused by excessive confidence in the aircraft's flight systems. During the landing checklist, the nose gear was signaled as unsafe. The crew engaged the autopilot to focus on the problem and, several minutes later, failed to notice a gradual loss of altitude, probably caused by one of the pilots inadvertently disengaging the autopilot by bumping the control stick. Confident that the flight system would alert them to any emerging problem, the crew did not notice the descent until it was too late, and the plane crashed.

To reduce the effect of this cognitive error, many computers now show the system's reliability trend on a display. In one experiment, pilots who had this trend at their disposal showed a weaker automation bias than those who received only the system's overall reliability[6]. The author concludes that the development of intelligent systems must treat the operator not as a peripheral device but as an integrated component that will ultimately determine the success or failure of the system.

Carlo Sordini


[1] Cummings, M. L., Automation Bias in Intelligent Time Critical Decision Support Systems, Collection of Technical Papers, AIAA 1st Intelligent Systems Technical Conference, 2004.

[2] Chapanis, A., Frick, F. C., Garner, W. R., Gebhard, J. W., Grether, W. F., Henneman, R. H., Kappauf, W. E., Newman, E. B., and Williams, A. C., Human Engineering for an Effective Air Navigation and Traffic Control System, P. M. Fitts, Ed., National Research Council, Washington, DC, 1951.

[3] Lord, C. G., Ross, L., and Lepper, M. R., Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence, Journal of Personality and Social Psychology, 37, 1979, 2098-2109.

[4] Carroll, J. M. and Rosson, M. B., Paradox of the Active User, in Interfacing Thought: Cognitive Aspects of Human-Computer Interaction, J. M. Carroll, Ed., MIT Press, Cambridge, MA, 1987, 80-111.

[5] Layton, C., Smith, P. J., and McCoy, E., Design of a cooperative problem-solving system for en-route flight planning: An empirical evaluation, Human Factors, 36, 1994, 94-119.

[6] McGuirl, J. M. and Sarter, N. B., How are we doing? Presenting System Confidence Information to Support Trust Calibration and Adaptive Function Allocation, Human Factors and Ergonomics Society 47th Annual Meeting, Denver, CO, 2003.

