The Automation Paradox: How More Tech Can Mean More Human Challenges
Balancing Technological Advancements with Human Factors
Summary: The article discusses the 'Irony of Automation', illustrating how increased automation can unexpectedly complicate human tasks. It emphasises the importance of UX professionals in addressing these challenges through human-centric design and suggests practical strategies for maintaining human engagement and skills in automated systems.
"The 'Irony of Automation' is a concept that presents a curious paradox: the more we rely on advanced automated systems the more we might actually encounter human performance issues. This irony is increasingly relevant in our technology-driven world.
Origins and Meaning
The term was coined by Lisanne Bainbridge in her 1983 paper "Ironies of Automation". It points to the unintended consequences of increased automation. As systems become more autonomous, the human operator's role shifts from active participant to passive monitor, which can lead to decreased vigilance and slower responses to unexpected failures.
Bainbridge outlined two main ironies:
The first is that automation often fails to reduce human workload as intended. Routine tasks may be automated, but this leaves operators with the more complex and difficult work of monitoring, problem-solving, and taking over when the automation fails. These tasks may require skills and knowledge that operators lack if they are out of the loop most of the time.
The second concerns vigilance and monitoring. Automation aims to eliminate human operators because designers see them as unreliable, yet the tasks left to operators require even greater skill and performance. Operators end up needing more training and expertise, not less, to monitor highly reliable systems and deal with novel failures outside the scope of the automation.
If the rationale for automation is that machines can perform some functions better than humans, then how can humans be reliably expected to monitor the machine's performance?
Several other ironies are presented, including:
Automation complicating the operator's task instead of simplifying it
Automation obscuring system failures instead of making them obvious
Automation deskilling operators and undermining their sense of achievement
These ironies can lead to several problems. Without regular hands-on practice, operators lose the manual and cognitive skills needed to control processes and diagnose issues. When asked to rapidly take over from automated systems, they lack an understanding of the current system state. Tasks like monitoring for unlikely failures exceed human vigilance capabilities. And as their roles erode, operators experience stress and decreased motivation.
An Example from Autonomous Vehicles
In March 2018, an Uber autonomous test vehicle struck and killed a pedestrian named Elaine Herzberg in Tempe, Arizona. The vehicle was in self-driving mode at night when its LIDAR and radar sensors detected Herzberg wheeling a bicycle across the road midblock. However, its software failed to classify her as a pedestrian requiring emergency braking. The vehicle continued straight and fatally struck her at 38 mph.
Meanwhile, the human safety driver in the vehicle, Rafaela Vasquez, was found to have been streaming a TV show on her phone at the time, distracted from monitoring the road. An onboard camera showed her looking down at her phone frequently; she glanced up at the road less than a second before the crash.
This tragic event illustrates several core ironies Bainbridge outlined with automation:
Humans are seen as unreliable monitors, yet still assigned monitoring tasks: Uber's system was supposed to perform safety-critical functions more reliably than a human by perfectly attending to the road. Yet a human still had to babysit it in case it failed.
Automated systems take over functions at which humans are poor, yet humans still have to take over those functions in an emergency: Uber's system was designed to avoid collisions automatically because humans are poor at rapid emergency reactions. But when the system failed to execute an emergency manoeuvre, the distracted, likely unprepared human operator was still responsible for rapidly taking over to avoid the crash. A tragic catch-22.
More automation should reduce human work, yet it creates unintended new burdens: In principle, Uber's automated system was supposed to reduce the labour of driving for the safety driver. Instead it created new, demanding tasks: sustained vigilance and constant readiness to take over, the very conditions that contributed to Vasquez's distraction.
At every level of this event, the vehicle automation failed to fulfil its promise and instead backfired, with perfect irony and deadly consequences. It powerfully illustrates Bainbridge's main points, and the lessons remain salient for the AV industry and regulators today.
What Can We Do?
There are several areas we need to focus on to reduce the burdens automation creates, and as UX professionals we can play a crucial role in addressing them.
Monitoring Automated Systems
Research has repeatedly shown that humans are poor at monitoring for rare events. As a result, we need to ensure that the design of automated systems supports operators in this task. A few suggestions are discussed below:
Provide alarm systems, potentially with multiple layers ("alarms on alarms") to notify operators of abnormal conditions. As discussed above, monitoring rare events exceeds human vigilance capabilities over time. In addition, systems should pinpoint the exact issue requiring operator attention.
Display automated system target values and tolerance bands directly on operators' screens, so operators are always aware of expected performance. A sketch of both of these ideas follows this list.
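As a rough illustration, here is a minimal Python sketch of these two ideas together: each monitored variable carries a target value and tolerance band that can be shown on screen, and deviations escalate through two alarm layers that pinpoint the exact variable needing attention. The variable names, set-points, and margins are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ToleranceBand:
    """Expected operating envelope for one monitored variable."""
    target: float       # set-point the automation is holding
    warn_margin: float  # deviation that triggers a first-layer warning
    trip_margin: float  # deviation that triggers an escalated alarm

def classify(value: float, band: ToleranceBand) -> str:
    """Return "ok", "warning", or "alarm" for a reading."""
    deviation = abs(value - band.target)
    if deviation >= band.trip_margin:
        return "alarm"    # second layer: demands operator action now
    if deviation >= band.warn_margin:
        return "warning"  # first layer: draws attention early
    return "ok"

# Hypothetical process variables; values are illustrative only.
bands = {
    "reactor_temp_c": ToleranceBand(target=310.0, warn_margin=5.0, trip_margin=12.0),
    "coolant_flow_lps": ToleranceBand(target=42.0, warn_margin=2.0, trip_margin=6.0),
}
readings = {"reactor_temp_c": 318.5, "coolant_flow_lps": 41.2}

for name, value in readings.items():
    band = bands[name]
    status = classify(value, band)
    # Show target and band alongside the live value, and name the
    # exact variable at fault rather than raising a generic alarm.
    print(f"{name}: {value} (target {band.target} +/- {band.warn_margin}) -> {status.upper()}")
```

The point is the structure, not the numbers: the band that drives the alarm is the same band the operator sees, so the display and the alarm logic cannot drift apart.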
Maintaining Manual and Cognitive Skills
Automation can lead to users losing some of the key manual and cognitive skills needed to perform specific tasks. Some ways to prevent this:
Allow operators brief periods of hands-on manual control during each shift. This maintains their muscle memory and skills in case of emergencies.
Training should focus on building diagnostic skills, not just basic fault recognition. Simulate rarely-encountered failure events so operators have mental models to draw on. Practice investigating causes of unplanned, unfamiliar issues and choosing appropriate corrective tactics.
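To make the simulation idea slightly more concrete, here is a small Python sketch of a training scheduler that deliberately over-samples rare failure scenarios instead of presenting them at their real-world frequency. The scenario names and base rates are invented for the example.

```python
import random

# Hypothetical failure scenarios mapped to their (rare) live base rates.
SCENARIOS = {
    "sensor_dropout": 0.02,
    "actuator_stuck": 0.005,
    "conflicting_alarms": 0.001,
    "silent_mode_reversion": 0.0005,
}

def training_schedule(n_sessions: int, seed: int = 0) -> list[str]:
    """Draw scenarios uniformly so that even the rarest failures appear
    often enough for operators to build mental models of them."""
    rng = random.Random(seed)
    return [rng.choice(list(SCENARIOS)) for _ in range(n_sessions)]

for session, scenario in enumerate(training_schedule(6), start=1):
    # Each session pairs the simulated failure with a debrief on the
    # diagnosis steps taken and the corrective tactics chosen.
    print(f"Session {session}: simulate '{scenario}' "
          f"(live base rate ~{SCENARIOS[scenario]:.2%}), then debrief")
```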
Interface Design
Minimise shifts in the types of information displayed, to avoid confusing operators with constant display mode changes. But also avoid over-compatibility: smoothing away so much complexity that operators cannot deepen their knowledge of the system.
Design effective system status displays, alarms, and notifications. This requires research to better understand the context of use, including the users' mental models and the abnormal and emergency situations operators must handle.
Conducting research and understanding the limits of human cognition and ability are key to improving automation design and accounting for the automation ironies.
What About Language Model Automation?
The dynamics discussed around autonomous vehicles can also manifest in the automation of language understanding and generation through large language models (LLMs). As LLMs grow more skilled and fluent with text, human collaborators do less of the writing and evaluating themselves and can experience deskilling in both. Meanwhile, LLM complexity reaches levels opaque to human analysis, yet the models can still elicit blind trust. However, more research is required to better understand how the automation ironies apply to LLMs; we can cover this in a future article.
The irony of automation reminds us that automation fundamentally changes the human role in complex ways we may fail to anticipate. As Peter Drucker wrote in 1967, “We are becoming aware that the major questions regarding technology are not technical but human questions”. As UX professionals, we play a vital role in addressing these challenges by designing systems that keep human factors at the forefront. By doing so, we can create automation that is not only efficient but also human-centric, safe, and reliable.