News release
The psychology of self-driving cars: why the technology doesn’t suit human brains
Self-driving car features expose drivers’ struggles to oversee the technology, offering critical lessons for the safer adoption of automation across society.
Cars with self-driving features promise a safer and more convenient future. But there’s a problem: human brains weren’t designed for the strange new role these vehicles demand of us.
According to Professor of Engineering Psychology Ronald McLeod, cars with autonomous features place unprecedented psychological demands on drivers – demands we are currently drastically unprepared for. McLeod is a world-renowned specialist in Human Factors, the discipline of analysing and understanding how humans interact with autonomous systems, from industrial machines to aircraft.
In his book Transitioning to Autonomy, Professor McLeod draws on decades of research into how humans interact with automated systems. But it was his personal experience buying a new car with autonomous features that really opened his eyes to the scale of the problem.
“I was handed the keys with no training whatsoever and let loose into Glasgow rush-hour traffic,” he recalls. “No research ethics committee would ever allow such an experiment, yet this is happening to drivers every day around the world.”
Most cars manufactured today include at least some driver-support technology, and some driving-assistance features are now mandatory in new cars, with the aim of reducing accidents caused by human error. These include lane-keeping assistance that holds the car in its lane without steering input, automatic braking, and road-sign recognition that checks you are driving at the correct speed.
The driver’s dilemma
The issue lies in a fundamental shift in thinking that most of us would not recognise. When autonomous features engage, drivers do not simply become passengers – they become something far more challenging: supervisory controllers. Instead of actively steering and accelerating, they must monitor the system’s ongoing performance and stand ready to intervene at a moment’s notice.
This creates what psychologists call a “vigilance task” – maintaining attention during periods of low activity – and it is something humans are notoriously bad at.
“We’re not capable of consciously paying continuous attention for more than relatively short periods,” Professor McLeod explains. “Yet that’s exactly what these systems expect of us.”
In many ways the cognitive load is actually higher than in manual driving. Drivers must maintain a mental model of what the car is doing, what it is capable of and where its limits lie; assess whether the car is aware of hazards the driver can see; and make split-second decisions about when to intervene – all while having minimal direct engagement with the physical driving task.
These challenges are exacerbated by variability in people’s abilities, influenced by factors such as age, experience, personality and fatigue.
When trust breaks down
Professor McLeod describes the unsettling experience of approaching slow-moving traffic while his car maintained speed.
“I could see the hazard ahead, but had no way of knowing if the vehicle was aware of it,” he says. “That uncertainty creates anxiety – how long should I wait before taking control?”
He uses this example to highlight a crucial design flaw: current interfaces in modern cars often fail to communicate the system’s ‘awareness’ to drivers. Critical information about the car’s mode, capabilities and limitations is often buried in dense user manuals or poorly displayed on dashboards.
“Automation changes the role of the people involved. New technology with no training, or even no warning, leaves humans guessing and often failing to adapt – which can cause safety incidents. It is not sufficient simply to rely on a driver’s experience of driving manually as the basis for supervising an autonomous system: the roles are fundamentally different.”
The changes required
It isn’t just cars and drivers – the relationship between autonomous systems and their human supervisors is changing across aviation, industry and beyond at a staggering pace.
Professor McLeod uses his personal experience of switching to a car with self-driving features to highlight broader psychological challenges in human supervisory control that are systemic across industry and society.
Professor McLeod draws parallels with incidents including the Boeing 737 MAX crashes, maritime accidents, nuclear power failures, railway mishaps, and even Wimbledon’s electronic line-calling system. He uses these examples to underscore a recurring issue: insufficient attention by designers, regulators, and users of automated systems to the psychological demands placed on humans tasked with monitoring and intervening.
“There’s a seeming inability among organisations to recognise how new systems change the demands placed on humans,” Professor McLeod suggests.
The solution isn’t to abandon autonomous vehicles, he says, but to fundamentally rethink how we introduce them. Professor McLeod advocates simulation-based training, clearer interfaces that communicate system status, and potentially updated driving tests that assess supervisory control skills.
“We need to treat the psychological dimension as seriously as hardware and software engineering,” he argues. “Autonomous features cannot be considered a ‘bolt-on’ – the entire user experience must be redesigned around the new cognitive demands.
“Without a fundamental change in mindset, there’s going to be a great deal of suffering and misery before the safer world we’re promised is realised.”