
EXPERT REACTION: Moral decisions for driverless cars

In a scene right out of 'The Good Place', researchers have asked millions of people across the world what they think a driverless car should do in the face of an unavoidable accident. Each scenario required making choices between various combinations of saving passengers or pedestrians, and the researchers identified a number of shared moral preferences. These included sparing the greatest number of lives, prioritising young people and valuing humans over other animals.

Journal/conference: Nature

Link to research (DOI): 10.1038/s41586-018-0637-6

Organisation/s: Massachusetts Institute of Technology, USA

Funder: Ethics and Governance of Artificial Intelligence Fund, ANR-Labex Institute for Advanced Study in Toulouse.

Media Release

From: Springer Nature

Artificial Intelligence: Navigating the moral rules of the road

Global moral preferences for how driverless cars should decide who to spare in unavoidable accidents are reported in a paper published online this week in Nature. The findings, based on almost 40 million decisions collected from participants across the globe, may inform discussions around the future development of socially acceptable AI ethics.

Driverless vehicles will need to navigate not only the road, but also the moral dilemmas posed by unavoidable accidents. Ethical rules will be needed to guide AI systems in these situations; however, if self-driving vehicle usage is to be embraced, it must be determined which ethical rules the public will consider palatable.

Iyad Rahwan and colleagues created the Moral Machine — a large-scale online survey designed to explore the moral preferences of citizens worldwide. The experiment presents participants with unavoidable accident scenarios involving a driverless car on a two-lane road. In each scenario, which imperils various combinations of pedestrians and passengers, the car can either remain on its original course or swerve into the other lane. Participants must decide which course the car should take on the basis of which lives it would spare. The experiment has recorded almost 40 million such decisions.
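
The structure of each dilemma is straightforward to represent. The sketch below is purely illustrative (it is not the Moral Machine's actual code or data schema, and all names and values are invented); it shows how one such scenario and a participant's decision might be encoded in Python:

    # Illustrative sketch only -- not the Moral Machine's actual code or schema.
    # One two-lane dilemma: the car either stays on course or swerves, and each
    # option imperils a different group of characters.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Character:
        species: str  # "human" or "pet"
        age: str      # e.g. "child", "adult", "elderly"
        role: str     # "pedestrian" or "passenger"

    @dataclass
    class Scenario:
        stay_victims: List[Character]    # killed if the car stays on course
        swerve_victims: List[Character]  # killed if the car swerves

    # A hypothetical dilemma: stay and kill two pedestrians,
    # or swerve and kill the single passenger.
    scenario = Scenario(
        stay_victims=[Character("human", "child", "pedestrian"),
                      Character("human", "adult", "pedestrian")],
        swerve_victims=[Character("human", "adult", "passenger")],
    )

    # A participant's decision is simply which course they judge the car
    # should take; millions of such choices make up the dataset.
    decision = "swerve"  # i.e. spare the two pedestrians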

The authors identify many generally shared moral preferences, including sparing the largest number of lives, prioritizing the young, and valuing humans over animals. They also identify ethics that vary between different cultures. For example, participants from countries in Central and South America, as well as France and its former and current overseas territories, exhibit a strong preference for sparing women and athletic individuals. Participants from countries with greater income inequality are more likely to take social status into account when deciding who to spare.

Before we allow our cars to make ethical decisions, the authors conclude, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.

Attachments:

Note: Not all attachments are visible to the general public

  • Springer Nature
    Web page
    Please link to the article in online versions of your report (the URL will go live after the embargo ends)
  • Massachusetts Institute of Technology, USA
    Web page
    Link to The Moral Machine online test

Expert Reaction

These comments have been collated by the Science Media Centre to provide a variety of expert perspectives on this issue. Feel free to use these quotes in your stories. Views expressed are the personal opinions of the experts named. They do not represent the views of the SMC or any other organisation unless specifically stated.

Distinguished Professor Mary-Anne Williams is Director, Disruptive Innovation at the Office of the Provost at the University of Technology Sydney (UTS)

The design of autonomous vehicles' decisions and actions on our roads in the future will be shaped by public acceptance, consumer demand, profit maximisation, law, and the ability to enforce it.

The authors highlight that there are many nuances they were unable to consider, such as the uncertainty of identifying people accurately (e.g. someone with their back to the car's cameras), or the uncertainty involved in predicting how an accident would unfold (e.g. undetected oil on a wet road).

There are many other aspects that will need to be considered. In order to minimise liability, car companies may design cars that slow down in wealthy neighbourhoods, or that kill humans rather than cause more expensive serious injuries. Since AI algorithms today cannot provide sufficient detail to explain their behaviour, it would be difficult to prove that cars are taking actions to kill people in order to reduce legal expenses.

It is not difficult to imagine the segmentation of the autonomous car market: cars that always sacrifice the passengers might sell for 10 per cent of the price of cars that preserve them. Wealthy people may be happy to subsidise the technology to obtain guarantees of protection. One can imagine a new insurance industry built on the need to service people who can pay for personal security on the roads and as pedestrians: a subscription service that prioritises life according to the magnitude of premiums. This service might provide scope for a new State Government lottery opportunity as well.

Who will have access to the data in an autonomous vehicle's 'black box'? Will loved ones have the right to know all the autonomous car's decisions? Will autonomous cars negotiate the outcome of multi-vehicle accidents? How will they resolve their inconsistent human-life-preserving strategies during an accident? Without coordination, many more people may die unnecessarily.

AI is fuelling the creation and transformation of industries, and it requires the rapid development, at scale, of new expertise able to maximise the benefits of AI fairly while minimising the risks for all in Australia.

Last updated: 26 Oct 2018 4:18pm
Declared conflicts of interest:
None declared.
Hussein Dia is Professor of Future Urban Mobility at Swinburne University of Technology

The real value of this research will be in starting a global conversation about how we want autonomous vehicles to make ethical decisions in the case of unavoidable accidents. The work not only provides fascinating insights into the moral preferences and societal expectations that should guide autonomous vehicle behaviour, it also sets out to establish how these preferences can contribute to developing global, socially acceptable principles for machine ethics.

The findings, based on almost 40 million decisions collected through an online platform (Moral Machine) from people in 233 countries and territories, showed strong preferences for sparing humans over animals, sparing more lives, and sparing young lives. While these preferences appear to be essential building blocks for machine ethics, the authors found they were not compatible with the first and only attempt to provide official guidelines on autonomous vehicle ethics, proposed in 2017 by the German Ethics Commission on Automated and Connected Driving. For example, they found that the German rules do not take a clear stance on whether and when autonomous vehicles should be programmed to sacrifice the few to spare the many. The same German rules state that any distinction based on personal features, such as age, should be prohibited. This clearly clashes with the strong preference for sparing the young (such as children) that was assessed through the Moral Machine.

Another interesting aspect of this research is the cultural clustering of results, where the analysis identified three distinct ‘moral clusters’ of countries (West, East and South) where each cluster shared common moral preferences for how autonomous vehicles should behave. The analysis showed some striking peculiarities such as the less pronounced preference to spare younger people in the Eastern cluster; a much weaker preference in the Southern cluster for sparing humans over pets; a stronger preference in the Eastern cluster for sparing higher status people; and a stronger preference in the Southern cluster for saving women and fit people.
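
As a rough illustration of how such clusters can emerge from the data (the authors' actual analysis is more sophisticated, and the preference scores below are invented), countries can be grouped by their vectors of preference strengths with standard hierarchical clustering:

    # Illustrative sketch, not the paper's analysis pipeline. Hierarchical
    # clustering groups countries by (made-up) moral-preference vectors:
    # [spare_young, spare_humans_over_pets, spare_higher_status]
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    countries = ["AU", "JP", "FR", "BR", "US", "CN"]
    prefs = np.array([
        [0.75, 0.90, 0.20],  # AU
        [0.40, 0.85, 0.45],  # JP
        [0.80, 0.88, 0.25],  # FR
        [0.70, 0.60, 0.30],  # BR
        [0.78, 0.92, 0.22],  # US
        [0.45, 0.83, 0.50],  # CN
    ])

    # Ward linkage on the preference vectors, then cut the tree into three
    # clusters, loosely mirroring the paper's West/East/South grouping.
    Z = linkage(prefs, method="ward")
    labels = fcluster(Z, t=3, criterion="maxclust")
    for country, label in zip(countries, labels):
        print(country, "-> cluster", label)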

These findings clearly demonstrate the challenges of developing globally uniform ethical principles for autonomous vehicles. However rare these unavoidable accidents may be, the principles governing them should not be dictated to us by commercial interests; we (the public) need to agree beforehand on how they should be addressed, and convey our preferences to the companies that will design moral algorithms and to the policymakers who will regulate them.

Last updated: 26 Oct 2018 4:20pm
Declared conflicts of interest:
None declared.
Toby Walsh is a Scientia Professor of AI at The University of New South Wales (UNSW) and Adjunct Fellow at Data 61

This is an interesting and provocative study on how people might behave ethically. You should, however, treat the conclusions with immense caution: how people say they will behave is not necessarily how they will actually behave in the heat of the moment.

I completed their survey and deliberately tried to see what happened if I killed as many people as possible. As far as I know, they didn't filter out my killer results.

Also, whilst such studies tell us about people's attitudes, they do not tell us how autonomous cars should drive. The values we give machines should not be some blurred average of a particular country or countries. In fact, we should hold machines to higher ethical standards than humans for many reasons: because we can, because this is the only way humans will trust them, because they have none of our human weaknesses, and because they will sense the world more precisely and respond more quickly than humans possibly can.

In time, we will welcome the arrival of autonomous vehicles: the 1000 or so road deaths every year in Australia will drop precipitously when we have autonomous vehicles, and mobility will be given to the young, the elderly and the disabled that many of us take for granted.

Last updated: 26 Oct 2018 4:19pm
Declared conflicts of interest:
None declared.
Dr Michael Harre is a Senior Lecturer in Complex Systems in the Faculty of Engineering and IT at The University of Sydney

The political, moral, and ethnic diversity we see in the world makes it difficult to understand who we are collectively and consequently what aspects of ourselves we want our future AIs to embody. Our cultures are made up of many individuals making a large number of difficult decisions and these decisions are influenced by their context.

These contexts include the perceptual, interpersonal, and moral. Recent advances in psychology and neuroscience have given us incredible insights into the individual aspects of our decision-making, but what is lacking is an understanding of how our cultural complexity emerges from the diversity of our individual behaviour.

The analysis in this study highlights the moral diversity across many different groups and is a vital piece in understanding what we want embodied in future AIs. These are very complex issues, and they are not a one-way street: questions about artificial intelligence are pushing us to ask and answer important questions about ourselves and how we relate to one another.

Last updated: 24 Oct 2018 3:14pm
Declared conflicts of interest:
None declared.
James Harland is a Professor in Computational Logic at RMIT University

The behaviour of driverless cars will mirror our choices as a society. What would you do when faced with a seemingly impossible choice between crashing your car or running over someone's pet?

Such impossible choices may need to be made by driverless cars, but it is likely that humans in the same situation will make choices that are no better, and sometimes worse. It is certainly true that ethical questions such as this will be increasingly important as technology advances.

It should also be noted that driverless cars may have more 'herd intelligence' available than a human driver, such as statistics showing that a particular area has a higher incidence of collisions, and may hence slow down or take other precautionary action well before the impossible choice arises.
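
A purely hypothetical sketch of that idea (the area names, collision rates and thresholds below are all invented) might look like this:

    # Hypothetical 'herd intelligence' sketch: use fleet-wide collision
    # statistics for an area to take precautions well before any dilemma
    # can arise. All data and thresholds are invented for illustration.
    AREA_COLLISION_RATE = {  # collisions per million vehicle-km (made up)
        "school_zone_7": 4.2,
        "highway_12": 0.3,
    }

    def precautionary_speed(area: str, posted_limit_kmh: float) -> float:
        """Reduce speed in areas the shared statistics flag as high-risk."""
        rate = AREA_COLLISION_RATE.get(area, 0.0)
        if rate > 2.0:                     # assumed high-risk threshold
            return posted_limit_kmh * 0.7  # slow down well in advance
        return posted_limit_kmh

    print(precautionary_speed("school_zone_7", 50))  # 35.0
    print(precautionary_speed("highway_12", 100))    # 100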

Last updated: 24 Oct 2018 3:13pm
Declared conflicts of interest:
None declared.
Associate Professor Iain MacGill is from the School of Electrical Engineering and Telecommunications at the University of New South Wales (UNSW)

These are questions that we have been asking engineering students at UNSW Sydney to consider over recent years, as part of a strategic leadership and ethics course. Utilitarian, virtue, duty and rights ethical frameworks all suggest broadly similar outcomes to those of this survey: autonomous vehicles should seek to protect the greatest number and the most vulnerable, albeit with some inevitable value judgements about the relative worth of different types of road users.

However, even with sufficient societal consensus on what we would like these vehicles to do in the case of unavoidable accidents, we still face the challenge of shaping the rules these vehicles must follow (for example, trading off speed, and hence user convenience, against safety), as well as coding 'ethics' to determine how they choose in matters of life and death. And then there is the challenge of persuading people to buy vehicles that explicitly put the safety of other road users at the same or perhaps even higher priority than their own, something that human drivers don't have to do.

It doesn't help that we have companies racing to bring these vehicles to market with what seems to be insufficient regard for the societal risks invariably involved with new technology deployment. And can we trust the companies driving this, some with significant questions about their own 'winner takes all' business ethics, to appropriately program socially agreed ethics into their products?

Last updated: 24 Oct 2018 3:12pm
Declared conflicts of interest:
None declared.
Dr Raymond Sheh is a Senior Lecturer at the Department of Computing, Curtin University and leads their Intelligent Robots Group.

This article is a crucial reminder that autonomous systems are increasingly making decisions that play into moral and ethical aspects of humanity, with increasingly little oversight. In order to be trusted with these decisions, such systems need to be transparent, explainable and accountable. But to what standard? Having an explainable and accountable autonomous system is no use if there is no agreed or accepted framework in which society considers these decisions. The work in this article forms a crucial first step in determining how society expects such systems to operate. 

Of course, as the authors also point out, this is still early work. As researchers in this area, we need to improve transparency and make it clear that our systems are making decisions that are sensitive to the ethical and moral standards of society. 
 
Our work in Trusted Autonomous Systems and Explainable AI at the Department of Computing, Curtin University, also plays a crucial role here, developing the next generation of autonomous systems that can explain their decisions, including when those decisions touch on such ethical and moral topics.

Last updated: 24 Oct 2018 3:10pm
Declared conflicts of interest:
None declared.
Professor Lin Padgham is from the School of Computer Science and Software Engineering at RMIT University

This research is interesting and clearly identifies some agreed principles that would be relatively straightforward to encode, such as the preference to save the lives of people over those of (other) animals. However, it is important to realise that the complex ethical and moral judgements required by some of the questions posed are not made by humans when confronted with these situations, and should not be expected of autonomous vehicles either. Nonetheless, understanding clear, agreed moral preferences may help in determining which 'reflexive' actions to build into autonomous vehicles, and these may well be different from those used by humans.

However, even when there is a clear preference, such as saving a larger number of lives, the action decision is likely to be complex due to the uncertainty of outcomes. Swerving to avoid a single pedestrian in a car with three passengers may well be the right course of action, because the pedestrian is far more vulnerable than the passengers in the car. This requires much more complex understanding than a rule about saving more lives; the closest sensible rule is possibly to always try to avoid hitting pedestrians.

The biggest gain from autonomous vehicles is likely to be the avoidance of accidents and loss of life, due to their potentially greater ability to notice all relevant information and respond to it fast. Inevitably there will sometimes be mistakes, but all evidence suggests they will be far fewer than those made by humans driving cars.
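
As a hypothetical sketch of the complexity described above (not any vehicle's actual logic; the probabilities and vulnerability weights are invented), a decision rule might minimise expected harm under uncertainty rather than simply count lives:

    # Hypothetical expected-harm rule: each option is a list of
    # (probability, people_affected, vulnerability_weight) outcomes.
    # All numbers are invented for illustration.
    def expected_harm(outcomes):
        return sum(p * count * weight for p, count, weight in outcomes)

    # Option A: stay on course toward a single pedestrian.
    # Pedestrians are highly vulnerable in a collision (weight 1.0).
    stay = [(0.9, 1, 1.0)]    # 90% chance of serious harm to the pedestrian

    # Option B: swerve with three passengers aboard.
    # Belted passengers in a modern car are far better protected (weight 0.1).
    swerve = [(0.5, 3, 0.1)]  # 50% chance of a crash harming the passengers

    choice = "swerve" if expected_harm(swerve) < expected_harm(stay) else "stay"
    print(choice)  # swerve: the passengers outnumber the pedestrian, yet the
                   # vulnerability weighting favours avoiding the pedestrian.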

Last updated: 24 Oct 2018 3:10pm
Declared conflicts of interest:
None declared.
Dr Zubair Baig is a Senior Lecturer in Cyber Security at Deakin University

The findings present a thorough understanding of the moral aspects of, and the effects of cultural variation on, ethical judgements, in this case for unavoidable accident circumstances.

With due consideration given to cultural dynamics, and to how people in various parts of the world conceive of an unavoidable accident situation in which one human cohort is saved over another (passengers vs. pedestrians), it is worth noting that the technology's independence from human reaction removes human guidance from the driverless vehicle's decision-making, and may thus hinder apt decision-making.
 
If a choice about whom to save during an unavoidable car accident were given to the driverless vehicle's passengers at all, the question that remains is: how likely is it that they would make the best choice in the microsecond or less available to do so?

Even with human-driven vehicles, panicked drivers subject to an unavoidable accident situation may step on the accelerator rather than on the brakes; consequently, fatalities may occur on both sides of the wheel (just one scenario of many).

Drivers of legacy vehicles are also not entirely trained in handling unavoidable accident situations. Likewise, the accuracy with which AI-driven 'driverless' vehicles favour passengers over pedestrians, or vice versa, based on the AI training received when the vehicle was programmed, may or may not be fool-proof, owing to the lack of explainable and customisable AI systems. This may change as policy evolves to facilitate or enforce the customisability of driverless vehicle AI systems.

Last updated: 24 Oct 2018 3:08pm
Declared conflicts of interest:
None declared.
Associate Professor Jay Katupitiya is from UNSW Mechanical and Manufacturing Engineering at the University of New South Wales (UNSW)

The raging debate on driverless cars, and the moral responsibility placed upon their creators, is clearly about the difficult decision-making process the creators will have to program into these machines to enable them to make a decision when the unthinkable is about to happen. The dreamed-of scenario is for this problem never to occur, i.e. to be able to declare that the cars simply do not collide. Right at the moment, not many want to believe that this will be possible.

To draw a parallel, what would we think if, in a court proceeding, a driver testified: 'I steered left because I could save a young person's life and I knew it would kill the frail old person, and it unfortunately did; that was the best I could do'?

In my opinion, programming these intentions is more immoral than not.

Currently, there is a randomness in the extremely minute 'unavoidable sphere', and programming that (or not programming it, for that matter) would keep us sane until our dream comes true!

The dream will come true when there is a long-lasting marriage between the transportation infrastructure and the autonomous vehicle. The data presented in the paper are vital for policymakers; however, the future is going to be substantially different once autonomous vehicles become an everyday part of our lives, because by then the majority of the circumstances considered in the paper will have been eliminated. To keep you thinking: today, at least in the developed world, there is no one on the railroad, is there? Then how could there be a collision? Well... only if someone violated the law!

Last updated: 24 Oct 2018 3:07pm
Declared conflicts of interest:
None declared.
Professor Hossein Sarrafzadeh, High Tech Research, Unitec

While the technical aspects of driverless cars have seen great advancement, the social aspects have not been studied well. Social scientists will certainly focus on the ethics of technology, including driverless cars, as we get closer to wider use of this technology in the next few years. The cultural aspects of driverless cars and other artificially intelligent systems, such as emotion recognition systems, have not been studied sufficiently either, and there is a great need for research in these areas globally and in New Zealand.

One aspect of driverless cars that is not taken into account in various studies of the social dimensions of this technology is the fact that future roads may not be the same roads we are using today. Even if we use similar roads, they will be heavily sensored, intelligent roads. They will certainly be much safer, although these ethical dilemmas will remain if the same roads are used. Future roads, I believe, will be different from what we have now. There may be no humans walking across the roads on which autonomous vehicles travel.

Last updated: 24 Oct 2018 1:22pm
Declared conflicts of interest:
None declared.
Associate Professor Colin Gavaghan, New Zealand Law Foundation Chair in Law & Emerging Technologies, Faculty of Law, University of Otago

These sorts of 'trolley problems' are philosophically fascinating but, until now, they've rarely been much of a concern for law. Most drivers will never have to face such a stark dilemma, and those who do will not have time to think through consequentialist and deontological ethics before swerving or braking! The law tends to be pretty forgiving of people who respond instinctively to sudden emergencies. The possibility of programming ethics into a driverless car, though, takes this to another level.

That being so, which ethics should we programme? And how much should that be dictated by majority views? Some of the preferences expressed in this research would be hard to square with our approaches to discrimination and equality – favouring lives on the basis of sex or income, for instance, really wouldn’t pass muster here.

Age is also a protected category, but the preference for saving young rather than old lives seems to be both fairly strong and almost universal. So should driverless ethics reflect this?

Even that preference seems likely to raise some hard questions. At what point does a ‘child’ cross the threshold to having a less ‘valuable’ life? 16? 18? Is an infant’s life more precious than a toddler’s? An 8-year-old's? Expressed like that, the prospect of building a preference for ‘young’ lives looks pretty challenging.

One preference that might be easier to understand and to accommodate is for the car to save as many lives as possible. Sometimes, that might mean ploughing ahead into the logging truck rather than swerving into the group of cyclists. Most of us might recognise that as the ‘right’ thing to do, but would we buy a car that sacrificed our lives – or the lives of our loved ones – for the good of the many?

Which brings us to the role of law in all this. Maybe it just shouldn’t be legal to buy a car that would discriminate on protected grounds, or that would sacrifice other people to preserve our own safety. But in that case, how many people would buy a driverless car at all?

What if we left it up to individual choice? Could driving a 'selfless' car come to be seen as an indication of virtue, like driving an electric car now? Would drivers of 'selfish' cars be marking themselves out in the opposite direction?

Maybe the biggest issue is this: over a million people die on the roads every year. Hundreds die in New Zealand alone. Driverless cars have the potential to reduce this dramatically. It’s important to think about these rare ‘dilemma’ cases, but getting too caught up with them might see us lose sight of the real, everyday safety gains that this technology can offer.

Last updated: 24 Oct 2018 10:25am
Declared conflicts of interest:
No conflict of interest.

News for:

International
