Science / AAAS

EXPERT REACTION: Who to kill? An ethical dilemma for driverless cars


When it comes to autonomous cars, people generally approve of cars programmed to sacrifice their passengers to save others, but these same people are not enthusiastic about riding in such “utilitarian” vehicles themselves, a new US survey reveals. The authors say the results present new challenges for authorities in regulating the programming of driverless cars.

Journal/conference: Science

Organisation/s: University of Toulouse, France

Media Release

From: AAAS

When it comes to autonomous cars, people generally approve of cars programmed to sacrifice their passengers to save others, but these same people are not enthusiastic about riding in such “utilitarian” vehicles themselves, a new survey reveals. This inconsistency, which illustrates an inherent social tension between wanting the good of the individual and that of the public, persisted across a wide range of survey scenarios, revealing just how difficult it will be to make underlying programming decisions for autonomous cars – something that should be done well before these cars become a global commodity, the study’s authors note.

Autonomous vehicles, or AVs, have the potential to benefit the world by eliminating up to 90% of traffic accidents, but not all crashes will be avoided, and some crash scenarios will require AVs to make difficult ethical decisions. To begin to inform a collective discussion about the way AVs should make such decisions, Jean-François Bonnefon and colleagues conducted six online surveys of U.S. residents between June and November 2015, asking participants questions about how they would want their AVs to behave. The scenarios involved in the survey varied in the number of pedestrian and passenger lives that could be saved, among other factors. (An interactive website created by the authors allows individuals to explore and create such scenarios; see link below). Overall, participants said that AVs should be programmed to be utilitarian, but the same people also said they would prefer to buy cars that protected them and their passengers, especially if family members were involved. This suggests that if both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in the latter, the authors say, even though they would prefer others to do so. Bonnefon et al. note that regulation may be necessary, but, based on their survey results (which reveal that regulation could substantially delay AV adoption), they say it could also be counterproductive.
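To make the contrast concrete, the following is a minimal, hypothetical sketch (in Python; not taken from the paper or its interactive site) of the two programming policies the survey asked participants to compare: a "utilitarian" rule that minimises total deaths even at the passengers' expense, and a "self-protective" rule that always preserves the car's occupants. The class, function names and scenario numbers are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class CrashScenario:
    """A simplified dilemma: the car must either sacrifice its own occupants
    or stay its course and hit a group of pedestrians."""
    passengers: int    # occupants killed if the car sacrifices itself
    pedestrians: int   # pedestrians killed if the car stays its course

def utilitarian_choice(s: CrashScenario) -> str:
    # Minimise the total number of deaths, whoever they are.
    return "sacrifice passengers" if s.passengers < s.pedestrians else "stay course"

def self_protective_choice(s: CrashScenario) -> str:
    # Always protect the car's own occupants, regardless of pedestrian count.
    return "stay course"

if __name__ == "__main__":
    scenario = CrashScenario(passengers=1, pedestrians=10)   # illustrative numbers
    print("Utilitarian AV:     ", utilitarian_choice(scenario))
    print("Self-protective AV: ", self_protective_choice(scenario))

Under this toy scenario the utilitarian rule sacrifices one passenger to spare ten pedestrians, while the self-protective rule does not; the survey's central finding is that people endorse the first rule in the abstract but prefer to ride in cars governed by the second.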

In a related Perspective, Joshua D. Greene highlights additional challenges around programming driverless cars, including how manufacturers of utilitarian vehicles will be criticized for their willingness to kill their own passengers, while manufacturers of self-protective cars “will be criticized for devaluing the lives of others.” Determining just how to build ethical autonomous machines “is one of the thorniest challenges in artificial intelligence today,” Bonnefon and colleagues conclude. However, their data-driven approach highlights the way the field of experimental ethics can provide key insights in this space as more and more autonomous cars hit the road.

Attachments:

  • AAAS
    Web page
    The URL will go live after the embargo ends.

Expert Reaction

These comments have been collated by the Science Media Centre to provide a variety of expert perspectives and reflect independent opinion on this issue. Feel free to use these quotes in your stories. Views expressed are the personal opinions of the experts named. They do not represent the views of the SMC or any other organisation unless specifically stated.

Professor Toby Walsh is the Research Leader of the Optimisation Research Group at Data61

Studies like this are to be welcomed. They illustrate the challenging ethical dilemmas we face as AI systems like autonomous cars get fielded.

Given that driverless cars are less than a decade away, we need to work out, as a society, how we program such systems. Unlike in the past, when a driver who survived an accident could be brought before the courts for driving irresponsibly, we will now have to program behaviours into computers in advance that determine how they react in such situations.

I would, however, urge caution about the conclusions that can be drawn from studies like these, undertaken on Amazon Mechanical Turk, where participants are not themselves in any danger and have plenty of time to decide what the system should do. This may not reflect how we would, as drivers of cars, act in such moments of crisis.

Nevertheless, it is good to see such work, for the uptake of driverless cars will profoundly benefit society, greatly reducing road deaths and liberating groups such as the elderly and the disabled who are currently denied personal mobility.

Last updated: 03 Nov 2016 7:20pm

Hussein Dia is an Associate Professor in Transport Engineering at Swinburne University of Technology

The study sheds some light on the state of public sentiment on this ethical issue. It shows that aligning moral AI driving algorithms with human values is a major challenge – there is no easy answer!

What I found interesting in this research is that participants were reluctant to accept government regulation of utilitarian AVs – in fact, the surveys showed that participants would be less likely to consider purchasing an AV with such regulation than without. This, to me, is an even bigger challenge: (1) deciding whether governments should regulate the algorithms at all, and (2) determining what tests and procedures should be put in place to ensure that the algorithms are compliant. After all, these are very rare events, not routine situations; lacking a large set of examples, they are relatively resistant to training or programming.

We also need to recognise that mobility and travel (whether by car, train, bus, aircraft, etc.) are inherently risky and can never be completely safe. We take calculated risks every time we travel. With AVs, however, there seem to be some hyped expectations that they should be perfectly safe. They won’t be, and I believe the situation is not as complicated as it might appear. Plenty of ethical decisions are already being made in automotive engineering today. Inherent in airbags, for example, is the assumption that they are going to save a substantial number of lives and only kill a few. Some people have gone even further, suggesting that given the number of fatal traffic accidents involving human error today, it could be considered unethical to introduce self-driving technology too slowly. The biggest ethical question then becomes: how quickly should we move towards full automation, given that we have a technology that could potentially save a lot of people but is going to be imperfect and is going to kill a few? Should these vehicles be allowed on our roads, even if they make such mistakes?

It is difficult to speculate what this would mean for Australia, because the results are based on surveys of U.S. residents. I suspect that public sentiment here would be similar, though. We do need to engage with the public on this. I feel there is a leadership vacuum in this public policy space in Australia, and we need better engagement with the community to clarify the issues, concerns and expectations, and to lead in informing and shaping future policy in this space.

Last updated: 03 Nov 2016 4:49pm

Prof Hossein Sarrafzadeh is Director of the Centre of Computational Intelligence for Cyber Security and the High Tech Transdisciplinary Research Network at Unitec Institute of Technology

We do not yet have answers to the questions raised in this study and many similar questions in relation to driverless cars and other new technologies, for example 'the Internet of Things'.

This is an example of a global issue which is best studied using a transdisciplinary approach. More work needs to be done by social scientists, computer scientists, engineers, insurance companies and those involved in legislation. The issue raised by this paper is an important one but perhaps not the only one. We need to decide how much control we give to machines and as machines are used more extensively in our lives this question becomes more central.

The challenge raised in the paper is similar to the decision a military jet pilot would need to make when the plane is crashing and risks coming down in a residential area: whether or not to risk their own life by staying with the aircraft and flying it out of the residential zone. The possibility of such cases arising in aviation is not high. The same may be true of driverless cars.

Many automotive companies and universities have already started research aimed at studying this life-and-death challenge. Although driverless cars will need to substantially reduce the risk of accident situations such as those raised in this paper, such issues will need to be resolved before driverless cars become common on our roads. When we use artificial intelligence we are trusting a machine to make decisions for us; trading shares, driving cars and flying airplanes are examples of such cases.

Last updated: 03 Nov 2016 4:40pm

Assoc Prof Ian Yeoman, School of Management, Victoria University Wellington

There is a built-in trigger: we fear the unknown and don’t trust science, whatever the expert opinion and scientific studies say. This is one of the reasons why, when the Docklands Light Railway system was introduced in 1987, safety fears about automation meant each train carried a safety operator. Driverless trains already operate in many cities and can be seen at most international airports, connecting us between terminals. We already have autonomous pizza delivery systems.

Autonomous vehicles will reach a tipping point where advances in science, the economic arguments and the technology combine with a shift in human consciousness that ‘it is going to happen’ and ‘will happen’. First we will see a series of small steps. Watch out for Uber’s autonomous taxis in Pittsburgh, autonomous ships and autonomous cargo planes.

We always fear the future, but without science and advancement we would still be in the cave and the wheel would not have been invented.

Last updated: 03 Nov 2016 4:05pm
