The end of humans as car drivers?
The announcement last week that driverless cars could be allowed on UK motorways by the end of 2021 is another step towards AI-powered vehicles and the gradual reduction of human input in car travel.
“10 million self-driving cars will be on the road by 2020,” announced a Business Insider article back in 2016.
However, the reality in 2021 is that there is still a long way to go before we can sit back and let the car do the hard work. In the UK, the move is likely to be confined to automated lane-keeping systems and allowed only when traffic is moving slowly.
Part of the issue is that the necessary technology is not yet fully developed. Part of it is human reluctance, after a century of using cars, to hand over control to a computer.
Simply put, can you entrust the safety of your loved ones, yourself, and other road users to Artificial Intelligence (AI)?
Cars can be lethal, and the question of what a driverless car will do in a split-second life or death decision is fundamental.
The Moral Machine?
This fundamental question informed our research and the development of the Moral Machine website.
The website uses gamification to crowdsource people’s decisions based on the trolley problem (the thought experiment in ethics and psychology about whether to sacrifice one person to save a larger number). To date, millions of people in every country in the world have logged over 40 million decisions via the website, making it one of the largest studies ever done on global moral preferences.
Our objective was to understand people’s decisions on how driverless cars could prioritise different lives in the event of a collision. We tested nine different situations – should a driverless car prioritise:
- humans over pets
- passengers over pedestrians
- more lives over fewer
- women over men
- young over old
- fit over large
- higher social status over lower
- law-abiders over law-benders
- should the car swerve (take action) or stay on course (inaction)?
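At its core, aggregating this kind of crowdsourced data comes down to tallying binary choices along each attribute dimension. The sketch below is purely illustrative, assuming a simplified data structure of our own invention (the dimension names and record format are not the Moral Machine’s actual code or schema):

```python
from collections import Counter

# Each recorded decision pits two groups against each other along one
# attribute dimension (e.g. "humans vs pets") and notes which side the
# respondent chose to spare. This structure is a hypothetical stand-in.
decisions = [
    ("humans_vs_pets", "humans"),
    ("humans_vs_pets", "humans"),
    ("more_vs_fewer", "more"),
    ("young_vs_old", "young"),
    ("young_vs_old", "old"),
]

def preference_rates(decisions):
    """Return, per dimension, the share of respondents sparing the first option."""
    totals, firsts = Counter(), Counter()
    for dimension, spared in decisions:
        totals[dimension] += 1
        # The first-named option in the dimension label, e.g. "humans".
        first_option = dimension.split("_vs_")[0]
        if spared == first_option:
            firsts[dimension] += 1
    return {d: firsts[d] / totals[d] for d in totals}

print(preference_rates(decisions))
# e.g. {'humans_vs_pets': 1.0, 'more_vs_fewer': 1.0, 'young_vs_old': 0.5}
```

With millions of such records, rates well above 0.5 on a dimension indicate a shared preference (as found for sparing humans, more lives, and the young), and comparing rates across countries reveals the cultural variation discussed below.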
The results show that moral decisions vary considerably across countries. For example, people in countries with weaker institutions are more tolerant of jaywalkers, relative to pedestrians who cross legally, than people in countries with stronger institutions. In countries with high levels of economic inequality, people show larger gaps between the treatment of individuals with high and low social status.
However, there were some common themes. People tended to prioritise three of the nine attributes:
- a preference to spare humans over pets
- to spare more lives over fewer lives
- to spare younger humans over older humans.
How do we act on these results?
First, we are very clear that the results are not simply a guide to motor manufacturers on what decisions their computers and AI systems should follow. The results show people’s views and prejudices. Just because people report certain preferences doesn’t mean those preferences make for wise or fair policy. As humans we are inherently biased, and some of the preferences are worrying, such as the fairly strong preference to spare a higher-status person at the cost of a lower-status person.
Second, the results do serve to illustrate the problem that there is no one overriding moral template that all driverless cars can follow. The programming of a driverless car in China may differ from one in the United States, or between the UK and France. This presents a whole conundrum of potential problems – imagine taking your car from one country to another. Would it be licensed? What about insurance? Who would really be liable in the event of a crash?
Third, this leads to the discussion of whether we need to develop regional or even global standards for AI. Driverless cars are just one example of AI, which will become ever more pervasive in our home, social and professional lives. AI can’t be kept neatly within borders, and there is a need for a serious discussion about the potential opportunities and challenges – at a global level.
Fourth, the research shows that there is a need for greater transparency around AI. We need to consider fully the moral and ethical dimensions, exploring further how all of us will react to the ethics of different design and policy decisions in AI applications. AI, after all, can heavily reflect the biases of the people developing it. With something as potentially dangerous as a car, we need to be open about how these systems are designed, and consider what regulations are needed to guide their safe and responsible development.
There is no doubt that AI is an exciting development and, from a technical perspective, there is something amazing about a driverless car. However, the moral dimensions need far more focus if we are to truly embrace this new technology and see the day when we can safely sit behind the wheel and be taken for a drive.
Dr Edmond Awad is a Lecturer at the University of Exeter Business School (Department of Economics) and the Institute for Data Science and Artificial Intelligence.