Trolled by the Trolley Problem: On What Matters for Ethical Decision Making in Automated Vehicles
CHI 2019, May 4–9, 2019, Glasgow, Scotland, UK

Trolled by the Trolley Problem: On What Matters for Ethical Decision Making in Automated Vehicles

Alexander G. Mirnig, University of Salzburg, Salzburg, Austria, [email protected]
Alexander Meschtscherjakov, University of Salzburg, Salzburg, Austria, [email protected]

Figure 1: The trolley problem is unsolvable by design. It always leads to fatal consequences.

ABSTRACT
Automated vehicles have to make decisions, such as driving maneuvers or rerouting, based on environment data and decision algorithms. There is a question whether ethical aspects should be considered in these algorithms. When all available decisions within a situation have fatal consequences, this leads to a dilemma. Contemporary discourse surrounding this issue is dominated by the trolley problem, a specific version of such a dilemma. Based on an outline of its origins, we discuss the trolley problem and its viability to help solve the questions regarding ethical decision making in automated vehicles. We show that the trolley problem serves several important functions but is an ill-suited benchmark for the success or failure of an automated algorithm. We argue that research and design should focus on avoiding trolley-like problems altogether rather than trying to solve an unsolvable dilemma, and we discuss alternative approaches to feasibly addressing ethical issues in automated agents.

CCS CONCEPTS
• Human-centered computing → HCI theory, concepts and models.

KEYWORDS
Automated Vehicles; Trolley Problem; Dilemma; Ethics.

ACM Reference Format:
Alexander G. Mirnig and Alexander Meschtscherjakov. 2019. Trolled by the Trolley Problem: On What Matters for Ethical Decision Making in Automated Vehicles. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland, UK. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3290605.3300739

© 2019 Association for Computing Machinery. ACM ISBN 978-1-4503-5970-2/19/05.

1 INTRODUCTION
As automotive technology grows in maturity, so do our expectations of said technology. Vehicles nowadays are not only improving on a mechanical level (being lighter, faster, or safer in case of an accident) but possess a variety of assistive functions as well, with some of them assisting with or even entirely performing individual driving tasks.

Challenges in (partially) automated driving¹ for HCI research and interaction design have been discussed lately (e.g., [8, 10, 28]). While there are also expectations related to added comfort and more efficient use of transit times [3, 19], the primary expectation in automated vehicle technology is related to safety [30]. By eliminating human error from the on-road equation, it is expected to eventually get rid of a – if not the – primary cause of on-road accidents today. After all, a machine will never get tired, be negatively influenced by medication or alcohol, or get distracted. Given sufficient computational power and input about the driving environment (be it via sensors or vehicle-infrastructure communication), the vehicle can also be expected to nearly always make the "right" decision in any known situation, as it might not be subject to misjudgement or incomplete information due to limited perception.

This raises an old question within a new context: What should a deterministic computer do when confronted with morally conflicting options? This is also known as an ethical dilemma, often presented as the so-called "trolley problem". In a nutshell, a trolley problem refers to a situation in which a vehicle can take a limited number of possible actions, all of which result in the loss of human life. This means that whichever action is taken, the vehicle has "decided" to take a human life, and in creating the vehicle, we have allowed a machine to take this authority over human life, which seems intuitively hard to accept.

In this paper, we argue that the trolley problem serves a very important social function, while also being a poor benchmark for the quality and/or performance of a machine's decision making process. We do this by providing background information on the family of dilemmas the trolley problem belongs to and the purpose of such dilemmas within philosophical discourses. We then outline what a solution to such a dilemma would entail and why such solutions are not suitable for decision making within the automated vehicle context. We introduce recommendations on how to treat the problem in relation to automated vehicles and argue that the trolley problem is unsolvable by design and that, thus, emphasis should be put on avoiding a trolley dilemma in the first place. Finally, we present a discussion of suitable benchmarks for automated vehicles that appropriately consider the legal and moral perspectives in a vehicle's decision making process in relation to the specific context, and we outline a design space that can be addressed by the HCI community.

In the tradition of, for example, Brown et al. [7], who raised provocative questions for ethical HCI research, we want to question whether the omnipresent trolley dilemma is the best way to deal with the subject of moral decisions of automated vehicles. We intend to help HCI community members (including individuals or bodies involved in regulation, developers, and interaction designers) build a nuanced understanding of the complexities of the trolley problem.

¹The SAE J3016 standard [34] distinguishes between six levels of driving automation, from level 0 (i.e., no automation) to level 5 (i.e., full automation). In this paper we address automated vehicles of level 3 (i.e., conditional automation) or higher, since from this level onward the vehicle is required to make its own decisions without any driver intervention, acting as an autonomous agent. Henceforth, we use the term 'autonomous' to highlight the capability of an agent (be it vehicle or human) to act independently. For a vehicle, that also includes the capability to transfer control to the driver, as is the case in SAE level 3.

2 RELATED WORK
With automated vehicle technology continuing to be on the rise, discussions about the trolley problem have also gained increasing attention in public media and discourse [2, 37]. In the scientific community, the modern incarnation of the trolley problem is often attributed to Philippa Foot and Judith Jarvis Thomson. The former presented the trolley problem as one example variant in her 1967 paper about abortion dilemmas [12], whereas the latter published a rigorous analysis specifically about the trolley problem [38].

Within automotive-related domains, such as robotics, software engineering, and HCI, the trolley problem has received significant attention, particularly in relation to automated vehicles (see, e.g., [33]). For example, in human-robot interaction, first steps towards developing the field of Moral HRI have been suggested to inform the design of social robots. Malle et al. [25] researched how people judge moral decisions of robots in comparison to human agents. They found that participants expected robots to follow a utilitarian choice more often than human agents. Above that, their research suggests that people blame robots more often than humans if the utilitarian choice is not taken.

Goodall [15] discussed ethical decision making of machines in the automated vehicle context, providing a basis for ethical discussions. The MIT Media Lab set up the "Moral Machine²", an online interactive database with different moral traffic dilemmas, where users can provide input on how the vehicle should respond in these situations and how they judge these situations from a moral perspective. Bonnefon et al. [5] pointed out a social dilemma when using the trolley problem as a means to inform decision making for autonomous vehicles. They found that participants would approve of utilitarian algorithms in automated vehicles in general, but when it comes to sacrificing their own lives, they would demand automated vehicles to protect the passengers' lives at all costs. Thus, participants were against enforcing utilitarian regulations, predicting they would not buy such vehicles. In conclusion, it is argued that a regulation for utilitarian algorithms may postpone the adoption of such vehicles, which in turn paradoxically decreases overall safety – one of the main arguments for the introduction of automated vehicles.

Frison et al. [14] came to a different conclusion. They used the trolley problem in a driving simulator study to evaluate […]

[…] discourse. Ximenes [40] provides three reasons not to program moral decisions into automated vehicles: (1) the fact that a moral machine can never explain the context in its entirety, and moral decisions are heavily dependent on these factors; (2) the impossibility of providing a metric across all nations and cultures; (3) the difficulty for humans to make complex moral judgments when the output is known.

In 2017, the Ethics Commission of the German Federal Ministry of Transport and Digital Infrastructure responded to the challenges posed by automated vehicle technology with a list of 20 rules for automated vehicles [11].

²http://moralmachine.mit.edu
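The dilemma structure described in the introduction can be made concrete with a short sketch. This is our own illustration, not code or a method from the paper, and all names in it are hypothetical: it shows that even a utilitarian rule that minimizes fatalities still "chooses" to take a life whenever every available action is fatal, which is exactly the sense in which the trolley problem is unsolvable by design.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    fatalities: int  # expected loss of human life if this action is taken

def is_dilemma(actions: list[Action]) -> bool:
    """A trolley-style dilemma: every available action costs at least one life."""
    return all(a.fatalities > 0 for a in actions)

def utilitarian_choice(actions: list[Action]) -> Action:
    """A utilitarian selection rule: pick the action with the fewest fatalities."""
    return min(actions, key=lambda a: a.fatalities)

# The classic two-track setup: stay the course (five die) or divert (one dies).
options = [Action("stay on track", 5), Action("divert to side track", 1)]

assert is_dilemma(options)          # no harmless option exists by construction
best = utilitarian_choice(options)  # the "least bad" action is still fatal
print(best.name, best.fatalities)
```

Note that the selection rule is beside the point: any rule applied to `options` returns an action with `fatalities > 0`, so "solving" the dilemma amounts to ranking fatal outcomes rather than avoiding them.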