Trolled by the Trolley Problem: On What Matters for Ethical Decision Making in Automated Vehicles


Trolled by the Trolley Problem: On What Matters for Ethical Decision Making in Automated Vehicles

CHI 2019, May 4–9, 2019, Glasgow, Scotland, UK

Alexander G. Mirnig ([email protected]) and Alexander Meschtscherjakov ([email protected]), University of Salzburg, Salzburg, Austria

Figure 1: The trolley problem is unsolvable by design. It always leads to fatal consequences.

ABSTRACT

Automated vehicles have to make decisions, such as driving maneuvers or rerouting, based on environment data and decision algorithms. There is a question whether ethical aspects should be considered in these algorithms. When all available decisions within a situation have fatal consequences, this leads to a dilemma. Contemporary discourse surrounding this issue is dominated by the trolley problem, a specific version of such a dilemma. Based on an outline of its origins, we discuss the trolley problem and its viability to help solve the questions regarding ethical decision making in automated vehicles. We show that the trolley problem serves several important functions but is an ill-suited benchmark for the success or failure of an automated algorithm. We argue that research and design should focus on avoiding trolley-like problems altogether rather than trying to solve an unsolvable dilemma, and we discuss alternative approaches for feasibly addressing ethical issues in automated agents.

CCS CONCEPTS

• Human-centered computing → HCI theory, concepts and models.

KEYWORDS

Automated Vehicles; Trolley Problem; Dilemma; Ethics.

ACM Reference Format:
Alexander G. Mirnig and Alexander Meschtscherjakov. 2019. Trolled by the Trolley Problem: On What Matters for Ethical Decision Making in Automated Vehicles. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland, UK. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3290605.3300739

© 2019 Association for Computing Machinery. ACM ISBN 978-1-4503-5970-2/19/05.

1 INTRODUCTION

As automotive technology grows in maturity, so do our expectations of said technology. Vehicles nowadays are not only improving on a mechanical level (being lighter, faster, or safer in case of an accident) but possess a variety of assistive functions as well, with some of them assisting with or even entirely performing individual driving tasks. Challenges in (partially) automated driving¹ for HCI research and interaction design have been discussed lately (e.g., [8, 10, 28]).

While there are also expectations related to added comfort and more efficient use of transit times [3, 19], the primary expectation in automated vehicle technology is related to safety [30]. By eliminating human error from the on-road equation, it is expected to eventually get rid of a – if not the – primary cause of on-road accidents today. After all, a machine will never get tired, be negatively influenced by medication or alcohol, or get distracted. Given sufficient computational power and input about the driving environment (be it via sensors or vehicle-infrastructure communication), the vehicle can also be expected to nearly always make the "right" decision in any known situation, as it might not be subject to misjudgement or incomplete information due to limited perception.

This raises an old question within a new context: What shall a deterministic computer do when confronted with morally conflicting options? This is also known as an ethical dilemma, often presented as the so-called "trolley problem". In a nutshell, a trolley problem refers to a situation in which a vehicle can take a limited number of possible actions, all of which result in the loss of human life. This means that whichever action is taken, the vehicle has "decided" to take a human life, and in creating the vehicle, we have allowed a machine to take this authority over human life, which seems intuitively hard to accept.

In this paper, we argue that the trolley problem serves a very important social function, while also being a poor benchmark for the quality and/or performance of a machine's decision making process. We do this by providing background information on the family of dilemmas the trolley problem belongs to and the purpose of such dilemmas within philosophical discourses. We then outline what a solution to such a dilemma would entail and why these are not suitable for decision making within the automated vehicle context. We introduce recommendations on how to treat the problem in relation to automated vehicles and argue that the trolley problem is unsolvable by design and that, thus, emphasis should be put on avoiding a trolley dilemma in the first place. Finally, we present a discussion of suitable benchmarks for automated vehicles that appropriately consider the legal and moral perspectives in a vehicle's decision making process in relation to the specific context and outline a design space that can be addressed by the HCI community.

In the tradition of, for example, Brown et al. [7], who raised provocative questions for ethical HCI research, we want to question whether the omnipresent trolley dilemma is the best way to deal with the subject of moral decisions of automated vehicles. We intend to help HCI community members (including individuals or bodies involved in regulation, developers, and interaction designers) build a nuanced understanding of the complexities of the trolley problem.

2 RELATED WORK

With automated vehicle technology continuing to be on the rise, discussions about the trolley problem have also gained increasing attention in public media and discourse [2, 37]. In the scientific community, the modern incarnation of the trolley problem is often attributed to Philippa Foot and Judith Jarvis Thomson. The former presented the trolley problem as one example variant in her 1967 paper about abortion dilemmas [12], whereas the latter published a rigorous analysis specifically about the trolley problem [38].

Within automotive-related domains, such as robotics, software engineering, and HCI, the trolley problem has received significant attention, particularly in relation to automated vehicles (see e.g., [33]). For example, in human-robot interaction, first steps towards developing the field of Moral HRI have been suggested to inform the design of social robots. Malle et al. [25] researched how people judge moral decisions of robots in comparison to human agents. They found that participants expected robots to follow a utilitarian choice more often than human agents. Above that, their research suggests that people blame robots more often than humans if the utilitarian choice was not taken.

Goodall [15] discussed ethical decision making of machines for the automated vehicle context, providing a basis for ethical discussions. The MIT Media Lab set up the "Moral Machine"², an online interactive database with different moral traffic dilemmas, where users can provide input on how the vehicle should respond in these situations and how they judge these situations from a moral perspective. Bonnefon et al. [5] pointed out a social dilemma when using the trolley problem as a means to inform decision making for autonomous vehicles. They found that participants would approve of utilitarian algorithms in automated vehicles in general, but when it comes to sacrificing their own lives, they would demand automated vehicles to protect the passengers' lives at all costs. Thus, participants were against enforcing utilitarian regulations, predicting they would not buy such vehicles. In conclusion, it is argued that a regulation for utilitarian algorithms may postpone the adoption of such vehicles, which in turn paradoxically decreases overall safety—one of the main arguments for the introduction of automated vehicles. Frison et al. [14] came to a different conclusion. They used the trolley problem in a driving simulator study to evaluate …

Ximenes [40] provides three reasons not to program moral decisions into automated vehicles: (1) the fact that a moral machine can never explain the context in its entirety, and moral decisions are heavily dependent on these factors; (2) the impossibility of providing a metric across all nations and cultures; (3) the difficulty for humans to make complex moral judgments when the output is known. In 2017, the Ethics Commission of the German Federal Ministry of Transport and Digital Infrastructure responded to the challenges posed by automated vehicle technology with a list of 20 rules for automated vehicles [11].

¹ The SAE J3016 standard [34] distinguishes between six levels of driving automation, from level 0 (i.e., no automation) to 5 (i.e., full automation). In this paper we address automated vehicles of level 3 (i.e., conditional automation) or higher, since the vehicle is then required to make its own decisions without any driver intervention, acting as an autonomous agent. Henceforth, we use the term "autonomous" to highlight the capability of an agent (be it vehicle or human) to act independently. For a vehicle, that also includes the capability to transfer control to the driver, as is the case in SAE level 3.

² http://moralmachine.mit.edu
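The paper's core claim can be made concrete with a short sketch. This is purely illustrative and not from the paper itself: the action names and casualty counts are hypothetical, and the "fewest fatalities" tie-breaking rule is just one possible policy. What it shows is why the authors call the dilemma unsolvable by design: when every available action is fatal, any deterministic decision procedure still has to select one of them.

```python
# Hypothetical sketch (not from the paper): a deterministic policy over a
# trolley-style situation in which every available action has fatal
# consequences. The policy picks the action with the fewest expected
# fatalities, but whatever rule is chosen, some fatal action is selected.

def choose_action(options):
    """Return the action name whose expected fatality count is lowest."""
    return min(options, key=lambda name: options[name])

# All actions in a trolley-style dilemma are fatal by construction.
dilemma = {"stay_on_track": 5, "divert_to_spur": 1}

action = choose_action(dilemma)
assert dilemma[action] > 0  # no available action avoids a fatality
print(action)  # -> divert_to_spur
```

Swapping in a different objective changes which fatal option is chosen, but not the fact that a fatal option is chosen; this is why the paper argues for avoiding such situations rather than trying to "solve" them in the decision algorithm.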
Recommended publications
  • The Normative Significance of Cognitive Science Reconsidered (The Philosophical Quarterly, 2019)
    THE NORMATIVE SIGNIFICANCE OF COGNITIVE SCIENCE RECONSIDERED. Dustin Locke, Claremont McKenna College. Forthcoming in The Philosophical Quarterly. Draft of 8/22/19. Please cite only with permission. Abstract: Josh Greene (2007) famously argued that his cognitive-scientific results undermine deontological moral theorizing. Greene is wrong about this: at best, his research has revealed that at least some characteristically deontological moral judgments are sensitive to factors that we deem morally irrelevant. This alone is not enough to undermine those judgments. However, cognitive science could someday tell us more: it could tell us that in forming those judgments, we treat certain factors as reasons to believe as we do. If we independently deem such factors to be morally irrelevant, such a result would undermine those judgments and any moral theorizing built upon them. This paper brings charity, clarity, and epistemological sophistication to debates surrounding empirical debunking arguments in ethics. Word count: 9,935. 1. INTRODUCTION: Josh Greene (2007) famously attempts to empirically debunk deontological moral theorizing.¹ Philosophers have been highly critical of Greene's arguments. The most widely cited critique by far is Selim Berker (2009).² Berker proceeds by canvassing several arguments Greene might be offering, refuting each along the way. Berker thus presents us with a challenge: since these arguments are no good, is there a better argument from cognitive-scientific results to Greene's moral-theoretic conclusion? Other critics present us with the same challenge.³ The present paper takes up this challenge. I spell out a plausible way in which cognitive-scientific results could undermine deontological moral theorizing.
  • The Pandemic Doesn't Run on Trolley Tracks: A Comment on Eyal's Essay "Beware the Trolley Zealots"
    Special Feature "Society after COVID-19", Sociologica, Vol. 14, No. 2 (2020). https://doi.org/10.6092/issn.1971-8853/11412. ISSN 1971-8853. The Pandemic Doesn't Run on Trolley Tracks: A Comment on Eyal's Essay "Beware the Trolley Zealots". Cansu Canca (AI Ethics Lab, Boston, United States; [email protected]; https://orcid.org/0000-0003-0121-3094). Submitted: July 26, 2020 – Accepted: July 27, 2020 – Published: September 18, 2020. Copyright © 2020 Cansu Canca; licensed under the Creative Commons BY License (https://creativecommons.org/licenses/by/4.0/). Abstract: The COVID-19 pandemic raises various ethical questions, one of which is the question of when and how countries should move from lockdown to reopening. In his paper "Beware the Trolley Zealots" (2020), Gil Eyal looks at this question, arguing against a trolley problem approach and utilitarian reasoning. In this commentary, I show that his position suffers from misunderstanding the proposed policies and the trolley problem and asserting moral conclusions without moral justifications. Keywords: Coronavirus; COVID-19; health policy; population-level bioethics; trolley problem; utilitarianism. Acknowledgements: For their very helpful comments and feedback, I thank Elena Esposito, Laura Haaber Ihle, Holger Spamann, and David Stark. 1 Introduction: The COVID-19 pandemic forces policy makers around the world to make ethically loaded decisions about the lives and well-being of individuals. One such decision is often presented as a choice between "saving lives and saving the economy" (Bazelon, 2020).
  • Consequentialism, Group Acts, and Trolleys
    Consequentialism, Group Acts, and Trolleys. Joseph Mendola. Abstract: Multiple-Act Consequentialism specifies that one should only defect from a group act with good consequences if one can achieve better consequences by the defecting act alone than the entire group act achieves. It also holds that when different valuable group agents of which one is part specify roles which conflict, one should follow the role in the group act with more valuable consequences. This paper develops Multiple-Act Consequentialism as a solution to the Trolley Problem. Its relentless pursuit of the good provides act-consequentialism with one sort of intuitive ethical rationale. But more indirect forms of consequentialism promise more intuitive normative implications, for instance the evil of even beneficent murders. I favor a middle way which combines the intuitive rationale of act-consequentialism and the intuitive normative implications of the best indirect forms. Multiple-Act Consequentialism or "MAC" requires direct consequentialist evaluation of the options of group agents. It holds that one should only defect from a group act with good consequences if one can achieve better consequences by the defecting act alone than the entire group act achieves, and that when different beneficent group acts of which one is part specify roles which conflict, one should follow the role in the group act with better consequences. This paper develops MAC as a solution to the Trolley Problem. Section 1 concerns the relative advantages of direct and indirect consequentialisms. Section 2 develops MAC by a focus on competing conceptions of group agency. Section 3 applies MAC to the Trolley Problem.
  • Experimental Philosophy and Feminist Epistemology: Conflicts and Complements
    City University of New York (CUNY), CUNY Academic Works, All Dissertations, Theses, and Capstone Projects, 9-2018. Experimental Philosophy and Feminist Epistemology: Conflicts and Complements. Amanda Huminski, The Graduate Center, City University of New York. More information about this work at: https://academicworks.cuny.edu/gc_etds/2826. Contact: [email protected]. A dissertation submitted to the Graduate Faculty in Philosophy in partial fulfillment of the requirements for the degree of Doctor of Philosophy, The City University of New York, 2018. © 2018 Amanda Huminski. All Rights Reserved. This manuscript has been read and accepted for the Graduate Faculty in Philosophy in satisfaction of the dissertation requirement for the degree of Doctor of Philosophy. Linda Martín Alcoff, Chair of Examining Committee; Nickolas Pappas, Executive Officer. Supervisory Committee: Jesse Prinz, Miranda Fricker. ABSTRACT: Experimental Philosophy and Feminist Epistemology: Conflicts and Complements, by Amanda Huminski. Advisor: Jesse Prinz. The recent turn toward experimental philosophy, particularly in ethics and epistemology, might appear to be supported by feminist epistemology, insofar as experimental philosophy signifies a break from the tradition of primarily white, middle-class men attempting to draw universal claims from within the limits of their own experience and research.
  • The Trolley Problem and the Dropping of Atomic Bombs (Masahiro Morioka)
    Journal of Philosophy of Life, Vol. 7, No. 2 (August 2017): 316-337. The Trolley Problem and the Dropping of Atomic Bombs. Masahiro Morioka. Abstract: In this paper, the ethical and spiritual aspects of the trolley problem are discussed in connection with the dropping of atomic bombs on Hiroshima and Nagasaki. First, I show that the dropping of atomic bombs was a typical example of the events that contained the logic of the trolley problems in their decision-making processes and justifications. Second, I discuss five aspects of "the problem of the trolley problem" in detail: "Rarity," "Inevitability," "Safety Zone," "Possibility of Becoming a Victim," and "Lack of Perspective of the Dead Victims Who Were Deprived of Freedom of Choice." Third, I argue that those who talk about the trolley problem are automatically placed in the sphere of the expectation of response on the spiritual level. I hope that my contribution will shed light on the trolley problem from a very different angle, which has not been taken by our fellow philosophers. 1. Introduction: Brian Short writes a concise explanation of a standard type of the trolley problem in a recent issue of the LSA Magazine. You're standing next to a train track when you spot a locomotive approaching. Farther down the track are five people in the path of the train but too far away for you to shout a warning to them. A lever next to you would allow you to divert the train – saving the lives of five people – onto a track with only one person standing on it.
  • The Moral Psychology of AI and the Ethical Opt-Out Problem
    The Moral Psychology of AI and the Ethical Opt-Out Problem. Jean-François Bonnefon (Toulouse School of Economics (TSM-R), CNRS, Université Toulouse-1 Capitole, Toulouse, France), Azim Shariff (Department of Psychology & Social Behavior, University of California, Irvine, USA), and Iyad Rahwan (The Media Lab, and Institute for Data, Systems and Society, Massachusetts Institute of Technology, Cambridge, MA 02139, USA). Abstract: This chapter discusses the limits of normative ethics in new moral domains linked to the development of Artificial Intelligence. In these new domains, people have the possibility to opt out of using a machine if they do not approve of the ethics that the machine is programmed to follow. In other words, even if normative ethics could determine the best moral programs, these programs would not be adopted (and thus have no positive impact) if they clashed with users' preferences—a phenomenon that we label the "Ethical Opt-Out." The chapter then explores various ways in which the field of moral psychology can illuminate public perception of moral AI, and inform the regulations of such AI. The chapter's main focus is on autonomous vehicles, but it also explores the role of psychological science for the study of other moral algorithms. 1 Introduction: Most people are happy to use technology driven by Artificial Intelligence (AI), as long as they are not fully aware they are doing so. They enjoy music recommendations, email filters, and GPS advice without thinking too much about the machine learning algorithms that power these products. But as people let AI-driven technology take an ever greater place in their lives, they also express anxiety and mistrust about things labeled AI.
  • Ethics and Intuitions (Peter Singer)
    PETER SINGER. ETHICS AND INTUITIONS. (Received 25 January 2005; accepted 26 January 2005.) ABSTRACT: For millennia, philosophers have speculated about the origins of ethics. Recent research in evolutionary psychology and the neurosciences has shed light on that question. But this research also has normative significance. A standard way of arguing against a normative ethical theory is to show that in some circumstances the theory leads to judgments that are contrary to our common moral intuitions. If, however, these moral intuitions are the biological residue of our evolutionary history, it is not clear why we should regard them as having any normative force. Research in the neurosciences should therefore lead us to reconsider the role of intuitions in normative ethics. KEY WORDS: brain imaging, David Hume, ethics, evolutionary psychology, Henry Sidgwick, Immanuel Kant, intuitions, James Rachels, John Rawls, Jonathan Haidt, Joshua D. Greene, neuroscience, trolley problem, utilitarianism. 1. INTRODUCTION: In one of his many fine essays, Jim Rachels criticized philosophers who "shoot from the hip." As he put it: The telephone rings, and a reporter rattles off a few "facts" about something somebody is supposed to have done. Ethical issues are involved – something alarming is said to have taken place – and so the "ethicist" is asked for a comment to be included in the next day's story, which may be the first report the public will have seen about the events in question.¹ In these circumstances, Rachels noted, the reporters want a short pithy quote, preferably one that says that the events described are bad. The philosopher makes a snap judgment, and the result is ¹ James Rachels, "When Philosophers Shoot from the Hip," in Helga Kuhse and Peter Singer (eds.), Bioethics: An Anthology (Oxford: Blackwell Publishers, 1999), p.
  • Measuring Consequentialism: Trolley Cases and Sample Bias
    Measuring Consequentialism: Trolley Cases and Sample Bias. Amy Lara¹²³, Michael Smith²³, Scott Tanona²⁴, Bill Schenck-Hamlin², Ron Downey², and Bruce Glymour¹²³, Kansas State University (¹paper writing, ²survey design, ³data analysis, ⁴paper editing). Contact author: Amy Lara, Philosophy Department, 201 Dickens Hall, Kansas State University, Manhattan, KS 66506; (785) 532-0357; [email protected]. Acknowledgments: This project was funded by a National Science Foundation grant (0734922). The authors wish to thank David Rintoul, Larry Weaver, Maria Glymour, and Jon Mahoney for valuable assistance. Abstract: We use data from a recent survey on communication norms among scientists to argue that standard Trolley and Sophie's Choice scenarios induce subjects to drop from studies, generating loss of data and sample bias. We explain why these effects make instruments based on such scenarios unreliable measures of the general deontic versus consequentialist commitments of study subjects. I. Moral Epistemology and Empirical Data: Since the seminal work by Harman (1999) and Greene et al. (2001), empirical data has been of increasing importance in moral philosophy, and several important studies surveying the range and variation of moral intuitions have been done (Nichols, 2004; Nichols and Mallon, 2006; Hauser et al., 2007). Largely, however, the import of this work has been critical rather than positive. To the extent that the moral intuitions of professional philosophers are not widely shared outside the confines of academic philosophy, the method employed by most of ethics, reflective equilibrium, is suspect, providing at best a check on consistency.
  • "The Trolley Problem": An Exercise in Moral Reasoning (Brandeis University, PHIL 22B, Spring 2013)
    BRANDEIS UNIVERSITY • SPRING 2013. PHIL 22B: PHILOSOPHY OF LAW. Professor Andreas Teuber. "The Trolley Problem": An Exercise in Moral Reasoning. Table of Contents: I. The Trolley Problem (i. The Original Trolley Case; ii. The Emergency Room Case; iii. The Footbridge Case); II. Killing v. Letting Die; III. Principles v. Intuitions; IV. The Trolley Problem Revisited; V. The Principle of Double Effect and International Rules of War; VI. Conclusion. "The Trolley Problem". I. The Problem: Imagine that a trolley is hurtling down the tracks. The hill is steep. It has lost its brakes. The driver of the trolley cannot stop its forward motion. Imagine, too, now, that you are standing beside the tracks a bit further down the hill (the drawing could be better) and you see the trolley coming. You can also see that even further down from where you are standing are five workmen on the tracks, and you realize immediately that the trolley will kill all five of the workmen were it to continue on its path. There is a steep ravine covered with gravel on either side of the tracks where the men are working. If they try to scramble out of the way, they will only slide back onto the tracks, into the path of the trolley, and face certain death. But suppose there is a side spur off to the right, and there is a lever in front of you that you can pull, and that if you pull the lever you can switch the track and send the trolley off onto the side spur and divert it from its path towards the workmen on the main track below.
  • The Trolley and the Sorites
    The Trolley and the Sorites. John Martin Fischer. In two fascinating and provocative papers, Judith Jarvis Thomson discusses the "Trolley Problem."¹ In this paper I shall present my own version of the problem. I explain the problem and its apparent significance. Then I present a series of cases which calls into question the intuitive judgments which generate the original problem. Thus, I propose to dissolve the problem. The method of argumentation that issues in this dissolution might be considered troublesome. For this reason I explore certain objections to the methodology. Finally, I undertake to explain the significance of the dissolution of the problem. II. Let us follow Thomson in calling the first case "Bystander-at-the-Switch." A trolley is hurtling down the tracks. There are five "innocent" persons on the track ahead of the trolley, and they will all be killed if the trolley continues going straight ahead.² There is a spur of track leading off to the right. The brakes of the trolley have failed, and you are strolling by the track. You see that you could throw a switch that would cause the trolley to go onto the right spur. Unfortunately, there is one ¹ "Killing, Letting Die, and the Trolley Problem," The Monist (April 1976): 204-17; and "The Trolley Problem," The Yale Law Journal 94 (May 1985): 1395-1415. Both articles are reprinted in William Parent, ed., Rights, Restitution, and Risk (Cambridge: Harvard University Press, 1986), 78-116. Thomson attributes the original formulation and discussion of the problem to Philippa Foot, "The Problem of Abortion and the Doctrine of the Double Effect," Oxford Review 5 (1967): 5.
  • Law and Moral Dilemmas
    Columbia Law School, Scholarship Archive, Faculty Scholarship, Faculty Publications, 2017. Law and Moral Dilemmas. Bert I. Huang, Columbia Law School, [email protected]. Part of the Law and Philosophy Commons, and the Law and Society Commons. Recommended Citation: Bert I. Huang, Law and Moral Dilemmas, Harvard Law Review, Vol. 130, p. 659, 2016 (2017). Available at: https://scholarship.law.columbia.edu/faculty_scholarship/2021. BOOK REVIEW: LAW AND MORAL DILEMMAS. THE TROLLEY PROBLEM MYSTERIES. By F.M. Kamm. New York, N.Y.: Oxford University Press. 2015. Pp. xi, 270. $29.95. Reviewed by Bert I. Huang. INTRODUCTION: A runaway trolley rushes toward five people standing on the tracks, and it will surely kill them all. Fortunately, you can reach a switch that will turn the trolley onto a side track — but then you notice that one other person is standing there. Is it morally permissible for you to turn the trolley to that side track, where it will kill one person instead of five? Is it not only morally permissible, but even morally required? This classic thought experiment is a mainstay in the repertoire of law school hypotheticals, often raised alongside cases about cannibalism at sea, tossing people from overcrowded lifeboats, or destroying buildings to save a city from fire.¹ The liturgy of the trolley problem is one we all know.
  • Veil-Of-Ignorance Reasoning Favors the Greater Good
    Veil-of-ignorance reasoning favors the greater good. Karen Huang (Department of Psychology, Harvard University, Cambridge, MA 02138; Harvard Business School, Harvard University, Boston, MA 02163), Joshua D. Greene (Department of Psychology and Center for Brain Science, Harvard University, Cambridge, MA 02138), and Max Bazerman (Harvard Business School, Harvard University, Boston, MA 02163). Edited by Susan T. Fiske, Princeton University, Princeton, NJ, and approved October 22, 2019 (received for review June 13, 2019). Abstract: The "veil of ignorance" is a moral reasoning device designed to promote impartial decision making by denying decision makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here, we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across 7 experiments (n = 6,261), 4 preregistered, we find that veil-of-ignorance reasoning favors the greater good. … any, does VOI reasoning have on people's responses to such dilemmas? We predict that VOI reasoning will cause people to make more utilitarian judgments, by which we mean judgments that maximize collective welfare.* This result is by no means guaranteed, as there are reasons to think that VOI reasoning could have the opposite effect, or no effect at all. Rawls was one of utilitarianism's leading critics (1), suggesting that VOI reasoning might reduce utilitarian judgment. In addition, even if VOI reasoning were to support utilitarian choices, it is possible that people's ordinary responses