TECHNOLOGICAL MORAL PROXIES AND THE ETHICAL LIMITS OF AUTOMATING DECISION-MAKING IN ROBOTICS AND ARTIFICIAL INTELLIGENCE

by

Jason L Millar

A thesis submitted to the Department of Philosophy in conformity with the requirements for the Degree of Doctor of Philosophy

Queen’s University
Kingston, Ontario, Canada
October, 2015

Copyright © Jason L Millar, 2015

Abstract

Humans have been automating increasingly sophisticated behaviours for some time now. We have machines that lift and move things, machines that perform complex calculations, and even machines that build other machines. More recently, technological advances have allowed us to automate increasingly sophisticated decision-making processes. We have machines that decide when and how to use lethal force, when and how to perform life-saving medical interventions, and when and how to maneuver cars in chaotic traffic. Clearly, some automated decision-making processes have serious ethical implications—we are now automating ethical decision-making.

In this dissertation, I identify and explore a novel set of ethical implications that stem from automating ethical decision-making in emerging and existing technologies. I begin the argument by setting up an analogy between moral proxies in healthcare, who are tasked with making ethical decisions on behalf of those who cannot, and technological moral proxies, those machines (artefacts) to which we delegate ethical decision-making. The technological moral proxy is a useful metaphor: it allows us to import the many established norms surrounding moral proxies in bioethics to inform the ethics, design, and governance of certain machines. As is the case with moral proxies in healthcare, we need norms to help guide the design and governance of machines that make ethical decisions on behalf of their users. Importantly, I argue that we need a methodology and framework to ensure we aren’t subjecting technology users to unacceptable forms of paternalism.
Having set up the analogy, I situate the discussion within various theories in science and technology studies (STS). STS provides a rich philosophical vocabulary to help ground the particular ethical implications, including implications for designers and engineers, raised by machines that automate ethical decision-making. STS thus helps to frame the issues I raise as design and governance issues. I then provide a rudimentary taxonomy of delegation to help distinguish between automated decision-making that is ethically permissible and that which is impermissible. Finally, I combine the various pieces of the argument into a practical “tool” that could be used in design or governance contexts to help guide an ethical analysis of robots that automate ethical decision-making.

Acknowledgements

My wife, Michelle, has earned my most heartfelt thanks. Over the course of my doctorate she has shared my many highs and lows, listened patiently to my sometimes clear (but often incoherent) philosophical ramblings, and nudged me forward when I most needed it. My girls, Rachel and Sarah (ordered alphabetically!), gave me the balance I needed to keep coming back to the computer. My mom offered her constant encouragement and support, as she always has and always will. She and my dad (who is no longer with us) planted a curiosity in me a long time ago that somehow grew into a desire to do this work. Sergio Sismondo and Ian Kerr were the constant, inspiring, and supportive mentors I needed to cut my academic chops. I could not have completed this dissertation without them all.

Chapter 1, in which I introduce the concept of the technological moral proxy, is the product of several workshops and papers that have been published in a few different versions.
Thanks to Keith Miller and all the other reviewers and judges at the 2014 IEEE Ethics in Engineering, Science and Technology conference, who provided invaluable reviews of the original version and felt it deserved first place in the student paper competition. Thanks also to the reviewers for IEEE Technology & Society for their insights on another published version of the paper. Participants at We Robot 2014 provided thoughtful and constructive feedback, in particular Peter Asaro, who officially commented on a conference version of the paper; Ryan Calo, Woodrow Hartzog, Ian Kerr, Michael Froomkin, and Carolyn Nguyen also inspired various edits and clarifications. Patrick Lin deserves credit for taking constructive issue with my proxy analysis in the pages of WIRED—Patrick throws hard punches but leaves no bruises. Thanks also to Roger Crisp for his thoughtful comments on the Tunnel Problem, and to the various participants at two separate Stanford University-hosted workshops on Ethics and Autonomous Vehicles, all of whom were generous with their criticisms.

Chapter 2 is the result of a paradigm shift in my thinking. Sergio Sismondo deserves every bit of credit for that event, which he nurtured patiently despite my initial incredulity toward most of the readings he assigned me. Now thoroughly convinced of its value, I owe him a debt of gratitude for offering to introduce me to the Science and Technology Studies literature. It has forever changed the way I see the world and the role my science and engineering background plays within it. It might not be obvious to the reader, but this chapter, which I initially intended as the first, paved the way for all of the arguments in this dissertation.

Chapter 3 is based on a paper originally co-authored with Ian Kerr and published in an edited volume (Millar & Kerr 2015). Ian Kerr is a great friend and collaborator. I owe him an incredible debt of gratitude for his years of selfless encouragement.
Thanks to Michael Froomkin and Patrick Hubbard for their careful reading and many thoughtful insights that helped shape this chapter. Participants at We Robot 2012 also helped shape the final version of the paper, and with it the ultimate form that this chapter took.

Chapter 4 is based on a working paper published in the We Robot 2015 Proceedings. David Post, who presented the paper at the conference, provided particularly thoughtful comments on its original form, as did the many other participants. Aimee van Wynsberghe, who is doing fantastic work at the intersection of robotics and design ethics, contributed substantially to the many points raised in the chapter. The various designers, roboticists, lawyers, and ethicists at the HRI 2015 Workshop on Ethics and Policy, particularly Laurel Riek and Don Howard, helped shape the structure and content of the tool I propose at the end of this chapter.

It is exciting to be writing about emerging topics in the ethics and governance of robotics among so many talented, thoughtful, and generous people. Though the community is young, it is strong and growing. I couldn’t have produced this work without it.

Table of Contents

Abstract .......... ii
Acknowledgements .......... iv
List of Abbreviations .......... ix
Chapter 1 Introduction: Technology as Moral Proxy - Paternalism and Autonomy by Design .......... 1
1.1 Introduction .......... 1
1.2 Anthropomorphisms as Useful Metaphors .......... 11
1.3 Moral Proxies in Healthcare .......... 21
1.4 Technology as Moral Proxy .......... 25
1.5 Paternalism and Autonomy by Design .......... 30
1.6 The Moral Proxy Model as an Evaluative Design Tool .......... 38
1.7 Proxies Beyond Healthcare Technologies .......... 44
1.7.1 Self-Driving Cars .......... 44
1.8 The Demands of Autonomy by Design and A Proportional Approach to Proxy Analysis .......... 46
1.9 Conclusions .......... 49
Chapter 2 Locating the Moral Proxy Model: From Descriptive to Normative Considerations in Design .......... 54
2.1 Introduction .......... 54
2.2 The Social Construction of Technology (SCOT) .......... 60
2.3 Delegation and Scripts: the Moral Character of Things .......... 70
2.4 The Normative Turn: Design Ethics .......... 77
2.5 Mediation Analysis .......... 85
2.6 Two Normative Aspects of Mediation in Design ..........