ROBOTS AND MORAL CONCEPTS

Are Asimov’s three laws of robotics sufficient to guide the actions of contemporary robots or do they need moral concepts to guide their actions?

Date: September 22nd, 2016
Subject: Master thesis
University: Tilburg University
Program: MA Philosophy
Track: Ethiek van Bedrijf en Organisatie
Supervisor: dr. Alfred Archer
Second reader: dr. Bart Engelen
Student & anr: Matthijs Smakman, 759122

ACKNOWLEDGEMENTS

For making this journey possible I would like to thank the Institute of ICT at the HU University of Applied Sciences Utrecht. For the many papers they reviewed and the discussions that followed, I am especially thankful to my colleague Alex Bongers and my dear friend Frank Wassenaar. I am also grateful to Willemijn for her love and patience, since it is not always easy living with a student of philosophy. For his guidance and enthusiasm during this process, and for his encouragement to present some of my work at the Designing Moral Technologies conference in Ascona and the Research Conference Robophilosophy in Aarhus, I am thankful to my thesis supervisor Alfred Archer. Finally, I would like to thank all my colleagues, friends and family who supported me during the writing of this thesis: without your backing, this work would not have been done.

INTRODUCTION

To answer the research question “Are Asimov’s three laws of robotics sufficient to guide the actions of contemporary robots or do they need moral concepts to guide their actions?”, this thesis proceeds in four chapters. In the first chapter, I will outline the field of contemporary robotics to familiarise the reader with the subject at hand. This will mainly be done by describing and analysing existing work in the fields of social robotics and machine ethics. In the second chapter, I will examine Asimov’s three laws of robotics and present the philosophical and practical issues concerning these laws. In the third chapter, I will use arguments for well-being as the ultimate source of moral reasoning to argue that there are no ultimate, non-derivative reasons to program robots with moral concepts such as moral obligation, morally wrong or morally right. In chapter four, I will argue that implementing well-being as a guiding principle might be problematic in the field of robotics. Secondly, I will examine arguments given by Ross (1961) that the complexity of moral judgement requires more than just a concept of well-being, and then argue that there is no need for these extra concepts. Finally, I will argue that certain roles bring certain obligations: role obligations. Robots therefore need different “moral” programming for the different roles they are going to perform, but even on such a theory there is no need for moral concepts. Roles bring reasons to act in specific ways, but robots do not need concepts such as moral obligation in order to act on those reasons.

CHAPTER 1 - ROBOTS

In this first chapter, I will firstly give a working definition of what a robot is and give some examples of contemporary robots. Secondly, I will analyse why people, especially in the last 50 years, have written extensively about their fears of robots, although the particular circumstances they feared were, and according to some still are, decades away. Thirdly, I will state that this thesis is about moral action, and not moral agency. The main purpose of this chapter is to show the relevance of this thesis and to familiarise the reader with the subject at hand.

WHAT IS A ROBOT?
A robot is an “engineered machine that senses, thinks and acts” (Lin, Abney, & Bekey, 2011). A machine senses using sensors: it obtains data about the external world, for example through cameras, GPS receivers or other means of acquiring data. This data needs to be processed if the robot is to react; this process is called thinking. It can be argued that this classification of ‘thinking’ is false, but the classification will be adequate for the question of this thesis because it will not affect the arguments. The software process that I call thinking uses rules to process the data from the sensors and to decide how to react. These rules are initially written by a human, but on the basis of this human-programmed code the robot can learn how to react to new, unknown situations. Machine learning software enables machines to learn from past experiences. The definition of machine learning is: “A computer program is said to learn from experience ‘E’ with respect to some class of task ‘T’ and performance measure ‘P’, if its performance at tasks in ‘T’, as measured by ‘P’, improves with experience ‘E’” (Mitchell, 1997). In other words, if a robot can improve how it performs a certain task based on past experience, then it has learned. Robots that are programmed to use machine learning improve their actions over time; therefore, situations can arise of which the engineer or programmer of that robot is unaware.

In addition to the abilities to sense and think, a robot must be able to act autonomously. This action follows from its ability to sense and to think, and it takes place in the real world, the world humans live in. Since robots can act autonomously, without being under the direct control of a human, and make decisions based on sense data, a number of questions arise. For example, how do we ensure that robots do not harm humans? Lin et al. (2011) describe an interesting list of ethical questions, such as: “whose ethics and law ought to be the standard in robotics”, “if we could program a code of ethics to regulate robotic behaviour, which ethical theory should we use”, “should robots merely be considered tools, such as guns and computers, and regulated accordingly” and “will robotic companionship (that could replace human or animal companionship) for other purposes, such as drinking buddies, pets, other forms of entertainment, or sex, be morally problematic?”.

If robots are not programmed with the capacity to make moral decisions, disastrous situations can arise, for example robots forcing people against their will. Because of this, Wallach & Allen (2009) argue that robots should be provided with the ability for ethical reasoning and ethical decision-making. Many have taken on the challenge of analysing and implementing moral decision making in robots, such as Anderson (2008), Coeckelbergh (2010), Crnkovic & Çürüklü (2012), Malle (2015), Murphy and Woods (2009), and Wallach (2010). One possible strategy for implementing morality, and for answering some of the ethical questions formulated by Lin et al. (2011), is to program robots in such a way that they are guided by (the software equivalents of) moral concepts such as moral obligation, morally wrong, morally right, fairness, virtues and kindness. Another strategy is to create strict, deontic-like laws that a robot is to uphold.
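To make the sense-think-act picture, Mitchell’s definition of learning and this second, rule-based strategy somewhat more concrete, the following sketch (my own illustration, not taken from any of the cited authors) shows in Python how a very simple controller might combine a learned choice of actions with a fixed list of hard-coded prohibitions. All names, actions and rules are hypothetical and chosen purely for the sake of illustration.

    # Hypothetical sketch: a minimal sense-think-act loop in which the "think"
    # step improves with experience (task T, experience E, performance P in
    # Mitchell's definition) and every chosen action is checked against a fixed,
    # hard-coded list of prohibitions. All names are illustrative only.
    import random

    FORBIDDEN_ACTIONS = {"push_human"}          # the strict, law-like constraints

    def sense():
        """Obtain data about the external world (here: a simulated reading)."""
        return {"obstacle_distance": random.uniform(0.0, 2.0)}

    class LearningController:
        """The 'thinking' component.

        Task T:        choosing an action given a sensor reading.
        Experience E:  past (action, outcome) pairs.
        Performance P: the fraction of actions that succeed.
        """
        def __init__(self, actions):
            self.actions = actions
            self.successes = {a: 1 for a in actions}   # optimistic starting counts
            self.attempts = {a: 2 for a in actions}

        def choose(self, reading):
            # Pick the action with the best observed success rate so far.
            # (A richer controller would also condition on the sensor reading.)
            return max(self.actions, key=lambda a: self.successes[a] / self.attempts[a])

        def learn(self, action, succeeded):
            # Experience E accumulates, so performance P can improve over time.
            self.attempts[action] += 1
            if succeeded:
                self.successes[action] += 1

    def act(action):
        """Perform the action only if no hard-coded rule forbids it."""
        if action in FORBIDDEN_ACTIONS:
            return False                        # rule violated: refuse to act
        return random.random() < 0.8            # simulate an imperfect actuator

    if __name__ == "__main__":
        controller = LearningController(["move_forward", "turn_left", "push_human"])
        for _ in range(200):
            reading = sense()                    # sense
            action = controller.choose(reading)  # think
            outcome = act(action)                # act, subject to the fixed rules
            controller.learn(action, outcome)
        for a in controller.actions:
            print(a, controller.successes[a] / controller.attempts[a])

Even in this toy setting the fixed prohibitions operate independently of whatever the controller has learned: they constrain the action no matter how the learned policy develops. It is exactly this kind of strict, law-like constraint that Asimov’s three laws are meant to provide.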
In the 1940s Isaac Asimov (1942), in his short story “Runaround”, proposed three laws of robotics that were to guide the actions of robots in such a way that their actions would be adequate and safe. Both strategies will be discussed in this thesis as ways to implement morality into machines.

EXAMPLES OF CONTEMPORARY ROBOTS

As robots will become part of our daily lives in the coming decades, it is necessary to ensure that their behaviour is morally adequate (Crnkovic & Çürüklü, 2012). A recent case shows how the behaviour of a robot with artificial intelligence (AI) can go wrong. In 2016 Microsoft launched an AI chat robot on Twitter called Tay. Tay learned from the conversations it had with real humans. After only 24 hours it was taken offline because of tweets like “Bush did 9/11” and “Hitler would have done a better job than the monkey we have got now” (Horton, 2016). Some suggest that hackers deliberately fed Tay with information so that it would learn immoral statements, but this has not been confirmed. What this example shows is that a robot, without the right moral guidelines, can start to act immorally. To this day, Microsoft has not re-launched Tay.

While in the 1950s robots were mainly used as industrial tools, today robots have become more social. Robots are already being developed in areas such as entertainment, research and education, personal care and companions, and military and security (Lin et al., 2011). In this section, I will introduce contemporary robots that are being applied in the fields of education and healthcare.

In the field of education, experiments are being done in which robots partially or completely replace teachers. One example is an experiment at a secondary school in Uithoorn, the Netherlands. Although the scientific results of this experiment have not yet been published, the researchers, Elly Konijn and Johan Hoorn, gave an interesting interview to Martin Kuiper from the NRC (Kuiper, 2016). In the experiment, a small robot called Zora, made by the French company Aldebaran and programmed by the Belgian company QBMT, helps children practise mathematics. According to Konijn and Hoorn, Zora can help with routine tasks previously done by the teacher, for example explaining a mathematical formula over and over again (Kuiper, 2016). Although more research needs to be done in the area of human-robot interaction, studies suggest that college students are more likely to follow behavioural suggestions offered by an autonomous social robot than those offered by a robot directly controlled by a human (Edwards, Edwards, Spence, & Harris, 2016). Experiments using artificial and actual lecturers as insets in video lectures show that “participants who saw the inset video of the actual lecturer replaced by an animated human lecturer recalled less information than those who saw the recording of the human lecturer” (Li, René, Jeremy, & Wendy, 2015).