Responsibility and Robot Ethics: a Critical Overview
Recommended publications
Artificial Intelligence and the Ethics of Self-Learning Robots
Vallor, S., & Bekey, G. A. (2017). Artificial Intelligence and the Ethics of Self-Learning Robots. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot Ethics 2.0 (pp. 338–353). Oxford University Press.

The convergence of robotics technology with the science of artificial intelligence (or AI) is rapidly enabling the development of robots that emulate a wide range of intelligent human behaviors. Recent advances in machine learning techniques have produced significant gains in the ability of artificial agents to perform or even excel in activities formerly thought to be the exclusive province of human intelligence, including abstract problem-solving, perceptual recognition, social interaction, and natural language use.
Christian Social Ethics
How should the protection of privacy, threatened by new technologies like radio frequency identification (RFID), be seen from a Judeo-Christian perspective? A dissertation by Erwin Walter Schmidt, submitted in accordance with the requirements for the degree of Master of Theology in the subject Theological Ethics at the University of South Africa. Supervisor: Prof Dr Puleng LenkaBula; co-supervisor: Dr Dr Volker Kessler. November 2011.

Summary: Radio Frequency Identification (RFID) is a new technology which allows people to identify objects automatically, but there is a suspicion that, if people are tracked, their privacy may be infringed. This raises questions about how far this technology is acceptable and how privacy should be protected. It has also initiated a discussion involving a wide range of technical, philosophical, political, social, cultural, and economic aspects. There is also a need to consider the ethical and theological perspectives. This dissertation takes all its relevant directions from a Judeo-Christian theological perspective. On one side the use of technology is considered, and on the other side the value of privacy, its infringements, and its protection are investigated. According to Jewish and Christian understanding, human dignity has to be respected, including the right to privacy. As a consequence, RFID may only be used for applications that do not infringe this right. This conclusion, however, is not limited to RFID; it will be relevant for other, future surveillance technologies as well.

Key terms: radio frequency identification, privacy, human rights, right to privacy, technology, information and communication technology, privacy enhancing technology, Internet of Things, data protection, privacy impact assessment.
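To make the tracking concern concrete, the following minimal sketch (not from the dissertation; the event format, tag IDs, and locations are invented for illustration) shows how individually harmless RFID reads can be aggregated into a movement profile once a tag is linked to a person:

```python
# Hypothetical illustration (not from the dissertation): how passive RFID
# reads can be linked into a movement profile. Names and event format are
# assumptions made for this sketch.
from collections import defaultdict
from datetime import datetime

# Each event: an RFID reader reports (tag_id, reader_location, timestamp).
events = [
    ("tag-4711", "store entrance", datetime(2011, 11, 7, 9, 2)),
    ("tag-4711", "pharmacy aisle", datetime(2011, 11, 7, 9, 14)),
    ("tag-4711", "checkout", datetime(2011, 11, 7, 9, 30)),
]

def movement_profile(events):
    """Group reads by tag: once a tag is linked to a person, this
    time-ordered trace becomes a record of that person's movements."""
    profile = defaultdict(list)
    for tag_id, location, ts in events:
        profile[tag_id].append((ts, location))
    for trace in profile.values():
        trace.sort()  # chronological order reveals the path taken
    return dict(profile)

print(movement_profile(events))
```

No single read infringes privacy; the ethical problem the summary points to arises from the aggregation step, which is trivial to implement once readers are networked.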
Pragmatism, Ethics, and Technology
Hans Radder, Free University of Amsterdam. Techné 7:3, Spring 2004.

Pragmatist Ethics for a Technological Culture presents a variety of essays on a significant and timely issue. The plan of the book is thoughtful. It comprises nine major chapters, each followed by a brief commentary. The volume is divided into four parts: technology and ethics, the status of pragmatism, pragmatism and practices, and discourse ethics and deliberative democracy. In addition, the introductory and concluding chapters by the editors help to connect the various contributions. Moreover, these chapters sketch an interesting programmatic approach for dealing with the ethical problems of our technological culture. The only complaint one might have about the book's composition is the lack of a subject and name index.

In this essay, I will not present the usual summary review but instead offer some reflections on the three main concepts of the book (pragmatism, ethics, and technology) and on the way these concepts have been explained, employed, and related in the various contributions. My overall conclusion is that, although much can be learned from the book, it also falls short in some respects. The most important problem is that, appearances notwithstanding, the full significance of technology for our ethical problems is not being appropriately acknowledged.

Pragmatism. Let me start with a preliminary issue. As do most of the authors, I think that it is appropriate to speak of a pragmatist, instead of a merely pragmatic, approach to ethics. As I see it, a pragmatist approach requires the commitment to engage in discursive explanation and argumentation, while a pragmatic approach suggests that these more theoretical activities may be omitted.
Man As 'Aggregate of Data'
Sjoukje van der Meulen and Max Bruinsma. AI & SOCIETY (Open Forum), https://doi.org/10.1007/s00146-018-0852-6.

Abstract: Since the emergence of the innovative field of artificial intelligence (AI) in the 1960s, the late Hubert Dreyfus insisted on the ontological distinction between man and machine, human and artificial intelligence. In the different editions of his classic and influential book What computers can't do (1972), he posits that an algorithmic machine can never fully simulate the complex functioning of the human mind—not now, nor in the future. Dreyfus' categorical distinctions between man and machine are still relevant today, but their relation has become more complex in our increasingly data-driven society. We, humans, are continuously immersed within a technological universe, while at the same time ubiquitous computing, in the words of computer scientist Mark Weiser, "forces computers to live out here in the world with people" (De Souza e Silva in Interfaces of hybrid spaces. In: Kavoori AP, Arceneaux N (eds) The cell phone reader. Peter Lang Publishing, New York, 2006, p 20). Dreyfus' ideas are therefore challenged by thinkers such as Weiser, Kevin Kelly, Bruno Latour, Philip Agre, and Peter-Paul Verbeek, who all argue that humans are much more intrinsically linked to machines than the original dichotomy suggests—they have evolved in concert. Through a discussion of the classical concepts of individuum and 'authenticity' within Western civilization, this paper argues that within the ever-expanding data-sphere of the twenty-first century, a new concept of man as 'aggregate of data' has emerged, which further erodes and undermines the categorical distinction between man and machine.
Communication and Artificial Intelligence: Opportunities and Challenges for the 21st Century
David J. Gunkel, Northern Illinois University. communication +1, Volume 1, Issue 1 (Futures of Communication), Article 1, August 2012.

Abstract: This essay advocates for a significant reorientation and reconceptualization of communication studies in order to accommodate the opportunities and challenges introduced by increasingly intelligent machines, autonomous decision-making systems, and smart devices. Historically, the discipline of communication has accommodated new technology by transforming these innovations into a medium of human interaction and message exchange. With the computer, this transaction is particularly evident with the development of computer-mediated communication (CMC) in the latter half of the 20th century. In CMC, the computer is understood and investigated as a more-or-less neutral channel of message transfer and instrument of human interaction. This formalization, although not necessarily incorrect, neglects the fact that the computer, unlike previous technological advancements, also occupies the position of participant in communicative exchanges. Evidence of this is already available in the science of AI and has been explicitly described by some of the earliest writings on communication and the computer. The essay therefore 1) demonstrates that the CMC paradigm, although undeniably influential and successful, is insufficient and no longer tenable, and 2) argues that communication studies needs to rework its basic framework in order to address and respond to the unique technological challenges and opportunities of the 21st century.

Keywords: Communication, Artificial Intelligence, Alan Turing, Media Studies, Computer Mediated Communication, Computers.

Introduction (excerpt): In a now well-known and often reproduced New Yorker cartoon by Peter Steiner, two dogs sit in front of an Internet-connected computer.
Ordinary Technoethics
Michel Puech, Philosophy, Paris-Sorbonne University, Paris, France. Web site: http://michel.puech.free.fr

Abstract: From recent philosophy of technology emerges the need for an ethical assessment of the ordinary use of technological devices, in particular telephones, computers, and all kinds of digital artifacts. The usual method of academic ethics, which is a top-down deduction starting with metaethics and ending in applied ethics, appears to be largely unproductive for this task. It provides "ideal" advice, that is to say formal and often sterile. As in the opposition between "ordinary language" philosophy and "ideal language" philosophy, the ordinary requires attention and an ethical investigation of the complex and pervasive use of everyday technological devices. Some examples indicate how a bottom-up reinvention of the ethics of technology can help in numerous techno-philosophical predicaments, including ethical sustainability.

This paper resists "Ideal Technoethics", which is implicit in mainstream academic applied ethics approaches and is currently favored by the bureaucratic implementation of ethics in public and private affairs. Instead, some trends in philosophy of technology emphasize the importance of ordinary technologically-laden behaviors. If we take this approach one step further, it leads to ordinary technoethics. In my take on ordinary technology, values are construed differently, starting from the importance of the ordinary use of technology (humble devices and focal familiar practices). The primacy of use in the history of the Internet provides a paradigm for the ordinary empowerment of users. What are the ethical consequences of this empowerment, and how is the average human being today equipped to address them in the innumerable micro-actions of ordinary life?

Technoethics. Technoethics as a research and practice field is situated in between philosophy of technology and applied ethics.
Blame, What Is It Good For?
Gordon Briggs

Abstract: Blame is a vital social and cognitive mechanism that humans utilize in their interactions with other agents. In this paper, we discuss how blame-reasoning mechanisms are needed to enable future social robots to: (1) appropriately adapt behavior in the context of repeated and/or long-term interactions and relationships with other social agents; (2) avoid behaviors that are perceived to be rude due to inappropriate and unintentional connotations of blame; and (3) avoid behaviors that could damage long-term, working relationships with other social agents. We also discuss how current computational models of blame and other relevant competencies (e.g., natural language generation) are currently insufficient to address these concerns. Future work is necessary to increase the social reasoning capabilities of artificially intelligent agents to achieve these goals.

I. INTRODUCTION (excerpt): ... [6] and reside in a much more ambiguous moral territory. Additionally, it is not enough to be able to correctly reason about moral scenarios to ensure ethical or otherwise desirable outcomes from human-robot interactions; the robot must have other social competencies that support this reasoning mechanism. Deceptive interaction partners or incorrect perceptions/predictions about morally-charged scenarios may lead even a perfect ethical reasoner to make choices with unethical outcomes [7]. What these challenges stress is the need to look beyond the ability to simply generate the proper answer to a moral dilemma in theory, and to consider the social mechanisms needed in practice. One such social mechanism that others have posited as being necessary to the construction of an artificial moral agent is blame [8].
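The abstract's three desiderata suggest the rough shape of a blame-aware action filter. The sketch below is a hypothetical illustration, not Briggs's model; the Action fields, predicates, and threshold are assumptions invented for the example:

```python
# Hypothetical sketch of a blame-aware action filter for a social robot.
# The Action fields and the threshold are assumptions, not Briggs's model.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    implies_blame: bool        # could the utterance/act connote blame?
    blame_is_justified: bool   # does the robot have grounds for blame?
    relationship_cost: float   # estimated damage to long-term rapport (0..1)

def permissible(action: Action, max_relationship_cost: float = 0.5) -> bool:
    """Filter out actions that blame without justification (perceived
    rudeness) or that would damage a long-term working relationship."""
    if action.implies_blame and not action.blame_is_justified:
        return False  # unintended connotation of blame reads as rude
    if action.relationship_cost > max_relationship_cost:
        return False  # protect the long-term relationship
    return True

# Example: an unjustified accusation is filtered out.
accuse = Action("You broke the vase.", implies_blame=True,
                blame_is_justified=False, relationship_cost=0.7)
print(permissible(accuse))  # False
```

Even this toy version makes the paper's point visible: the hard part is not the filter but reliably estimating the predicates (justification, perceived connotation, relationship cost), which is exactly where current computational models fall short.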
An Ethical Framework for Smart Robots
Mika Westerlund

"Never underestimate a droid." (Leia Organa, Star Wars: The Rise of Skywalker)

This article focuses on "roboethics" in the age of growing adoption of smart robots, which can now be seen as a new robotic "species". As autonomous AI systems, they can collaborate with humans and are capable of learning from their operating environment, experiences, and human behaviour feedback in human-machine interaction. This enables smart robots to improve their performance and capabilities. This conceptual article reviews key perspectives on roboethics and establishes a framework to illustrate its main ideas and features. Building on previous literature, roboethics has four major types of implications for smart robots: 1) smart robots as amoral and passive tools, 2) smart robots as recipients of ethical behaviour in society, 3) smart robots as moral and active agents, and 4) smart robots as ethical impact-makers in society. The study contributes to current literature by suggesting that there are two underlying ethical and moral dimensions behind these perspectives, namely the "ethical agency of smart robots" and the "object of moral judgment", as well as what this could look like as smart robots become more widespread in society. The article concludes by suggesting how scientists and smart robot designers can benefit from the framework, discussing the limitations of the present study, and proposing avenues for future research.

Introduction (excerpt): Robots are becoming increasingly prevalent in our daily, social, and professional lives, performing various work and household tasks, as well as operating driverless vehicles and public transportation systems (Leenes et al., 2017). ... capabilities (Lichocki et al., 2011; Petersen, 2007). Hence, Lin et al. (2011) define a "robot" as an engineered machine that senses, thinks, and acts, thus being able to process information from sensors and other sources, such as an internal set of rules, either programmed or learned, that enables the machine to ...
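Lin et al.'s sense-think-act definition corresponds to the classic robot control loop. The following is a generic textbook-style sketch of that architecture, not code from the article; the sensor, rule, and actuator names are invented:

```python
# Generic sense-think-act control loop, illustrating Lin et al.'s (2011)
# definition of a robot. All names here are illustrative assumptions.
import random

def sense() -> dict:
    """Gather information from sensors (here: a fake obstacle detector)."""
    return {"obstacle_distance_m": random.uniform(0.0, 5.0)}

def think(percept: dict) -> str:
    """Apply an internal set of rules (programmed or learned) to decide."""
    if percept["obstacle_distance_m"] < 1.0:
        return "stop"
    return "advance"

def act(decision: str) -> None:
    """Drive actuators; here we only print the chosen command."""
    print(f"actuator command: {decision}")

for _ in range(3):  # a few iterations of the control loop
    act(think(sense()))
```

On this view, a "smart" robot is one whose think step is learned and updated from experience rather than fixed, which is what gives Westerlund's third and fourth perspectives (robots as moral agents and as ethical impact-makers) their bite.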
New Perspectives on Technology, Values, and Ethics: Theoretical and Practical
Wenceslao J. Gonzalez (Ed.). Boston Studies in the Philosophy and History of Science, Volume 315.

Series editors: Alisa Bokulich (Department of Philosophy, Boston University, Boston, MA, USA), Robert S. Cohen (Boston University, Watertown, MA, USA), Jürgen Renn (Max Planck Institute for the History of Science, Berlin, Germany), Kostas Gavroglu (University of Athens, Athens, Greece).

The series Boston Studies in the Philosophy and History of Science was conceived in the broadest framework of interdisciplinary and international concerns. Natural scientists, mathematicians, social scientists, and philosophers have contributed to the series, as have historians and sociologists of science, linguists, psychologists, physicians, and literary critics. The series has been able to include works by authors from many other countries around the world. The editors believe that the history and philosophy of science should itself be scientific, self-consciously critical, humane as well as rational, sceptical and undogmatic while also receptive to discussion of first principles. One of the aims of Boston Studies, therefore, is to develop collaboration among scientists, historians, and philosophers. Boston Studies in the Philosophy and History of Science looks into and reflects on interactions between epistemological and historical dimensions in an effort to understand the scientific enterprise from every viewpoint.
Nudging for Good: Robots and the Ethical Appropriateness of Nurturing Empathy and Charitable Behavior
Jason Borenstein and Ron Arkin

Predictions are being commonly voiced about how robots are going to become an increasingly prominent feature of our day-to-day lives. Beyond the military and industrial sectors, they are in the process of being created to function as butlers, nannies, housekeepers, and even as companions (Wallach and Allen 2009). The design of these robotic technologies and their use in these roles raises numerous ethical issues. Indeed, entire conferences and books are now devoted to the subject (Lin et al. 2014). One particular under-examined aspect of human-robot interaction that requires systematic analysis is whether to allow robots to influence a user's behavior for that person's own good. However, an even more controversial practice is on the horizon and warrants attention, which is the ethical acceptability of allowing a robot to "nudge" a user's behavior for the good of society. For the purposes of this paper, we will examine the feasibility of creating robots that would seek to nurture a user's empathy towards other human beings. We specifically draw attention to whether it would be ethically appropriate for roboticists to pursue this type of design pathway. In our prior work, we examined the ethical aspects of encoding Rawls' Theory of Social Justice into robots in order to encourage users to act more socially just towards other humans (Borenstein and Arkin 2016). Here, we primarily limit the focus to the performance of charitable acts, which could shed light on a range of socially just actions that a robot could potentially elicit from a user and what the associated ethical concerns may be.
The Technological Mediation of Morality
Olya Kudina, The Technological Mediation of Morality: Value Dynamism, and the Complex Interaction between Ethics and Technology. University of Twente, 2019.

In this dissertation, Olya Kudina investigates the complex interactions between ethics and technology. Center stage to this is the phenomenon of "value dynamism", which explores how technologies co-shape the meaning of the values that guide us through our lives and with which we evaluate these same technologies. The dissertation provides an encompassing view of value dynamism and the mediating role of technologies in it through empirical and philosophical investigations, as well as with attention to the larger fields of ethics, design, and Technology Assessment.

Dissertation to obtain the degree of doctor at the University of Twente, on the authority of the rector magnificus, prof. dr. T.T.M. Palstra, on account of the decision of the Doctorate Board, to be publicly defended on Friday the 17th of May 2019 at 14:45 hours by Olga Kudina, born on the 11th of January 1990 in Vinnytsia, Ukraine.

This dissertation has been approved by: supervisor prof. dr. ir. P.P.C.C. Verbeek and co-supervisor dr. M. Nagenborg. Graduation committee: chairman/secretary prof. dr. Th.A.J. Toonen (University of Twente); supervisor prof. dr. ir. P.P.C.C. Verbeek (University of Twente); co-supervisor dr. M. Nagenborg (University of Twente); committee members: Dr. ...
Ethics and the Systemic Character of Modern Technology
Sytse Strijbos, Free University, Amsterdam. PHIL & TECH 3:4, Summer 1998.

A distinguishing feature of today's world is that technology has built the house in which humanity lives. More and more, our lives are lived within the confines of its walls. Yet this implies that technology entails far more than the material artifacts surrounding us. Technology is no longer simply a matter of objects in the hands of individuals; it has become a very complex system in which our everyday lives are embedded. The systemic character of modern technology confronts us with relatively new questions and dimensions of human responsibility. Hence this paper points out the need for exploring systems ethics as a new field of ethics essential for managing our technological world and for transforming it into a sane and healthy habitat for human life. Special attention is devoted to the introduction of information technology, which will continue unabated into coming decades and which is already changing our whole world of technology.

Key words: technological system, systems ethics, social constructivism, technological determinism, information technology.

1. INTRODUCTION

A burning question for our time concerns responsibility for technology. Through technological intervention, people have succeeded in virtually eliminating a variety of natural threats. At least that is true for the prosperous part of the world, although here too the forces of nature can unexpectedly smash through the safety barriers of a technological society, as in the case of the Kobe earthquake (1995). The greatest threat to technological societies appears to arise, however, from within.