AISB/IACAP World Congress 2012


AISB/IACAP World Congress 2012, Birmingham, UK, 2–6 July 2012

THE MACHINE QUESTION: AI, ETHICS AND MORAL RESPONSIBILITY

David J. Gunkel, Joanna J. Bryson, and Steve Torrance (Editors)

Published by The Society for the Study of Artificial Intelligence and Simulation of Behaviour, http://www.aisb.org.uk
ISBN 978-1-908187-21-5

Foreword from the Congress Chairs

For the Turing year 2012, AISB (The Society for the Study of Artificial Intelligence and Simulation of Behaviour) and IACAP (The International Association for Computing and Philosophy) merged their annual symposia/conferences to form the AISB/IACAP World Congress. The Congress took place 2–6 July 2012 at the University of Birmingham, UK. The Congress was inspired by a desire to honour Alan Turing, and by the broad and deep significance of Turing's work to AI, the philosophical ramifications of computing, and philosophy and computing more generally. The Congress was one of the events forming the Alan Turing Year.

The Congress consisted mainly of a number of collocated Symposia on specific research areas, together with six invited Plenary Talks. All papers other than the Plenaries were given within Symposia. This format is perfect for encouraging new dialogue and collaboration both within and between research areas.

This volume forms the proceedings of one of the component symposia. We are most grateful to the organizers of the Symposium for their hard work in creating it, attracting papers, doing the necessary reviewing, defining an exciting programme for the symposium, and compiling this volume. We also thank them for their flexibility and patience concerning the complex matter of fitting all the symposia and other events into the Congress week.
John Barnden (Computer Science, University of Birmingham), Programme Co-Chair and AISB Vice-Chair
Anthony Beavers (University of Evansville, Indiana, USA), Programme Co-Chair and IACAP President
Manfred Kerber (Computer Science, University of Birmingham), Local Arrangements Chair

Foreword for The Machine Question

One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. Although initially limited to "other men," the practice of ethics has developed in such a way that it continually challenges its own restrictions and comes to encompass previously excluded individuals and groups—foreigners, women, animals, and even the environment. Currently, we stand on the verge of another fundamental challenge to moral thinking. This challenge comes from the autonomous, intelligent machines of our own making, and it puts in question many deep-seated assumptions about who or what constitutes a moral subject. The way we address and respond to this challenge will have a profound effect on how we understand ourselves, our place in the world, and our responsibilities to the other entities encountered here.

We organised the symposium whose proceedings you find here because we believe it is urgent that this new development in moral thinking be advanced in the light and perspective of ethics/moral philosophy, a discipline that reflects thousands of years of effort by our species' civilisations. Fundamental philosophical questions include:

• What kind of moral claim might an intelligent or autonomous machine have?
• Is it possible for a machine to be a legitimate moral agent and/or moral patient?
• What are the philosophical grounds supporting such a claim?
• And what would it mean to articulate and practice an ethics of this claim?

The Machine Question: AI, Ethics and Moral Responsibility seeks to address, evaluate, and respond to these and related questions.
The Machine Question was one of three symposia in the Ethics, Morality, AI and Mind track at the AISB/IACAP 2012 World Congress in honour of Alan Turing. The ethics track also included the symposia Moral Cognition and Theory of Mind and Framework for Responsible Research and Innovation in Artificial Intelligence. We would like to thank all of the contributors to this symposium, the other symposia, the other symposia's organisers and the Congress leaders, particularly John Barnden for his seemingly tireless response to our many queries. We would also like to thank the keynote speakers for the Ethics track, Susan and Michael Anderson, whose talk The Relationship Between Intelligent, Autonomously Functioning Machines and Ethics appeared during our symposium, for agreeing to speak at the congress.

The story of this symposium started a little like the old Lorne Greene song Ringo, except instead of anyone saving anyone's life, Joanna Bryson loaned David Gunkel a laundry token in July of 1986, about a month after both had graduated with first degrees in the liberal arts and independently moved in to The Granville Grandeur, a cheap apartment building on Chicago's north side. Two decades later they discovered themselves on opposite sides, not of the law, but of the AI-as-Moral-Subject debate, when Gunkel contacted Bryson about a chapter in his book Thinking Otherwise. The climactic shoot-out took place not between two isolated people, but with the wise advice and able assistance of our third co-editor Steve Torrance, who volunteered to help us through the AISB process (and its mandated three-editor policy), and between a cadre of scholars who took time to submit to, revise for and participate in this symposium. We thank every one of them for their contributions and participation, and you for reading these proceedings.

David J. Gunkel (Department of Communication, Northern Illinois University)
Joanna J. Bryson (Department of Computer Science, University of Bath)
Steve Torrance (Department of Computer Science, University of Sussex)

Contents

1 Joel Parthemore and Blay Whitby — Moral Agency, Moral Responsibility, and Artefacts 8
2 John Basl — Machines as Moral Patients We Shouldn't Care About (Yet) 17
3 Benjamin Matheson — Manipulation, Moral Responsibility and Machines 25
4 Alejandro Rosas — The Holy Will of Ethical Machines 29
5 Keith Miller, Marty Wolf and Frances Grodzinsky — Behind the Mask: Machine Morality 33
6 Erica Neely — Machines and the Moral Community 38
7 Mark Coeckelbergh — Who Cares about Robots? 43
8 David J. Gunkel — A Vindication of the Rights of Machines 46
9 Steve Torrance — The Centrality of Machine Consciousness to Machine Ethics 54
10 Rodger Kibble — Can an Unmanned Drone Be a Moral Agent? 62
11 Marc Champagne and Ryan Tonkens — Bridging the Responsibility Gap in Automated Warfare 68
12 Joanna Bryson — Patiency Is Not a Virtue: Suggestions for Co-Constructing an Ethical Framework Including Intelligent Artefacts 74
13 Johnny Søraker — Is There Continuity Between Man and Machine? 79
14 David Davenport — Poster: Moral Mechanisms 84
15 Marie-des-Neiges Ruffo — Poster: The Robot, a Stranger to Ethics 88
16 Mark Waser — Poster: Safety and Morality Require the Recognition of Self-Improving Machines as Moral/Justice Patients and Agents 93
17 Damien P. Williams — Poster: Strange Things Happen at the One Two Point: The Implications of Autonomous Created Intelligence in Speculative Fiction Media 98

Moral Agency, Moral Responsibility, and Artefacts: What Existing Artefacts Fail to Achieve (and Why), and Why They, Nevertheless, Can (and Do!) Make Moral Claims Upon Us

Joel Parthemore and Blay Whitby

Abstract. This paper follows directly from our forthcoming paper in International Journal of Machine Consciousness, where we discuss the requirements for an artefact to be a moral agent and conclude that the artefactual question is ultimately a red herring. As we did in the earlier paper, we take moral agency to be that condition in which an agent can, appropriately, be held responsible for her actions and their consequences. We set a number of stringent conditions on moral agency. A moral agent must be embedded in a cultural and specifically moral context, and embodied in a suitable physical form. It must be, in some substantive sense, alive. It must exhibit self-conscious awareness: who does the "I" who thinks "I" think that "I" is? It must exhibit a range of highly sophisticated conceptual abilities, going well beyond what the likely majority of conceptual agents possess: not least that it must possess a well-developed moral space of reasons. Finally, it must be able to communicate its moral agency through some system of signs: a "private" moral world is not enough. It is not enough on our usage that an agent does the appropriate things: i.e., produces the correct consequences. It must do so for the appropriate reasons and using the appropriate means. Otherwise, it may be appropriate – even morally required in some circumstances (see Section 5) – to treat the agent, at least to some limited extent, as if it were a moral agent / possessed moral agency; nevertheless, that agent will not be a moral agent. This alignment of motivations, means, and consequences for attributing moral agency matches an alignment of motivations, means, and consequences we see (pace the utilitarians and most other consequentialists) as essential to "doing the morally right thing" – though any further discussion of that last point is necessarily beyond the scope of this paper. After reviewing these conditions and pouring cold water on a number of recent claims for having achieved "minimal" machine consciousness, we turn our attention to a number of existing and, in some cases, commonplace artefacts that lack moral agency yet nev-

In [28], we set out a number of conceptual requirements for possessing moral agency as well as grounds for appropriately attributing it, as a way of addressing the claims and counterclaims over so-called artificial morality and machine consciousness. Our conclusion was that concerns over artefactual moral agency and consciousness were useful for initiating discussion but ultimately a distraction from the bigger question of what it takes for any agent to be a conscious moral agent.
Recommended publications
  • Man As 'Aggregate of Data'
    AI & SOCIETY, https://doi.org/10.1007/s00146-018-0852-6, OPEN FORUM. Man as 'aggregate of data': What computers shouldn't do. Sjoukje van der Meulen · Max Bruinsma. Received: 4 October 2017 / Accepted: 10 June 2018. © The Author(s) 2018

    Abstract: Since the emergence of the innovative field of artificial intelligence (AI) in the 1960s, the late Hubert Dreyfus insisted on the ontological distinction between man and machine, human and artificial intelligence. In the different editions of his classic and influential book What computers can't do (1972), he posits that an algorithmic machine can never fully simulate the complex functioning of the human mind—not now, nor in the future. Dreyfus' categorical distinctions between man and machine are still relevant today, but their relation has become more complex in our increasingly data-driven society. We, humans, are continuously immersed within a technological universe, while at the same time ubiquitous computing, in the words of computer scientist Mark Weiser, "forces computers to live out here in the world with people" (De Souza e Silva in Interfaces of hybrid spaces. In: Kavoori AP, Arceneaux N (eds) The cell phone reader. Peter Lang Publishing, New York, 2006, p 20). Dreyfus' ideas are therefore challenged by thinkers such as Weiser, Kevin Kelly, Bruno Latour, Philip Agre, and Peter-Paul Verbeek, who all argue that humans are much more intrinsically linked to machines than the original dichotomy suggests—they have evolved in concert. Through a discussion of the classical concepts of individuum and 'authenticity' within Western civilization, this paper argues that within the ever-expanding data-sphere of the twenty-first century, a new concept of man as 'aggregate of data' has emerged, which further erodes and undermines the categorical distinction between man and machine.
  • Communication and Artificial Intelligence: Opportunities and Challenges for the 21st Century
    communication +1, Volume 1, Issue 1: Futures of Communication, Article 1, August 2012. Communication and Artificial Intelligence: Opportunities and Challenges for the 21st Century. David J. Gunkel, Northern Illinois University, [email protected]

    Abstract: This essay advocates for a significant reorientation and reconceptualization of communication studies in order to accommodate the opportunities and challenges introduced by increasingly intelligent machines, autonomous decision making systems, and smart devices. Historically the discipline of communication has accommodated new technology by transforming these innovations into a medium of human interaction and message exchange. With the computer, this transaction is particularly evident with the development of computer-mediated communication (CMC) in the latter half of the 20th century. In CMC, the computer is understood and investigated as a more-or-less neutral channel of message transfer and instrument of human interaction. This formalization, although not necessarily incorrect, neglects the fact that the computer, unlike previous technological advancements, also occupies the position of participant in communicative exchanges. Evidence of this is already available in the science of AI and has been explicitly described by some of the earliest writings on communication and the computer. The essay therefore 1) demonstrates that the CMC paradigm, although undeniably influential and successful, is insufficient and no longer tenable and 2) argues that communication studies needs to rework its basic framework in order to address and respond to the unique technological challenges and opportunities of the 21st century. Keywords: Communication, Artificial Intelligence, Alan Turing, Media Studies, Computer Mediated Communication, Computers

    Introduction: In a now well-known and often reproduced New Yorker cartoon by Peter Steiner, two dogs sit in front of an Internet-connected computer.
  • Can Social Robots Qualify for Moral Consideration? Reframing the Question About Robot Rights
    Information, Article: Can Social Robots Qualify for Moral Consideration? Reframing the Question about Robot Rights. Herman T. Tavani, Department of Philosophy, Rivier University, Nashua, NH 03060, USA; [email protected]. Received: 25 February 2018; Accepted: 22 March 2018; Published: 29 March 2018

    Abstract: A controversial question that has been hotly debated in the emerging field of robot ethics is whether robots should be granted rights. Yet, a review of the recent literature in that field suggests that this seemingly straightforward question is far from clear and unambiguous. For example, those who favor granting rights to robots have not always been clear as to which kinds of robots should (or should not) be eligible; nor have they been consistent with regard to which kinds of rights—civil, legal, moral, etc.—should be granted to qualifying robots. Also, there has been considerable disagreement about which essential criterion, or cluster of criteria, a robot would need to satisfy to be eligible for rights, and there is ongoing disagreement as to whether a robot must satisfy the conditions for (moral) agency to qualify either for rights or (at least some level of) moral consideration. One aim of this paper is to show how the current debate about whether to grant rights to robots would benefit from an analysis and clarification of some key concepts and assumptions underlying that question. My principal objective, however, is to show why we should reframe that question by asking instead whether some kinds of social robots qualify for moral consideration as moral patients. In arguing that the answer to this question is "yes," I draw from some insights in the writings of Hans Jonas to defend my position.
  • “The Problem of the Question About Animal Ethics” by Michal Piekarski
    J Agric Environ Ethics, DOI 10.1007/s10806-016-9627-6, REVIEW PAPER. Response to "The Problem of the Question About Animal Ethics" by Michal Piekarski. Mark Coeckelbergh · David J. Gunkel. Accepted: 4 July 2016. © The Author(s) 2016. This article is published with open access at Springerlink.com

    Abstract: In this brief article we reply to Michal Piekarski's response to our article 'Facing Animals' published previously in this journal. In our article we criticized the properties approach to defining the moral standing of animals, and in its place proposed a relational and other-oriented concept that is based on a transcendental and phenomenological perspective, mainly inspired by Heidegger, Levinas, and Derrida. In this reply we question and problematize Piekarski's interpretation of our essay and critically evaluate "the ethics of commitment" that he offers as an alternative. Keywords: Animal ethics · Moral standing · Other · Moral language · Virtue ethics · Transcendental methodology · Essentialism · Heidegger · Levinas

    We thank Michal Piekarski for his thoughtful review and response to our article 'Facing Animals' (Coeckelbergh and Gunkel 2014), which will likely stimulate further discussion about the moral standing of animals and related issues. However, in spite of the positioning of his article as a polemic and as being in disagreement with ours, we believe that the arguments made are similar and that Piekarski shares our critique of the properties approach to deciding the question of moral standing and our interest in questioning the hegemony of this procedure and the anthropocentric forms of ethics it has informed and enabled.
  • Responsibility and Robot Ethics: a Critical Overview
    Philosophies, Article: Responsibility and Robot Ethics: A Critical Overview. Janina Loh, Philosophy of Media and Technology, University of Vienna, 1010 Vienna, Austria; [email protected]. Received: 13 July 2019; Accepted: 23 October 2019; Published: 8 November 2019

    Abstract: This paper has three concerns: first, it represents an etymological and genealogical study of the phenomenon of responsibility. Secondly, it gives an overview of the three fields of robot ethics as a philosophical discipline and discusses the fundamental questions that arise within these three fields. Thirdly, it explains how responsibility is spoken about, and how responsibility is attributed in general, within these three fields of robot ethics. As a philosophical paper, it presents a theoretical approach and no practical suggestions are made as to which robots should bear responsibility under which circumstances or how guidelines should be formulated in which a responsible use of robots is outlined. Keywords: robots; responsibility; moral agency; moral patiency; inclusive approaches

    1. Introduction: It is currently assumed that technological developments are radically changing our understanding of the concept of and the possibilities of ascribing responsibility. The assumption of a transformation of responsibility is fed on the one hand by the fundamental upheavals in the nature of 'the' human being, which are attributed to the development of autonomous, self-learning robots. On the other hand, one speaks of radical paradigm shifts and a corresponding transformation of our understanding of responsibility in the organizational forms of our social, political, and economic systems due to the challenges posed by robotization, automation, digitization, and Industry 4.0.
  • Artificial Consciousness and Artificial Ethics: Between Realism and Social-Relationism
    Artificial Consciousness and Artificial Ethics: Between Realism and Social-Relationism. Steve Torrance, School of Engineering and Informatics, University of Sussex, Falmer, Brighton, BN1 9QJ, UK; School of Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, UK. Email: [email protected]. Revised 20 June 2013

    Abstract: I compare a 'realist' with a 'social-relational' perspective on our judgments of the moral status of artificial agents (AAs). I develop a realist position according to which the moral status of a being - particularly in relation to moral patiency attribution - is closely bound up with that being's ability to experience states of conscious satisfaction or suffering (CSS). For a realist, both moral status and experiential capacity are objective properties of agents. A social-relationist denies the existence of any such objective properties in the case of either moral status or consciousness, suggesting that the determination of such properties rests solely upon social attribution or consensus. A wide variety of social interactions between us and various kinds of artificial agent will no doubt proliferate in future generations, and the social-relational view may well be right that the appearance of CSS-features in such artificial beings will make moral-role attribution socially prevalent in human-AA relations. But there is still the question of what actual CSS states a given AA is actually capable of undergoing, independently of the appearances. This is not just a matter of changes in the structure of social existence that seem inevitable as human-AA interaction becomes more prevalent. The social world is itself enabled and constrained by the physical world, and by the biological features of living social participants.
  • The Machine Question: Critical Perspectives on AI, Robots, and Ethics. MIT Press, Cambridge, MA, 2012, 272 pp, ISBN-10: 0-262-01743-1, ISBN-13: 978-0-262-01743-5
    Ethics Inf Technol (2013) 15:235–238, DOI 10.1007/s10676-012-9305-y, BOOK REVIEW. David J. Gunkel: The machine question: critical perspectives on AI, robots, and ethics. MIT Press, Cambridge, MA, 2012, 272 pp, ISBN-10: 0-262-01743-1, ISBN-13: 978-0-262-01743-5. Mark Coeckelbergh. Published online: 3 November 2012. © Springer Science+Business Media Dordrecht 2012

    Are machines worthy of moral consideration? Can they be included in the moral community? What is the moral status of machines? Can they be moral agents, or (at least) moral patients? Can we create "moral machines"? In the past decade, there has been a growing interest in these questions, especially within the fields of what some call "robot ethics" and others "machine ethics". David Gunkel's The Machine Question, however, differs from much of the existing literature since it does not aim to offer answers to these particular questions. Instead of arguing for a particular position in the debate, the author attends to the question itself.

    … words "who" and "what". But he also shows that current attempts to end this exclusion are highly problematic. In the first chapter the author shows many of the complications with arguments for including machines as moral agents. For example, he shows not only that certain concepts such as consciousness (p. 55) and personhood (p. 90) are problematic—this is generally acknowledged by philosophers across the spectrum—but also that there are epistemological problems involved with ascribing these properties to other entities. In particular, as Dennett and Derrida point out, participants in the debate make a 'leap …
  • Caring for Robotic Care-Givers
    The Rights of Machines—Caring for Robotic Care-Givers. David J. Gunkel

    Abstract. Work in the field of machine medical ethics, especially as it applies to healthcare robots, generally focuses attention on controlling the decision making capabilities and actions of autonomous machines for the sake of respecting the rights of human beings. Absent from much of the current literature is a consideration of the other side of this issue. That is, the question of machine rights or the moral standing of these socially situated and interactive technologies. This chapter investigates the moral situation of healthcare robots by examining how human beings should respond to these artificial entities that will increasingly come to care for us. A range of possible responses will be considered bounded by two opposing positions. We can, on the one hand, deal with these mechanisms by deploying the standard instrumental theory of technology, which renders care-giving robots nothing more than tools and …

    … about how the existence of robots may positively or negatively affect the lives of care recipients" [5]. Absent from much of the current literature, however, is a consideration of the other side of this issue, namely the moral status and standing of these machines. Unlike the seemingly cold and rather impersonal industrial robots that have been successfully developed for and implemented in manufacturing, transportation, and maintenance operations, home healthcare robots will occupy a unique social position and "share physical and emotional spaces with the user" [6]. In providing care for us, these machines will take up residence in the home and will be involved in daily personal and perhaps even intimate interactions (i.e. monitoring, feeding, bathing, mobility, and companionship). For this reason, it is reasonable to inquire about the social status and moral standing of these technologies.
  • The Other Question: Can and Should Robots Have Rights?
    Ethics Inf Technol, DOI 10.1007/s10676-017-9442-4, ORIGINAL PAPER. The other question: can and should robots have rights? David J. Gunkel. © The Author(s) 2017. This article is an open access publication.

    Abstract: This essay addresses the other side of the robot ethics debate, taking up and investigating the question "Can and should robots have rights?" The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. Second, capitalizing on this verbal distinction, it is possible to identify four modalities concerning social robots and the question of rights. The second section will identify and critically assess these four modalities as they have been deployed and developed in the current literature.

    … (2014, vii) identifies as the "control problem," for what Anderson and Anderson (2011) develop under the project of Machine Ethics, and what Allen and Wallach (2009, p. 4) propose in Moral Machines: Teaching Robots Right from Wrong under the concept "artificial moral agent," or AMA. And it holds for most of the work that was assembled for and recently presented at Robophilosophy 2016 (Seibt et al. 2016). The organizing question of the conference—"What can and should social robots do?"—is principally a question about the possibilities and limits of machine action or agency. But this is only one-half of the story. As Floridi (2013, pp. 135–136) reminds us, moral situations involve at least two interacting components—the initiator of the action or …
  • Introduction
    Introduction

    One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. Although initially limited to "other men," the practice of ethics has developed in such a way that it continually challenges its own restrictions and comes to encompass what had been previously excluded individuals and groups—foreigners, women, animals, and even the environment. Currently, we stand on the verge of another fundamental challenge to moral thinking. This challenge comes from the autonomous, intelligent machines of our own making, and it puts in question many deep-seated assumptions about who or what constitutes a moral subject. The way we address and respond to this challenge will have a profound effect on how we understand ourselves, our place in the world, and our responsibilities to the other entities encountered here.

    Take for example one of the quintessential illustrations of both the promise and peril of autonomous machine decision making, Stanley Kubrick's 2001: A Space Odyssey (1968). In this popular science fiction film, the HAL 9000 computer endeavors to protect the integrity of a deep-space mission to Jupiter by ending the life of the spacecraft's human crew. In response to this action, the remaining human occupant of the spacecraft terminates HAL by shutting down the computer's higher cognitive functions, effectively killing this artificially intelligent machine. The scenario obviously makes for compelling cinematic drama, but it also illustrates a number of intriguing and important philosophical
  • Assessing Agency, Moral and Otherwise: Beyond the Machine Question
    Symposium on Assessing Agency, Moral and Otherwise: Beyond the Machine Question. In conjunction with the 2018 Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2018), 4th April 2018

    Rilkean Memories for a Robot. Antonio Chella

    Abstract. The paper discusses the role of Rilkean memories, recently introduced by Rowlands, in the building of the autobiographic self of a robot.

    1 INTRODUCTION

    It has been debated what characteristics an agent must have to be considered morally responsible for her actions. A generally recognized characteristic for moral agency is the capability for the agent to have a sense of self. According to this line of thinking, it has been debated whether a robot could ever be a morally responsible agent (see Gunkel [1], 46, for a discussion). With the term "robot" we consider a humanoid robot, i.e., a mechanical entity with a human-like body shape, equipped with …

    … Rowlands discusses in detail how this form of memory is typically embodied and embedded; "it is a form of involuntary, autobiographical memory that is neither implicit nor explicit, neither declarative nor procedural, neither episodic nor semantic, and not Freudian" ([2], 141). Rowlands points out that Rilkean memories of a person are responsible for the style of that person. An example is the motion style of the person: a person's habit may be to walk on the left side of a path because of a traumatic episode during her life. The person may not be able to remember the traumatic episode explicitly, but the event entered into her blood, i.e., it becomes a part of her style. Then, a Rilkean memory derives from episodic memory as a transformation of the act of remembering an episode into a behavioral and bodily disposition, even when the content of the episode is lost ([3], 73).
  • The Moral Consideration of Artificial Entities: a Literature Review
    Science and Engineering Ethics (2021) 27:53, https://doi.org/10.1007/s11948-021-00331-8, REVIEW. The Moral Consideration of Artificial Entities: A Literature Review. Jamie Harris · Jacy Reese Anthis. Received: 23 February 2021 / Accepted: 20 July 2021 / Published online: 9 August 2021. © The Author(s) 2021

    Abstract: Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on artificial entities and concern for the effects on human society. Beyond the conventional consequentialist, deontological, and virtue ethicist ethical frameworks, some scholars encourage "information ethics" and "social-relational" approaches, though there are opportunities for more in-depth ethical research on the nuances of moral consideration of artificial entities. There is limited relevant empirical data collection, primarily in a few psychological studies on current moral and social attitudes of humans towards robots and other artificial entities. This suggests an important gap for psychological, sociological, economic, and organizational research on how artificial entities will be integrated into society and the factors that will determine how the interests of artificial entities are considered. Keywords: Artificial intelligence · Robots · Rights · Moral consideration · Ethics · Philosophy of technology

    Introduction: Recent decades have seen a substantial increase in human interaction with artificial entities.