AISB/IACAP World Congress 2012
AISB/IACAP World Congress 2012
Birmingham, UK, 2–6 July 2012

THE MACHINE QUESTION: AI, ETHICS AND MORAL RESPONSIBILITY

David J. Gunkel, Joanna J. Bryson, and Steve Torrance (Editors)

Published by The Society for the Study of Artificial Intelligence and Simulation of Behaviour
http://www.aisb.org.uk
ISBN 978-1-908187-21-5

Foreword from the Congress Chairs

For the Turing year 2012, AISB (The Society for the Study of Artificial Intelligence and Simulation of Behaviour) and IACAP (The International Association for Computing and Philosophy) merged their annual symposia/conferences to form the AISB/IACAP World Congress. The congress took place 2–6 July 2012 at the University of Birmingham, UK. The Congress was inspired by a desire to honour Alan Turing, and by the broad and deep significance of Turing's work to AI, the philosophical ramifications of computing, and philosophy and computing more generally. The Congress was one of the events forming the Alan Turing Year.

The Congress consisted mainly of a number of collocated Symposia on specific research areas, together with six invited Plenary Talks. All papers other than the Plenaries were given within Symposia. This format is perfect for encouraging new dialogue and collaboration both within and between research areas.

This volume forms the proceedings of one of the component symposia. We are most grateful to the organizers of the Symposium for their hard work in creating it, attracting papers, doing the necessary reviewing, defining an exciting programme for the symposium, and compiling this volume. We also thank them for their flexibility and patience concerning the complex matter of fitting all the symposia and other events into the Congress week.
John Barnden (Computer Science, University of Birmingham), Programme Co-Chair and AISB Vice-Chair
Anthony Beavers (University of Evansville, Indiana, USA), Programme Co-Chair and IACAP President
Manfred Kerber (Computer Science, University of Birmingham), Local Arrangements Chair

Foreword for The Machine Question

One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. Although initially limited to "other men," the practice of ethics has developed in such a way that it continually challenges its own restrictions and comes to encompass previously excluded individuals and groups—foreigners, women, animals, and even the environment. Currently, we stand on the verge of another fundamental challenge to moral thinking. This challenge comes from the autonomous, intelligent machines of our own making, and it puts in question many deep-seated assumptions about who or what constitutes a moral subject. The way we address and respond to this challenge will have a profound effect on how we understand ourselves, our place in the world, and our responsibilities to the other entities encountered here.

We organised the symposium whose proceedings you find here because we believe it is urgent that this new development in moral thinking be advanced in the light and perspective of ethics/moral philosophy, a discipline that reflects thousands of years of effort by our species' civilisations. Fundamental philosophical questions include:

• What kind of moral claim might an intelligent or autonomous machine have?
• Is it possible for a machine to be a legitimate moral agent and/or moral patient?
• What are the philosophical grounds supporting such a claim?
• And what would it mean to articulate and practice an ethics of this claim?

The Machine Question: AI, Ethics and Moral Responsibility seeks to address, evaluate, and respond to these and related questions.
The Machine Question was one of three symposia in the Ethics, Morality, AI and Mind track at the AISB/IACAP 2012 World Congress in honour of Alan Turing. The ethics track also included the symposia Moral Cognition and Theory of Mind and Framework for Responsible Research and Innovation in Artificial Intelligence. We would like to thank all of the contributors to this symposium, the other symposia, the other symposia's organisers and the Congress leaders, particularly John Barnden for his seemingly tireless response to our many queries. We would also like to thank the keynote speakers for the Ethics track, Susan and Michael Anderson, whose talk The Relationship Between Intelligent, Autonomously Functioning Machines and Ethics was given during our symposium, for agreeing to speak at the congress.

The story of this symposium started a little like the old Lorne Greene song Ringo, except that instead of anyone saving anyone's life, Joanna Bryson loaned David Gunkel a laundry token in July of 1986, about a month after both had graduated with first degrees in the liberal arts and independently moved into The Granville Grandeur, a cheap apartment building on Chicago's north side. Two decades later they found themselves on opposite sides not of the law, but of the AI-as-Moral-Subject debate, when Gunkel contacted Bryson about a chapter in his book Thinking Otherwise. The climactic shoot-out took place not between two isolated people, but with the wise advice and able assistance of our third co-editor, Steve Torrance, who volunteered to help us through the AISB process (and its mandated three-editor policy), and among a cadre of scholars who took time to submit to, revise for, and participate in this symposium. We thank every one of them for their contributions and participation, and you for reading these proceedings.

David J. Gunkel (Department of Communication, Northern Illinois University)
Joanna J.
Bryson (Department of Computer Science, University of Bath)
Steve Torrance (Department of Computer Science, University of Sussex)

Contents

1. Joel Parthemore and Blay Whitby — Moral Agency, Moral Responsibility, and Artefacts
2. John Basl — Machines as Moral Patients We Shouldn't Care About (Yet)
3. Benjamin Matheson — Manipulation, Moral Responsibility and Machines
4. Alejandro Rosas — The Holy Will of Ethical Machines
5. Keith Miller, Marty Wolf and Frances Grodzinsky — Behind the Mask: Machine Morality
6. Erica Neely — Machines and the Moral Community
7. Mark Coeckelbergh — Who Cares about Robots?
8. David J. Gunkel — A Vindication of the Rights of Machines
9. Steve Torrance — The Centrality of Machine Consciousness to Machine Ethics
10. Rodger Kibble — Can an Unmanned Drone Be a Moral Agent?
11. Marc Champagne and Ryan Tonkens — Bridging the Responsibility Gap in Automated Warfare
12. Joanna Bryson — Patiency Is Not a Virtue: Suggestions for Co-Constructing an Ethical Framework Including Intelligent Artefacts
13. Johnny Søraker — Is There Continuity Between Man and Machine?
14. David Davenport — Poster: Moral Mechanisms
15. Marie-des-Neiges Ruffo — Poster: The Robot, a Stranger to Ethics
16. Mark Waser — Poster: Safety and Morality Require the Recognition of Self-Improving Machines as Moral/Justice Patients and Agents
17. Damien P. Williams — Poster: Strange Things Happen at the One Two Point: The Implications of Autonomous Created Intelligence in Speculative Fiction Media

Moral Agency, Moral Responsibility, and Artefacts: What Existing Artefacts Fail to Achieve (and Why), and Why They, Nevertheless, Can (and Do!) Make Moral Claims Upon Us

Joel Parthemore and Blay Whitby

Abstract. This paper follows directly from our forthcoming paper in International Journal of Machine Consciousness, where we discuss the requirements for an artefact to be a moral agent and conclude that the artefactual question is ultimately a red herring. As we did in the earlier paper, we take moral agency to be that condition in which an agent can, appropriately, be held responsible for her actions and their consequences. We set a number of stringent conditions on moral agency. A moral agent must be embedded in a cultural and specifically moral context, and embodied in a suitable physical form. It must be, in some substantive sense, alive. It must exhibit self-conscious awareness: who does the "I" who thinks "I" think that "I" is? It must exhibit a range of highly sophisticated conceptual abilities, going well beyond what the likely majority of conceptual agents possess: not least that it must possess a well-developed moral space of reasons. Finally, it must be able to communicate its moral agency through some system of signs: a "private" moral world is not enough. After reviewing these conditions and pouring cold water on a number of recent claims for having achieved "minimal" machine consciousness, we turn our attention to a number of existing and, in some cases, commonplace artefacts that lack moral agency yet nev-

…enough on our usage that an agent does the appropriate things: i.e., produces the correct consequences. It must do so for the appropriate reasons and using the appropriate means. Otherwise, it may be appropriate – even morally required in some circumstances (see Section 5) – to treat the agent, at least to some limited extent, as if it were a moral agent / possessed moral agency; nevertheless, that agent will not be a moral agent. This alignment of motivations, means, and consequences for attributing moral agency matches an alignment of motivations, means, and consequences we see (pace the utilitarians and most other consequentialists) as essential to "doing the morally right thing" – though any further discussion of that last point is necessarily beyond the scope of this paper.

In [28], we set out a number of conceptual requirements for possessing moral agency as well as grounds for appropriately attributing it, as a way of addressing the claims and counterclaims over so-called artificial morality and machine consciousness. Our conclusion was that concerns over artefactual moral agency and consciousness were useful for initiating discussion but ultimately a distraction from the bigger question of what it takes for any agent to be a conscious moral agent.