Interactional Morality
이민하 MINHA LEE

INTERACTIONAL MORALITY: TECHNOLOGY AS OUR MORAL MIRROR

MINHA LEE was born in Seoul, Korea. Her first research was on the puzzling phenomenon called "why rockos eat no chipos?" She has since then moved on to investigate how technology shapes us as moral beings, and whether our interactions with digital entities reveal our moral selves in a new light.

Minha Lee
Interactional Morality: Technology as Our Moral Mirror

The work in this dissertation has been carried out at the Human-Technology Interaction Group, Eindhoven University of Technology.

Copyright © 2021 by Minha Lee. All rights reserved.
A catalogue record for this book is available from the Eindhoven University of Technology library.
ISBN: 978-94-6423-115-1
NUR: 600
Keywords: Morality, Human-computer interaction
Cover design: Minha Lee
Printed by: ProefschriftMaken || www.proefschriftmaken.nl

Interactional Morality: Technology as Our Moral Mirror

DISSERTATION

to obtain the degree of doctor at the Eindhoven University of Technology, on the authority of the rector magnificus prof.dr.ir. F.P.T. Baaijens, for a committee appointed by the Doctorate Board, to be defended in public on Tuesday, 16 February 2021, at 16:00.

by Minha Lee

born in Seoul, Republic of Korea

This dissertation has been approved by the promotors, and the composition of the doctoral committee is as follows:

chair: prof. dr. I.E.J. Heynderickx
1st promotor: prof. dr. W.A. IJsselsteijn
2nd promotor: prof. dr. ir. Y.A.W. de Kort
copromotor: dr. L.E. Frank
members: prof. dr. D.K.J. Heylen (Universiteit Twente)
prof. dr. N. Krämer (Universität Duisburg-Essen)
prof. dr. P. Markopoulos
prof. dr. S. Vallor (The University of Edinburgh)

The research or design described in this dissertation was carried out in accordance with the TU/e Code of Scientific Conduct.

Abstract

This dissertation concerns how technology shapes us as moral beings, and whether our interactions with digital entities reveal our moral selves in a new light. Technological agents, such as chatbots and robots, are proliferating in our everyday settings. Accordingly, how these technologies are perceived to have emotional and cognitive abilities during social interactions is receiving increasing research attention. In comparison, whether and how people perceive these agents as having moral status is sparsely investigated. To fill this research gap, the studies herein examine how interactive technologies' displays of emotion and cognition relate to how we perceive them to have moral capacities. A diverse set of morally relevant interactions is considered, ranging from negotiations and debates to taking care of a technological agent. The motivating assumption is that our own and technologies' moral development becomes intertwined through our perception of technologies' moral status across different situations. Research on artificial agents as extensions of who we are morally can therefore shape the development of ethics through human-computer interaction. Accordingly, the following question was explored: how do a machine's artificial cognitive and affective capacities in morally relevant dyadic interactions relate to (1) a human partner's self-perception of their own moral bearing and (2) the perception of the agent's moral status?
To answer this, interrelated studies combined qualitative and quantitative methods to observe the social, technical, and moral dimensions of technological agents. The results highlight three factors that matter for people to see technology as morally endowed: (1) an agent's perceived mind; (2) the agent's displayed affective capacity, which, more than its cognitive capacity, was crucial for people's perception of whether a machine can be moral at all; and (3) the change of this perception over time through morally relevant interactions. The concept of interactional morality is introduced based on the presented research on how people undergo morally pertinent experiences, like debates, with artificial beings that display morally relevant behavior, such as sharing moral opinions or displaying suffering in response to people's acts. Interactional morality is an interplay between a person and an artificial agent within a particular morally relevant situation. In this view, a capacity like agency is demonstrated through an interaction within a dyad, e.g., a person exercising agency against the artificial agent during a game, rather than being a trait solely intrinsic to the person or the artificial agent. Similarly, being moral in a dyadic interaction describes how a person or an agent acted morally or immorally towards the other, more so than how a person or an agent is intrinsically moral as a trait. Building on this, the studies included herein collectively foreshadow an emerging era. AI becomes our moral mirror when we envision what we can do for AI as a way of asking what we can do for ourselves; AI will serve us better when we serve it as an extension of us, which is a different starting point than asking what AI can do for us. Designing technology to behave as morally as a human can (or should) requires a radical shift to interactional morality with technological others, starting with examining how their artificial minds and emotions stand to influence our own moral decisions, emotions, and self-perception.

Contents

1 Introduction 1
1.1 The merging of the social-technical, and its gap from the moral 3
1.2 How do we relate to artificial agents? 5
1.3 Dyadic interactions 6
1.4 Levels of abstraction 7
1.5 Reactive attitudes: Emotions in moral interactions 9
1.6 Moral identity 12
1.7 Recapitulation 13
1.8 Summaries of empirical chapters 14
1.9 Interactional morality 17
1.10 Conclusion 18
2 Where is Vincent? Artificial emotions and the real self 19
2.1 Introduction 19
2.2 Literature review 21
2.3 Methodology 26
2.4 Story: Where is Vincent? 27
2.5 Results 31
2.6 Discussion 40
2.7 Conclusion 44
3 Mind perception: Dimensions of agency and patiency 45
3.1 Introduction 45
3.2 Background 46
3.3 Study 1: Dictator and Ultimatum Games 53
3.4 Study 2: Negotiation 57
3.5 General discussion 63
3.6 Conclusion 66
4 "You're a robot, so you don't feel so much" 69
4.1 Introduction 69
4.2 Related work 70
4.3 Methods 79
4.4 Results 83
4.5 Discussion 90
4.6 Conclusion and future work 95
5 People may punish, but not blame artificial agents 99
5.1 Introduction 99
5.2 Background 100
5.3 Study 1: An online study with American participants 106
5.4 Study 2: An online study with Dutch participants 110
5.5 Study 3: A lab study with Dutch participants 113
5.6 Discussion 117
5.7 Conclusion 120
6 Caring for Vincent: A Chatbot for Self-compassion 123
6.1 Introduction 123
6.2 Background 125
6.3 Study 1: Ten minutes with Vincent 129
6.4 Study 2: Two weeks with Vincent 133
6.5 Results 138
6.6 Conclusion 152
7 Reflections 153
7.1 Introduction 154
7.2 Summary 155
7.3 Setting the scene 162
7.4 Interactional morality 167
7.5 Merging of self and the other: Compassion 170
A Measurements 205
A.1 Chapter 3 205
A.2 Chapter 4 211
A.3 Chapter 5 213
A.4 Chapter 6 215
B List of publications 223
C Summary 227
D Biography 231
E Acknowledgements 233

List of Figures

1.1 Available since 2019, the temi robot is equipped with a touchscreen, sensors, and Alexa ($1,999, https://www.robotemi.com/). 1
1.2 The social-technical, moral-technical, and social-moral dimensions. 3
1.3 The merging of the social-technical, and its gap from the moral. 4
1.4 Google Assistant (top) and Siri (bottom) responding to three expressives (February, 2020). 11
2.1 Furby - A popular robotic toy of the 90's by Tiger Electronics and later, Hasbro (https://furby.hasbro.com/en-us). 20
2.2 Tamagotchi - A hand-held 90's toy by Bandai that showed a digital pet one can take care of (https://tamagotchi.com/building-lifelong-tamagotchi-friends/). 20
2.3 First- and second-order emotions 33
3.1 Agency and patiency in a social exchange. 48
3.2 Negotiation interface in Study 2 57
3.3 Agents' standardized scores across DG, UG, and negotiation as outcomes, over low and high agency and patiency. 60
4.1 Experimental set-up: Transparency condition is shown with an additional screen that had mental state diagrams with text. 81
4.2 The robot's dialogue states. 81
5.1 Roomba by iRobot is the name for robotic vacuum cleaners of different categories that can autonomously vacuum the floor (https://www.irobot.com/roomba). 99
5.2 Nao robot in a video that people had to watch to answer questions. 107
5.3 Participants sat in front of a Nao robot during the experiment and answered survey questions on the computer behind the robot. 113
5.4 The average perceived agency and patiency from Studies 1 through 3, across no emotion and emotion conditions. 117
6.1 Introduction stage with Vincent in Study 1 131
6.2 Care-receiving Vincent in Study 2 134
A.1 IOS 210

List of Tables

1.1 An overview of all studies in the dissertation. 15
3.1 Agent types and excerpts from their descriptions and dialogues in Study 2.