Do We Adopt the Intentional Stance Toward Humanoid Robots?

Serena Marchesi¹, Davide Ghiglino¹,², Francesca Ciardo¹, Ebru Baykara¹, Agnieszka Wykowska¹*

¹Istituto Italiano di Tecnologia, Genoa, Italy
²DIBRIS, Università di Genova

*Corresponding author:
Agnieszka Wykowska
Istituto Italiano di Tecnologia
Centre for Human Technologies
Via E. Melen, 83
16040 Genova, Italy
e-mail: [email protected]

Abstract

In daily social interactions, we explain and predict the behaviour of other humans by referring to mental states that we assume underlie their behaviour. In other words, we adopt the intentional stance (Dennett, 1971) towards other humans. However, today we also routinely interact with artificial agents, from Apple's Siri to GPS navigation systems, and in the near future we will casually interact with robots. Since we consider artificial agents to have no mental states, we might not adopt the intentional stance towards them, and this might result in a failure to attune socially with them. This paper addresses the question of whether adopting the intentional stance towards artificial agents is possible. We propose a new method to examine whether people have adopted the intentional stance towards an artificial agent (a humanoid robot). The method consists of a questionnaire that probes participants' stance by asking them to rate the likelihood of two explanations (a mentalistic and a mechanistic one) of the behaviour of a robot depicted in a naturalistic scenario (a sequence of photographs). The results of the first study conducted with this questionnaire showed that although the explanations were somewhat biased towards the mechanistic stance, a substantial number of mentalistic explanations were also given. This suggests that it is possible to induce adoption of the intentional stance towards artificial agents. Addressing this question is important not only for the theoretical purpose of understanding the mechanisms of human social cognition, but also for the practical endeavour of designing social robots.

Introduction

Over the last decades, new technologies have inexorably entered our homes, becoming an integral part of our everyday life. Our constant exposure to digital devices, some of them seemingly "smart", makes interaction with technology increasingly smooth and dynamic from generation to generation (Baack, Brown & Brown, 1991; Zickuhr & Smith, 2012; Gonzàles, Ramirez & Viadel, 2012). Some studies suggest that this exposure is only at its beginning: it seems likely that technologically sophisticated artifacts, such as humanoid robots, will soon be present in our private lives as assistive technologies and housework helpers (for a review, see Stone et al., 2017).

Despite the technological habituation we are (and presumably will continue to be) undergoing, little is known about the social cognition processes we engage during interactions with machines, and with humanoid robots specifically. Several authors have theorized that humans possess a natural tendency to anthropomorphize what they do not fully understand. Epley et al. (2007), for instance, defined anthropomorphism as the attribution of human-like characteristics and properties to non-human agents and/or objects, independently of whether they are imaginary or real.
The likelihood of spontaneous attribution of anthropomorphic characteristics depends on three major conditions (Epley et al., 2007; Waytz et al., 2010): first, the availability of characteristics that activate existing knowledge we have about humans; second, the need for social connection and for efficient interaction with the environment; and finally, individual traits (such as the need for control) or circumstances (e.g., loneliness, lack of bonds with other humans). In Volume I of the Dictionary of the History of Ideas, Agassi argues that anthropomorphism is a form of parochialism that allows us to project our limited knowledge onto a world we do not fully understand. Other authors have claimed that we, as humans, are the foremost experts in what it means to be human, but that we have no phenomenological knowledge of what it means to be non-human (Nagel, 1974; Gould, 1996). For this reason, when we interact with entities of which we lack specific knowledge, we commonly choose "human" models to predict their behavior.

This stance might also be taken when we interact with technology whose operation we cannot understand. Almost everyone has seen someone shouting at a malfunctioning printer, or imploring their phone to work, as if they were interacting with another person. Thinking in rational terms, no healthy adult would ever consider his or her smartphone a sentient being; yet, sometimes, we behave as if the technological devices around us possessed intentions, desires and beliefs.

This has an important theoretical implication: from this perspective, it seems that others' mental states are a matter of external attribution rather than a property of an entity. Waytz et al. (2010) support this hypothesis and consider the attribution of mental states to be the result of an observer's perception of an observed entity, as the sum of motivational, cognitive, physical and situational factors. Other authors found that several cues can foster the attribution of mental states to non-human entities, from motion (Heider & Simmel, 1944; Scholl & Tremoulet, 2000) and the physical properties of an object (Morewedge et al., 2007) to the personality characteristics of the observer (Epley et al., 2007). Furthermore, it seems that the tendency to attribute mental states is ubiquitous and unavoidable for human beings (Parlangeli et al., 2012). Anthropomorphizing seems to be a default mode that we use to interpret, and to interact with, the agents with which we share (or are going to share) our environment (Epley et al., 2007; Waytz et al., 2010; Wiese et al., 2017), unless other ways of interpreting an entity's behavior are available. For example, if a coffee machine is not working because it is not connected to a power socket, we would certainly not explain its failure to deliver coffee with reference to its bad intentions.

Already in the seventies, Dennett proposed that humans use different strategies to explain and predict the behavior of other entities, be they objects, artifacts or conspecifics (Dennett, 1971, 1987). Dennett defines three main strategies, or "stances", that humans use. Consider chemists or physicists in their laboratory, studying a certain kind of molecule: they try to explain (or predict) the molecules' behavior through the laws of physics. This is what Dennett calls the physical stance.
There are cases in which the laws of physics are an inadequate (or not the most efficient) way to predict the behavior of a system. For example, when we drive a car, we can fairly reliably predict that its speed will decrease when we push the brake pedal, because the car is designed that way. To make this kind of prediction, we do not need to know the precise physical mechanisms governing all the atoms and molecules in the car's braking system; it is sufficient to rely on our experience and knowledge of how the car is designed. This is what Dennett describes as the design stance. Dennett also proposes a third strategy, the intentional stance, which relies on the ascription of beliefs, desires, intentions and, more broadly, mental states to a system. Dennett (1987) defines any system whose behaviour can be predicted by others with reference to mental states as an intentional system. Epistemologically, Dennett refers to mental states in a realist manner: he treats mental states as an objective phenomenon. This is because adopting the intentional stance is the best strategy only towards truly intentional systems (the "true believers"), while for other systems other strategies might work better. What makes Dennett an interpretationist, however, is that on his account mental states can be discerned only from the point of view of the one who adopts the intentional stance as a predictive strategy, and only to the extent that this strategy is successful. In other words, although beliefs are a real phenomenon, it is the success of the predictive strategy that confirms their existence.

Interestingly, adopting the intentional stance towards an artefact (such as a humanoid robot) does not necessarily require that the artefact itself possesses true intentionality. Adopting the intentional stance might nevertheless be a useful, or default (as argued above), way to explain a robot's behavior. Perhaps we do not attribute mental states to robots, but we treat them as if they had mental states. Breazeal (1999) highlighted that this process does not require endowing machines with mental states in the human sense; rather, the user might be able to intuitively and reliably explain and predict the robot's behavior in these terms. The intentional stance is a powerful tool for interpreting other agents' behavior, as it allows behavioral patterns to be interpreted in a general and flexible way. Specifically, flexibility in changing predictions about others' intentions is a pivotal characteristic of humans. Adopting the intentional stance is effortless for humans, but of course it is not a perfect strategy: if we realize that it is not the best stance for making predictions, we can fall back on the design stance or even the physical stance. The choice of which stance to adopt is entirely free and might be context-dependent: it is a matter of which explanation works best.
