Emotion Modelling for Social Robots

Ana Paiva, Iolanda Leite and Tiago Ribeiro
INESC-ID and Instituto Superior Técnico, Technical University of Lisbon
Av. Prof. Cavaco Silva, 2744-016, Oeiras, Portugal
{ana.paiva, iolanda.leite, tiago.ribeiro}@inesc-id.pt

Abstract

This chapter describes current advances in emotion modelling for social robots. It begins by contextualising the role of emotions in social robots, considering the concept of the Affective Loop. It then describes a number of elements for the synthesis and expression of emotions through robotic embodiments and provides an overview of the area of emotional adaptation and empathy in social robots.

Keywords: Emotion Modelling, Social Robots, Human-Robot Interaction, Affective Loop.

1. Introduction: Robots in the Affective Loop

The concept of a self-operating machine that resembles and behaves like humans dates back to ancient civilizations. From Homer to Leonardo da Vinci and Isaac Asimov, robots have captured the imagination of philosophers and writers throughout history, and they continue to inspire us. Robotics is a technology that is expected to change the world and the way we live. During the early stages of robotics research, much of the development focused on the usefulness of robots in industrial settings. As robots move beyond these highly controlled settings and laboratory environments and are deployed in homes and social contexts, their ability to interact with humans in ways that resemble human interaction becomes increasingly relevant (Breazeal, 2009). Emotions are essential for that interaction. To portray emotions in robots, researchers need methods to model and express emotions across different embodiments and in distinct manners. Such modelling of emotions allows robots to be placed in the context of the Affective Loop, fostering social interaction.

As defined by Höök (2009), the Affective Loop is the interactive process in which “the user [of the system] first expresses her emotions through some physical interaction involving her body, for example, through gestures or manipulations; and the system then responds by generating affective expression, using for example, colours, animations, and haptics” which “in turn affects the user (mind and body) making the user respond and step-by-step feel more and more involved with the system.”

To establish this Affective Loop between users and robots (Figure 1.1), robots require an affect detection system that recognises, among other states, whether the user is experiencing positive or negative feelings, and a reasoning and action selection mechanism that chooses, at a cognitive level, the most appropriate emotional response to display. The way robots express the intended affective states should be effective (i.e., perceivable by the users), and the actions of the emotional robot will in turn affect the user (the third step of the Affective Loop). The robot perceives the user with the goal of personalising the interaction: it analyses the user’s responses to its various affective expressions and adapts its emotional behaviour to each particular user.

Figure 1.1. Affective Loop of Emotional Robots.

Affective interactions play different roles and serve various purposes in the context of Human-Robot Interaction (HRI). Among others, we can distinguish the following:

1. Give the illusion of life - The design of adaptive emotional behaviour must take particular care to avoid unexpected or unintelligible behaviour.
This problem can be addressed by following a number of guidelines for creating expressive behaviour in robots, which provide robots with the “illusion of life” (Ribeiro & Paiva, 2012). This illusion leads to the user’s “suspension of disbelief,” which increases the perception of social presence and thus renders the robot a believable character (Bates, 1994).

2. Augment engagement - Emotions contribute to engagement in a social interaction context. Engagement, in this context, is defined as “the process by which two (or more) participants establish, maintain and end their perceived connection” (Sidner et al., 2004) and has received increasing attention from the HRI community (Rich et al., 2010). As previous research has highlighted, appropriate displays of affect have a significant effect on the user’s engagement while interacting with social robots (Leite et al., 2012).

3. Augment social presence in the long term - The lack of adaptive emotional behaviour decreases the user’s perception of social presence, especially during long-term interactions (Leite et al., 2009), which in turn renders robots non-believable characters (Bates, 1994). To be perceived as socially present, social robots must not only convey believable affective expressions but also do so in an intelligent and personalised manner, for example, by gradually adapting their affective behaviour to the particular needs and/or preferences of their users.

This chapter discusses the modelling of emotions in robots, not only through explicit computational mechanisms in which emotions are captured but also in terms of adaptation, personalisation and expression, leading our emotional robots to become empathic creatures that can sustain the Affective Loop with the user.

2. Creating Synthetic Emotions in Robots

Robots can convey the illusion of life simply through their physical presence and simple movements. When a robot moves towards a door and suddenly backs up, one may interpret its actions as avoidance and fear. Our perception of a robot’s actions may be biased, or perhaps enriched, by our inclination to suspend our disbelief and see robots as intelligent creatures that act according to their desires, goals and emotions. However, a robot’s behaviour can result from computational mechanisms that do not explicitly capture any of those aspects. Significant work on emotional behaviour in robots has been driven by the fact that simple behaviour may “fool us” into perceiving robots as having “emotions” when, in reality, those “emotions” are not explicitly modelled and simply arise as an emergent effect of specific, simple patterns of behaviour.

When a robot needs to interact with users at a higher level, often using natural language and gestures, and its actions need to be rich enough to convey goal-oriented behaviour, emotions may offer a way to model different responses and thus provide the robot with more believable and task-appropriate behaviour. In those situations, emotions may serve as abstract constructs that simplify the generation of robot behaviours, as described by Leite et al. (2013). In this case, emotion modelling in robots becomes explicit, and specific architectures for such modelling have emerged in recent years. Emotions, modelled explicitly, may affect not only action selection but also other cognitive processes such as reasoning, planning and learning.
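To make this explicit modelling concrete, the following minimal Python sketch shows one pass around the Affective Loop of Figure 1.1: detected user affect is appraised into an explicit internal emotion state, which then biases the selection of an expressive behaviour. The names (EmotionState, appraise, select_expression), the valence/arousal representation and the update rules are illustrative assumptions, not the mechanism of any particular system cited here.

```python
# Minimal sketch of an explicit emotion model inside the Affective Loop.
# All names and numeric constants are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class EmotionState:
    """Explicit affective state, reduced here to valence and arousal in [-1, 1]."""
    valence: float = 0.0
    arousal: float = 0.0

    def decay(self, rate: float = 0.1) -> None:
        # Affective states fade towards neutral when nothing happens.
        self.valence *= (1.0 - rate)
        self.arousal *= (1.0 - rate)


def appraise(state: EmotionState, user_affect: float) -> None:
    """Step 1: detected user affect (-1 negative .. +1 positive) updates
    the robot's own explicit emotion state."""
    state.valence = max(-1.0, min(1.0, state.valence + 0.5 * user_affect))
    state.arousal = max(-1.0, min(1.0, state.arousal + 0.3 * abs(user_affect)))


def select_expression(state: EmotionState) -> str:
    """Step 2: action selection biased by the explicit emotion state,
    mapping it onto one of the robot's expressive behaviours."""
    if state.valence > 0.3:
        return "smile_and_nod"
    if state.valence < -0.3:
        return "concerned_gaze" if state.arousal > 0.3 else "slow_sad_posture"
    return "neutral_idle"


if __name__ == "__main__":
    # One pass around the loop per observation: perceive -> appraise -> express.
    robot_emotion = EmotionState()
    for detected_user_affect in [0.8, 0.2, -0.9, 0.0]:
        appraise(robot_emotion, detected_user_affect)
        print(detected_user_affect, "->", select_expression(robot_emotion))
        robot_emotion.decay()
```

In richer architectures the hand-tuned thresholds above would typically be replaced by appraisal rules or learned mappings, but the perceive-appraise-express structure of the loop remains the same.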
As this area grows, new types of architectures exploring different mental processes (e.g., theory of mind or affect regulation) are also arising, allowing robots to perform better in the world and to interact with humans in a more natural way. In general, emotional architectures for robots (and virtual agents) draw inspiration from the way humans and other species perceive, reason, learn and act upon the world. We may distinguish different types of architectures according to their inspiration (from neurobiological to more psychological or data-driven models); the affective states they try to model (e.g., emotions, moods, personality); the types of processes captured (e.g., appraisal, coping); their integration with other cognitive capabilities; and the expressive power they possess. Most existing architectures are built with different processes and levels, thus extending a generic hybrid model. As argued by Sloman (2002), different types of architectures will support distinct collections of states and processes. A number of these architectures rely on symbolic models to represent the perceptual elements of the world, whereas others take a non-symbolic approach based on neural modelling.

Neurobiological approaches generally take the view that emotions can be modelled through computational constructs associated with structures of the central nervous system, such as the amygdala and the hypothalamus (Arbib, 2004). One of the earlier architectures for emotions in robots was Cathexis (Velazquez, 1998), a computational model of emotions and action selection inspired by neurobiological theories. The architecture integrates drives and emotions in a way that guides the robot’s behaviours and decision making.

Many models capture affective states in an emergent manner, as the pattern of behaviour arising from a variety of different processes embedded in the agent. More explicit modelling is accomplished by representing affective states using a symbolic approach. The classic BDI reference model (Wooldridge, 1995) considers “Beliefs, Desires and Intentions” to be the basic mental attitudes for generating an agent’s intelligent behaviour. This reference model has been extended to capture other mental states, in particular emotions (see, for example, Gratch & Marsella, 2004, and Dias & Paiva, 2005). The majority of these architectures focus primarily on representing emotional states (e.g., quickly active, short and focused).
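As an illustration of how a BDI agent can be extended with such an affective layer, the sketch below adds a set of appraisal-generated emotions to the classic beliefs/desires/intentions triple and lets them bias deliberation. All class and field names, and the toy appraisal rules, are assumptions made for illustration; they do not reproduce the cited systems (Gratch & Marsella, 2004; Dias & Paiva, 2005).

```python
# Illustrative sketch of a BDI-style agent extended with emotions.
# Names and appraisal rules are assumptions, not a published architecture.

from dataclasses import dataclass, field


@dataclass
class Emotion:
    """A discrete, appraisal-generated emotion: short-lived and object-focused."""
    kind: str          # e.g., "joy", "fear"
    target: str        # the goal the emotion is about
    intensity: float   # would decay over time in a fuller model


@dataclass
class EmotionalBDIAgent:
    beliefs: set = field(default_factory=set)       # facts the agent holds true
    desires: list = field(default_factory=list)     # goals, in order of preference
    intentions: list = field(default_factory=list)  # goals committed to
    emotions: list = field(default_factory=list)    # the added affective layer

    def appraise(self, event: str) -> None:
        """Appraisal: events are evaluated against desires to produce emotions."""
        self.beliefs.add(event)
        for goal in self.desires:
            if event == f"{goal}_achieved":
                self.emotions.append(Emotion("joy", goal, 0.8))
            elif event == f"{goal}_threatened":
                self.emotions.append(Emotion("fear", goal, 0.9))

    def deliberate(self) -> None:
        """Emotions bias deliberation: threatened goals are prioritised."""
        feared = {e.target for e in self.emotions if e.kind == "fear"}
        ordered = ([g for g in self.desires if g in feared] +
                   [g for g in self.desires if g not in feared])
        self.intentions = ordered[:1]  # commit to the most pressing goal


if __name__ == "__main__":
    agent = EmotionalBDIAgent(desires=["keep_user_engaged", "finish_task"])
    agent.appraise("keep_user_engaged_threatened")
    agent.deliberate()
    print(agent.intentions)  # ['keep_user_engaged']
```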
