User Modeling for Social Robots

Cristina Gena, Marco Botta, Federica Cena, Claudio Mattutino
Computer Science Department, University of Turin, Italy

Abstract—This paper presents our first attempt to integrate user modeling features in social and affective robots. We propose a cloud-based architecture for modeling the user-robot interaction in order to re-use the approach with different kinds of social robots.

Index Terms—social robot, user modeling, affective computing

I. INTRODUCTION

A social robot is an autonomous robot that interacts with people by engaging in social-emotive behaviors, skills, capacities, and rules related to its collaborative role [1]. Thus, socially interactive robots need to perceive and understand the richness and complexity of users' natural social behavior in order to interact with people in a human-like manner [1]. Detecting and recognizing human action and communication provides a good starting point, but more important is the ability to interpret and react to human behavior, and a key mechanism to carry out these actions is user modeling. User models are used for a variety of purposes by robots [2]. First, user models can help the robot understand an individual's behavior and dialogue. Secondly, user models are useful for adapting the behavior of the robot to the different abilities, experiences and knowledge of users. Finally, they determine the form of control and feedback given to the user (for example, stimulating the interaction).

This paper presents our first attempt to integrate user modeling features in social and affective robots. We propose a cloud-based architecture for modeling the user-robot interaction in order to re-use the approach with different kinds of social robots.

II. THE IROBOT APPLICATION

The aim of our work is to develop a general-purpose cloud-based application, called iRobot, offering cloud components for managing social, affective and adaptive services for social robots (e.g., Sanbot Elf, SoftBank Pepper, etc.). This first version of the application has been developed and tested for the Sanbot Elf robot (http://en.sanbot.com/product/sanbot-elf/), henceforth simply Sanbot, acting as client; this client-side application has therefore been called iSanbot. Thanks to the cloud-based service, the robot is able to handle a basic conversation with users, to recognize them and follow a set of social relationship rules, and to detect the users' emotions and modify its behavior according to them. This last task is focused on real-time emotion detection and on how the detected emotions can change the human-robot interaction. As a case study, we started from a scenario wherein the Sanbot robot welcomes people who enter the hall of the Computer Science Dept. of the University of Turin. The robot must be able to recognize the people it has already interacted with and remember past interactions.

The architecture used for developing the iRobot application is a client-server one. At a high level, there are two main components: a client, written in Java for Android, and a server, written in Java and located on a dedicated machine. The client's goal is to interact with the user and to acquire data aimed at customizing the user-robot interaction, while the server represents the robot's "brain" in the cloud. More in detail, the architecture has been implemented as follows.

The client side, represented in the current scenario by Sanbot hosting the iSanbot application, is compatible with any other robot equipped with a network connection and capable of acquiring audio and video streams. The server side, written in Java as well, acts as a glue between the various software components, each of which provides a different service. In particular, the main services are: emotion recognition, face recognition, user modeling and, finally, speech-to-text. The latter has been created using the Wit service (https://wit.ai/), a free speech-to-text service; communication takes place through a simple HTTP request, sending the audio to be converted into text and waiting for the response.
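As a concrete illustration of this speech-to-text step, the sketch below (written in Python for brevity, whereas the actual server is in Java) posts a WAV recording to Wit's public speech endpoint and reads back the transcription. The access token is a placeholder, and the name of the transcription field has varied across Wit API versions, so this is a minimal sketch under those assumptions rather than the project's actual code.

```python
# wit_stt.py - minimal sketch of the speech-to-text request, assuming a Wit.ai
# access token (placeholder below) and a WAV file recorded by the robot.
# Endpoint and headers follow Wit's public HTTP speech API.
import requests

WIT_TOKEN = "YOUR_WIT_ACCESS_TOKEN"           # placeholder, not a real token
WIT_SPEECH_URL = "https://api.wit.ai/speech"  # Wit speech endpoint


def transcribe(wav_path: str) -> str:
    """Post raw audio to Wit and return the recognized text (empty string on failure)."""
    with open(wav_path, "rb") as audio:
        response = requests.post(
            WIT_SPEECH_URL,
            headers={
                "Authorization": f"Bearer {WIT_TOKEN}",
                "Content-Type": "audio/wav",
            },
            data=audio,      # the request body is the raw audio stream
            timeout=30,
        )
    response.raise_for_status()
    try:
        payload = response.json()
    except ValueError:
        # Newer API versions may stream several JSON chunks; handling that is omitted here.
        return ""
    # Older responses used "_text", newer ones use "text"; check both.
    return payload.get("text") or payload.get("_text") or ""


if __name__ == "__main__":
    print(transcribe("answer.wav"))
```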
The face recognition service, in turn, has been developed in Python and relies on the "FaceRecognition.py" library, while the emotion recognition service has been developed in C# and builds on the Affectiva libraries (https://www.affectiva.com/). The application starts with a natural language conversation whose goal is getting to know the user (e.g., face, name, sex, age, interests, predominant mood, etc.); in the following interactions, all the acquired and elaborated information is recovered from the user modeling component, so that the robot is able to adapt its behavior.

A. Face recognition service

First, we implemented the face recognition service. As soon as the user approaches, Sanbot welcomes her, takes a picture of her and sends it to the Java server, which delegates the user's recognition to the Python component. If the user is not recognized, a new user is inserted into the user database. The user is associated with a set of attributes, such as name, surname, age, sex, profession, interests, moods, etc., to be filled in subsequently. Some of these attributes may be associated with an explicit question, while others may be implicitly inferred. Thus, during the user registration phase, the client asks the server what the next question is and then reads it to the user. Once the user has answered, the audio is sent to the Wit server, converted into text and, once returned, saved into the attribute associated with the question. For example, if Sanbot does not recognize the user, it asks the user her name; then, before taking her picture, it asks for permission ("Do you want me to remember you the next time I see you?"); then it asks her profession and her favorite color and sport, and stores all this information in the user database.
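The fragment below sketches this recognize-or-register flow using the face_recognition package, which we assume is the "FaceRecognition.py" library named above; the in-memory dictionary, the profile fields and the function name are illustrative stand-ins for the actual user database and server code.

```python
# face_service.py - illustrative sketch of the recognize-or-register flow described
# above, based on the face_recognition package. The in-memory "database", the profile
# fields and the function name are stand-ins chosen only to make the flow concrete.
import face_recognition

# user_id -> {"encoding": 128-d face encoding, "profile": {feature: value}}
user_db = {}


def recognize_or_register(picture_path: str):
    """Return (user_id, profile, is_new) for the person in the picture."""
    image = face_recognition.load_image_file(picture_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return None, None, False          # no face found in the frame
    encoding = encodings[0]

    known_ids = list(user_db)
    known_encodings = [user_db[uid]["encoding"] for uid in known_ids]
    matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
    if True in matches:
        uid = known_ids[matches.index(True)]
        return uid, user_db[uid]["profile"], False   # known user: reuse stored profile

    # Unknown user: create a new feature-value profile whose attributes will be
    # filled by the registration questions (name, profession, favorite color, ...).
    uid = f"user_{len(user_db) + 1}"
    profile = {"name": None, "profession": None, "favorite_color": None,
               "favorite_sport": None, "age_range": None, "gender": None,
               "interactions": 0, "predominant_emotions": []}
    user_db[uid] = {"encoding": encoding, "profile": profile}
    return uid, profile, True
```

In the deployed system the profile would be persisted in the user database rather than in memory, with the same feature-value pairs driving both the greeting and the adaptation rules described below.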
The detected predominant emotions will also be inferred and stored, together with other biometric data, as described in the following.

If, on the other hand, the face recognition component recognizes the user, all her stored information is recovered and the user is welcomed (e.g., "Hello Cristina! Do you feel better today?"), together with the next question to read. When there are no more questions, the conversation ends.

As far as the robot's adaptive behavior is concerned, it is currently based on simple rules, which are used for demonstration purposes and which will be modified and extended in our future work. For example, in the current demo the robot carries out a more informal conversation with young people and a more formal one with older people, and it is more playful with young people and more serious with older people.

For accessibility reasons and for better understanding, the application replicates the whole conversation transcript on the Sanbot tablet screen, both the phrases spoken by Sanbot and those acquired from the user.

B. Emotion recognition service

During the conversation with the user, the emotion recognition component is running. This component has been realized through a continuous stream of frames directed towards the C# component, which analyzes each frame and extracts the predominant emotions. After that, the server sends a response via a JSON object to the client, which can therefore adapt the robot's behavior according to the detected emotions.

The Affectiva SDK and API also provide estimates of gender, age range, ethnicity and a number of other metrics related to the analyzed user. In particular, we use the information on gender and age range to implicitly acquire these data about the user and store them in the database. Since this information is also returned with a perceived intensity level, we use the certainty factor described in [4].

C. User modeling

Currently, we have created a very simple user model, organized as a set of feature-value pairs and stored in a database. The currently stored features are: user name, age range, gender, favorite sport, favorite color, profession, number of interactions, and predominant emotions. Some of the acquired user information is explicitly asked of the user (name, profession, sport, color), while other information (age, sex, emotions) is inferred through the emotion recognition component described above. Please notice that explicit information such as favorite color and sport is used for mere demonstration purposes.

In the future we would like to extend the user features with other inferred features, such as personality traits (e.g., Extraversion, Agreeableness, Neuroticism, etc.), the user's dominant emotion, and a stereotypical user classification depending on socio-demographic information such as sex, age and profession. Concerning this last point, we will base our stereotypes on established social studies that categorize users' interests and preferences on the basis of socio-demographic information. We would also like to import part of the user profile by scanning the social web and social media [5], with the user's consent.

III. CONCLUSION

Our cloud-based service to model the user and her emotions is just at the beginning. We are now working with the Pepper robot to replicate the same scenario implemented on Sanbot. In the future we would like to replace Affectiva with a similar service we are developing, extend the user model, integrate more sophisticated inference and adaptation rules, and improve the natural language processing component.

REFERENCES

[1] C. L. Breazeal, Designing Sociable Robots. MIT Press, 2004.
[2] T. Fong, I. Nourbakhsh, and K. Dautenhahn, "A survey of socially interactive robots," Robotics and Autonomous Systems, vol. 42, no. 3-4, pp. 143-166, 2003.
