
Human Perceptions of the Severity of Domestic Robot Errors

Alessandra Rossi, Kerstin Dautenhahn, Kheng Lee Koay, and Michael L. Walters

Adaptive Systems Research Group, University of Hertfordshire, College Lane, Hatfield, UK
{a.rossi,k.dautenhahn,k.l.koay,m.l.walters}@herts.ac.uk

Abstract. As robots increasingly take part in daily living activities, humans will have to interact with them in domestic and other human-oriented environments. We can expect that domestic robots will exhibit occasional mechanical, programming or functional errors, as occur with other electrical consumer devices. For example, these errors could include software errors, dropping objects due to gripper malfunctions, picking up the wrong object, or showing faulty navigational skills due to unclear camera images or noisy laser scanner data. It is therefore important for a domestic robot to exhibit acceptable interactive behaviour when exhibiting and recovering from an error situation. As a first step, the current study investigated human users' perceptions of the severity of various categories of potential errors that are likely to be exhibited by a domestic robot. We conducted a questionnaire-based study in which participants rated 20 different scenarios where a domestic robot made an error. The potential errors were rated by participants by severity. Our findings indicate that people's perceptions of the magnitude of the errors presented in the questionnaire were consistent. We did not find any significant differences in users' ratings due to age or gender. We clearly identified scenarios that were rated by participants as having limited consequences ("small" errors) and scenarios that were rated as having severe consequences ("big" errors). Future work will use these two sets of consistently rated robot error scenarios as baseline scenarios for studies with repeated interactions investigating human perceptions of robot tasks and error severity.
Keywords: Human-Robot Interaction · Social robotics · Robot companion

© Springer International Publishing AG 2017
A. Kheddar et al. (Eds.): ICSR 2017, LNAI 10652, pp. 647–656, 2017. https://doi.org/10.1007/978-3-319-70022-9_64

1 Introduction

The increasing presence of humanoid and human-friendly robots in daily living activities requires the development of more natural Human-Robot Interaction (HRI) capabilities for robots. Autonomous robots are beginning to be used for tasks such as assisting older adults [18,20,31] and interacting with children with autism [8,21,32]. They are also used to assist humans in common household activities, including answering questions and providing support in a kitchen [27]. Autonomous robots can also provide cognitive assistance [16,28]. However, in designing robots that are able to interact and cooperate in complex, human-oriented environments, two main factors should be considered: humans need to accept the presence of the robot, and they have to trust that their robotic companion will be able to look after their well-being without compromising their safety. For example, a robot should intervene to prevent a fire starting if a kettle was left boiling. Human users should also be able to trust their robot not to open the door to strangers, who may be potential thieves, when they are not at home or are asleep. If people wake up in the middle of the night for whatever reason and switch on the light, they should be confident that they will not stumble over their robot and get injured. Several previous research studies have shown that socially interactive robots are better accepted by humans [3,4]. Syrdal et al. [29] showed that dog-inspired affective cues communicate a sense of affinity and relationship with humans. Martelaro et al. [17] established that trust, disclosure, and a sense of companionship are related to expressiveness and vulnerability.
They showed how a sense of the robot's vulnerability, conveyed through facial expressions, colour and movements, increased trust, companionship, and disclosure. Other aspects, such as a robot's type, size, proximity, and behaviour, also affect how robots are perceived [1,10]. Lohse et al. [15] demonstrated that robots with more extrovert personalities are perceived more positively by users. Lee et al. [13] showed that trust enhances people's acceptance of robots. Trust is a complex feeling even between humans [12], and it can change during the course of interactions due to several factors [2]. In the case of robots, human-robot trust is likely to be influenced by the reliability of the robot's capabilities. Hancock et al. [9] identify 33 factors influencing trust in HRI, grouped within 3 categories and 6 sub-categories. The main categories are: Human-related, such as self-confidence, prior experience with robots and operator workload; Robot-related, such as proximity, robot's embodiment, transparency, level of autonomy and failure rates; and Environmental, such as communication and team collaboration. They showed that robot characteristics, with a special focus on performance-based factors, have great influence on perceived trust in HRI. Deutsch [7] defines trust as "confidence that [one] will find what is desired [from another] rather than what is feared" [7, p. 148]. Higher trust is associated with the perception of higher reliability [24]. Therefore, humans may perceive erroneous robot behaviours according to their expectations of a robot's proper functions [33]. However, robots can be faulty due to mechanical or functional errors. For example, a robot might be too slow because its batteries are running low. It might fail to detect an obstacle and destroy a human user's favourite object, or the arm of the robot might cause a breakage during a delicate task.
Each of these examples is a robot error, though their magnitude might be perceived differently according to the resulting consequences. But which types of errors have more impact on human perceptions of robots? Factors may include severity and duration, the impact of isolated 'big errors', or an accumulation of 'small errors'. For example, Muir and Moray [31] argue that human perceptions of a machine are affected in a more severe and long-term way by an accumulation of 'small' errors rather than by one single 'big' error. The embodiment of a robot may also have a major impact on how humans perceive it [1]. But what is perceived as a 'big error' and what is a 'small error'? People have individual differences, including age, gender, and cultural and social habits, which may affect their perceptions of what are considered big or small errors. In order to study the differences in the impact of errors in human-robot interaction, we first have to establish what people subjectively consider to be 'small' or 'big' errors exhibited by a home companion robot. For this purpose, a questionnaire survey was conducted in order to classify the magnitude of various robot errors as perceived by humans. Specifically, the current study addressed human users' perceptions of different types of errors.

2 Evaluation of the Magnitude of Robot's Errors

The purpose of this study was to investigate human perceptions of some potential errors that might be made by a robot. In this context, our first study was directed towards the classification of the robot's errors according to their perceived magnitude.

2.1 Method

The study was organised as a within-subject experiment. Participants recorded their responses to the different robot error scenarios as ratings on a 7-point Semantic Differential Scale [1 = small error and 7 = big error].
The questions included life-threatening situations, such as "Your robot brings you a dishwashing tablet instead of paracetamol.", and more common errors, such as "You are watching your favourite show on TV and your robot changes the channel.".

2.2 Procedure

Participants were tested individually, and the experimenter first provided them with a brief description of the experimental procedure. Participants were told to imagine that they live with a robot as a companion in their home, but that the robot might make some mistakes. The embodiment of a robot plays an important role in how people perceive robots [1,14,17,25]. However, since we are not interested in the perception of any specific robot, but only in the general perception of errors in specific tasks, we did not provide a description of the robot to the participants. The participants completed a questionnaire with 20 questions (one per error scenario) in which they rated the magnitude of the consequences of the error illustrated in the different task scenarios. For example, "You ask for a cup of coffee. Your robot brings you an orange." or "Your robot leaves your pet hamster outside the house in very cold weather.". The questionnaire also included two optional open-ended questions in which participants were free to add their own examples of possible small and big errors not already included in the proposed scenarios. The duration of the survey for each participant was about 5 min. In the future, robots will be able to carry out a large number of tasks in domestic environments, so potentially we could have considered hundreds of different scenarios. Therefore, for practical reasons, we used a smaller set of possible scenarios. We designed the scenarios used in the study to cover a wide range of generic types of errors, based on previous HRI research with home companion robots. For example, Syrdal et al.
[30] used a scenario in which a robot collected information from its user and then disclosed the data to a third party. Salem et al. [26] used a robotic companion that offered to play music or to set up the table.
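The classification described in Sect. 2 — aggregating participants' 7-point ratings per scenario and separating consistently rated "small" errors from "big" ones — can be sketched in a few lines of Python. The scenario names, ratings, and thresholds below are invented for illustration only; they are not the study's actual data or analysis procedure.

```python
# Hypothetical sketch: splitting robot-error scenarios into "small" and "big"
# error sets from 7-point severity ratings (1 = small error, 7 = big error).
# All scenario names and ratings below are made up for illustration.
from statistics import mean

# Each scenario maps to a list of participant ratings.
ratings = {
    "brings an orange instead of coffee": [2, 1, 3, 2, 2],
    "changes the TV channel": [2, 3, 2, 1, 2],
    "brings a dishwashing tablet instead of paracetamol": [7, 6, 7, 7, 6],
    "leaves the hamster outside in cold weather": [6, 7, 6, 7, 7],
}

def classify(ratings, low=3.0, high=5.0):
    """Label each scenario by its mean rating: 'small' at or below `low`,
    'big' at or above `high`, otherwise 'ambiguous'. The thresholds here
    are arbitrary placeholders, not the paper's criteria."""
    labels = {}
    for scenario, scores in ratings.items():
        m = mean(scores)
        if m <= low:
            labels[scenario] = "small"
        elif m >= high:
            labels[scenario] = "big"
        else:
            labels[scenario] = "ambiguous"
    return labels

labels = classify(ratings)
for scenario, label in labels.items():
    print(f"{label:>9}: {scenario}")
```

In practice a study like this would also check rating consistency across participants (e.g. the spread of scores per scenario) before treating a scenario as a baseline "small" or "big" error.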