
Increasing User Confidence in Privacy-Sensitive Robots

by Raniah Abdullah Bamagain

Bachelor of Science, Information Technology, Computing and Information Technology, 2012

A thesis submitted to the College of Computer Engineering and Sciences at Florida Institute of Technology in partial fulfillment of the requirements for the degree of

Master of Science in Information Assurance and Cybersecurity

Melbourne, Florida
May, 2019

© Copyright 2019 Raniah Abdullah Bamagain
All Rights Reserved

The author grants permission to make single copies.

We, the undersigned committee, hereby approve the attached thesis

Increasing User Confidence in Privacy-Sensitive Robots

by

Raniah Abdullah Bamagain

Marius Silaghi, Ph.D. Associate Professor Department of Computer Engineering and Sciences Committee Chair

Hector Gutierrez, Ph.D. Professor Department of Mechanical and Civil Engineering Outside Committee Member

Lucas Stephane, Ph.D. Assistant Professor Department of Computer Engineering and Sciences Committee Member

Philip Bernhard, Ph.D. Associate Professor and Head Department of Computer Engineering and Sciences

ABSTRACT

Title: Increasing User Confidence in Privacy-Sensitive Robots

Author: Raniah Abdullah Bamagain

Major Advisor: Marius Silaghi, Ph.D.

As robots grow rapidly in deployment and availability, spreading everywhere and reaching places where they can communicate with humans while constantly sensing, watching, hearing, processing, and recording the environment around them, numerous new benefits and services can be provided, but at the same time, various types of privacy issues appear. Indeed, the use of robots that process data remotely causes privacy concerns. Several main factors can increase the capability of violating users' privacy, such as the robots' appearance, perception, or navigation capability, as well as the lack of authentication, the lack of a warning system, and the characteristics of the application. Here we analyze these factors and propose solutions that assist in mitigating the problem of privacy violation while using social robots. These solutions address the limitations of current robots and help in producing privacy-sensitive robots. The result consists of usable, trusted, and comfortable techniques to bring security to the context of social robot utilization, to protect users' privacy in the presence of social robots, to increase users' awareness of the associated privacy risks, and to find trade-offs between privacy loss and utility achieved. The aim is to increase user confidence in the privacy guarantees made available by the robots. The results are verified with surveys and an experiment.

Table of Contents

Abstract

List of Figures

List of Tables

Acknowledgments

1 Introduction
1.1 Overview
1.2 Motivation
1.3 Research Problem
1.3.1 Research Questions
1.4 Proposed Solutions
1.4.1 Research Hypotheses
1.5 Thesis Structure

2 Background
2.1 Overview
2.2 Robotics in History
2.3 Robotics Classifications
2.4 Robotic Sensors
2.5 Robot Communications
2.6 Summary

3 Literature Review
3.1 Overview
3.2 Threats on Robotics
3.2.1 Meaning of Privacy
3.3 Privacy Issues in Robotics
3.3.1 Privacy Concerns
3.3.1.1 Overview
3.3.1.2 Related Works
3.3.1.3 Limitations
3.3.2 Shape of Robots
3.3.2.1 Overview
3.3.2.2 Related Works
3.3.2.3 Limitations
3.3.3 Robots' Perception (Camera and Microphone)
3.3.3.1 Overview
3.3.3.2 Related Works
3.3.3.3 Limitations
3.3.3.4 Related Works
3.3.3.5 Limitations
3.3.4 Robots' Navigation (Movement)
3.3.4.1 Overview
3.3.4.2 Related Works
3.3.4.3 Limitations
3.3.5 Authentications on Robots
3.3.5.1 Overview
3.3.5.2 Related Works
3.3.5.3 Limitations
3.3.6 Robot Warning System
3.3.6.1 Overview
3.3.6.2 Related Works
3.3.6.3 Limitations
3.3.7 Robots' Application's Characteristics
3.3.7.1 Overview
3.3.7.2 Related Works
3.3.7.3 Limitations
3.4 Summary

4 Proposed Privacy-Sensitive Robots
4.1 Overview
4.2 Shape of Robots
4.3 Constraining Robots' Perception (Camera and Microphone)
4.3.1 Constraining Robots' Camera
4.3.2 Constraining the Robots' Microphone
4.4 Constraining Robots' Navigation (Movement)
4.5 Users' Authentications on Robots
4.6 Robot Warning System
4.7 Robots' Application's Characteristics
4.8 Summary

5 Methodology
5.1 Overview
5.2 The Goals of the Study
5.3 The Methods of the Study
5.3.1 Surveys
5.3.2 Experiment
5.4 The Participants of the Study
5.5 Summary

6 Study Analysis and Results
6.1 Overview
6.2 Surveys Results
6.2.1 The "Cameras' Covers" Survey
6.2.2 The "Filters' Effects" Survey
6.2.3 The Main Survey
6.2.3.1 Demographics Information
6.2.3.2 General Background and Concerns
6.2.3.3 The Possible Techniques that Could Assist in Mitigating Privacy Violation
6.2.3.4 Additional Features of the Robots for Privacy Protection
6.2.3.5 Awareness/Warning System
6.3 Experiment Results
6.4 Summary

7 Discussions and Future Work
7.1 Limitations
7.2 Future Work

8 Conclusion

References

A IRB Approval Letter

B Surveys
B.1 The Informed Consent
B.2 The "Cameras' Covers" Survey
B.3 The "Filters' Effects" Survey
B.4 The Main Survey (Social Robots)

List of Figures

4.1 The results of people's opinions on the outer shape of robots [73]
4.2 iCub, Flobi, Dreamer, Simon
4.3 Robots eyeglasses
4.4 Robot equipped with screen
4.5 1) abstraction, 2) blurring, 3) pixelating, and 4) redacting
4.6 Abstraction, blur, and redact filters used by a previous study [34]
4.7 Replacing filter
4.8 Replacing filter used by a previous study [34]
4.9 Replacing filter applied on money safe box
4.10 Morphing and image melding [62]
4.11 The left image is the original image and the second image is our proposed adaptive filter "delete and replace"
4.12 The left image is the original image and the second image is our proposed adaptive filter "delete and replace"
4.13 The images on the left and on the right, from the top to the bottom, show the original image, abstract, blur, redact, replace, and our proposed filter
4.14 The process of the scenario
4.15 Examples of the proposed warning system

4.16 Example of the existing applications of robots, and Pepper

6.1 The number of participants who use or do not use covers to protect their privacy
6.2 The picture that shows there is no manipulation and makes sense: picture 1 shows abstraction, picture 2 shows replace, picture 3 shows redact, picture 4 shows blur, and picture 5 shows "delete and replace"
6.3 The picture that shows there is no manipulation and makes sense: picture 1 shows abstraction, picture 2 shows replace, picture 3 shows redact, picture 4 shows blur, and picture 5 shows "delete and replace"
6.4 The preferred filters for protecting privacy
6.5 Smart devices that are owned by participants
6.6 Number of participants who have or do not have a social robot
6.7 The places that participants have the social robots
6.8 Participants' desire to own a social robot
6.9 The places that participants want to have the social robots
6.10 The tasks that the participants want to use the social robot for
6.11 The users' concerns about the social robots
6.12 Level of significance of privacy for users at their home and workplace
6.13 Participants' rating regarding the objects that they are concerned about regarding privacy
6.14 Participants' rating regarding the information that they are concerned about regarding privacy
6.15 Participants' rating regarding the locations that they are concerned about regarding privacy

6.16 Participants' rating regarding the situations that they are concerned about regarding privacy
6.17 Participants' preferences regarding the ways that could be used with the robot's camera while performing a task to provide privacy
6.18 Participants' preferences regarding the ways that could be used with the robot's camera while performing a task to protect private situations
6.19 Participants' preferences regarding the ways that could be used with the robot's camera to disable the camera after finishing tasks
6.20 Participants' preferences regarding the ways that could be used with the robot's microphone while performing a task to protect privacy
6.21 Participants' preferences regarding the ways that could be used with the robot's microphone after finishing tasks to protect their privacy
6.22 Participants' preferences regarding the ways that could be used to limit the robot's movement while performing a task to protect private areas
6.23 Users' desire to allow sharing the use of their social robots
6.24 Users' desire to add an authentication method on the social robots that are used by different members
6.25 The preferred authentication methods
6.26 Users' desire to adjust the robots' configurations
6.27 Users' desire to have an application to manage their social robots
6.28 Places that users preferred to install the application that is used to manage their robots

6.29 Participants' opinions regarding the policies for controlling the social robots' permissions and limitations and whether they could assist in mitigating the problem of privacy violation
6.30 Participants' opinions regarding whether or not the warning system could help in mitigating a privacy violation
6.31 Users' preferences for receiving an alert from the social robots when they are moving toward them
6.32 Preferred method to send an alert when their robots are moving toward them
6.33 Preferred warning methods to alert users who do not notice the robot's existence
6.34 Users' desire to be warned by using colors of light
6.35 Users' preferred ways to be informed about the robots' ability to hear them from far away
6.36 The preferred methods of users to be informed about the robots' capabilities

A.1 IRB Approval Letter

B.1 Picture 1
B.2 Picture 2
B.3 Picture 3
B.4 Picture 4
B.5 Picture 5
B.6 Picture 1
B.7 Picture 2

B.8 Picture 3
B.9 Picture 4
B.10 Picture 5
B.11 NAO Robot
B.12 Importance
B.13 Objects
B.14 Information
B.15 Locations
B.16 Situations
B.17 From left to right, showing the following filter types: Blurring, Pixelation, Redacting, and Replacing

List of Tables

6.1 Demographic information
6.2 Descriptive statistics for objects
6.3 Descriptive statistics for information
6.4 Descriptive statistics for locations
6.5 Descriptive statistics for situations
6.6 Descriptive statistics for the question's choices
6.7 Descriptive statistics for the question's choices
6.8 Descriptive statistics for the question's choices
6.9 Descriptive statistics for the question's choices
6.10 Descriptive statistics for the question's choices
6.11 Descriptive statistics for the question's choices

Acknowledgements

I would like to express my gratitude to everyone who has assisted me in completing this research. First and foremost, I would like to thank my Allah for giving me everything that I would need to succeed in completing this research. Without His blessings and satisfaction, I would not be able to reach this achievement. I am sincerely grateful to my advisor, Dr. Marius Silaghi, for his invaluable assistance and continuous guidance and support during this research. He always welcomed me when I had any problems or questions about my research or writing. Besides my advisor, I am incredibly grateful to Dr. Philip Bernhard and to my committee members, Dr. Hector Gutierrez and Dr. Lucas Stephane, for their insightful feedback and precious time. I am thankful to all my professors for their support during my period of study. Sincere gratitude goes to Dr. William Allen, whose office door was always open to me. I am sincerely grateful for his help and guidance in this research. I also want to thank Dr. Heather Crawford for her assistance and guidance in reviewing the questionnaire for this study. I am thankful to all the respondents for their participation and time, and also to my friends and colleagues for their encouragement. Also, I am grateful to my sponsor, the government of my home country, for their financial support.

All my successes are dedicated to my biggest and most constant source of strength and happiness, my parents, who always support me, assist me, encourage me, and believe in me in an invaluable way. My deepest gratitude goes to my lovely parents for their sincere love, countless sacrifices, and patience. Their positive words were always a motivation for me. Sincere thanks go to my mother for staying with me and for taking care of my son and me; if she were not by my side, I could not have achieved success in this effort. I am sincerely grateful to all my brothers for their invaluable support, help, and encouragement. They were also essential to my success. Sincere gratitude goes to my brother Rayan for his sacrifice and for staying with me while I earned my master's degree. Last but not least, sincere and great gratitude goes to my husband and life partner, Abdulrahman, for his constant and invaluable support, help, encouragement, patience, and sacrifice. His sincerity and love have deeply inspired me. My success is also due to my husband and his belief in my ability to succeed. Great thanks go to my beloved son, who is a reason for my happiness and inspiration, and to my new baby, whom I have not yet seen, for making my life more beautiful and motivating me to complete this research. I am truly thankful for having my family in my life. For my family, whom I love so much, no matter how intensively I express my thanks to them, it would never be enough to show all my deepest and sincerest gratitude. Thank you, my Allah, for everything.

Chapter 1

Introduction

1.1 Overview

We move from the fiction world to the real world, and in the real world, we feel as if we are in the fiction world again. This is what is happening with technology: it was once seen as a fantasy, and then it became a reality. Creators have been inspired by science fiction to merge fictional technologies into the real world, and with great innovations and massive advances in technology, we feel as if we are living a fictional life. According to Pranav Mistry, "Whatever science fiction movies we watch now, we can make the technology real in two days. What we can do is not important. What we should do is more important" [58]. This means that developing a new technology is not impossible even if the technology first appears in the fiction world. Recently, various types of advanced and intelligent technologies have been rapidly developed to become popular and common.

In fact, "Technology has advanced more in the last thirty years than in the previous two thousand years. The exponential increase in advancement will only continue," Niels Bohr said [7]. Today, many people use these different technologies in their daily lives. Robots are one of the fictional technologies that have emerged in the real world to become one of the cutting-edge technologies that will play a significant role in our society. In many different areas, robots provide various opportunities that further increase their adoption [90]. There are various types and classifications of robots used in many different sectors and areas, such as industries, hospitals, homes, and companies. Robots are becoming increasingly popular everywhere. Based on statistics, both industrial and non-industrial robotics are projected to keep expanding between 2017 and 2025 [81, 93]. According to United Nations estimates, in 2000 the global number of industrial robots in use was 742,500 [64]. Indeed, statistics showed that between 2004 and 2017 the global sales of industrial robots increased [95], and between 2015 and 2025 the global sales of both social and entertainment robots could continue to grow to reach 4.22 million robots [94]. In addition, the use of service robots in different areas, such as professional, personal, and domestic, is still growing [66]. In fact, between 2016 and 2019, the number of domestic household robots in use worldwide could grow to 31 million [65]. These industrial, social, entertainment, service, and domestic household robots are only some branches of robotics, so how large would the growth be if these types were counted together with all the others? These statistics illustrate that the use of this incredible technology, in all its types, has increased year by year in all sectors and areas of the world. According to Jan C. Ting, the number of workers will decline in the future because of the progress of both technology and robotics [99]. Thus, in the near future, many robots will be created and developed to replace humans in some jobs and places.

In addition, the founder of Microsoft, Bill Gates, stated that "I can envision a future in which robotic devices will become a nearly ubiquitous part of our day-to-day lives" [29]. This means that the future of using robots everywhere is in progress and promising. Thus, finding robots in every workstation, home, and everywhere else could become possible and popular.

1.2 Motivation

The main difference between robots and other types of technology lies in their ability to move independently. This shapes the type of privacy concerns and challenges that stem from their adoption in households and industrial environments. As the creation and development of robots grow rapidly, as the presence of robots becomes widespread, and as these robots reach places where they can communicate and interact with humans, massive benefits and services could be provided. At the same time, various types of privacy and security issues could appear. Because the presence of robots that interact and communicate with humans is growing in popularity, robots are becoming able to constantly sense, watch, hear, process, and record everything in the environment around them (video and audio), such as all of the humans' activities, actions, and data. This means that huge amounts of information could be transferred continually in and out of a robot's system [96]. Thus, these abilities of social robots that interact with humans anywhere, and the amounts of data processed by the robots, could lead to violating users' privacy and affecting humans' behaviors.

Indeed, the abilities of these social robots are not limited to those capabilities: they are also able to move, to enter many different places that people cannot enter or that could contain a variety of private data, and to do even more [56]. In fact, the development of robots has shaped social robots to become similar to living organisms. This tendency to anthropomorphize social robots could lead people to trust and contact these robots in a very risky way [107]. Incidents involving robots have attracted high attention in the media and have gained researchers' attention as well. In addition, due to the privacy issues that have appeared as a result of using social robots, many researchers have become interested in this topic, which could occupy a large place in the research area. Additionally, many users have become concerned about their privacy when using these robots in their daily lives or when seeing those robots in the environment around them [26]. It is therefore important to keep working in this research area in order to provide solutions that assist in producing privacy-sensitive robots that people are comfortable accepting and using. In order to achieve beneficial progress in the privacy-sensitive robotics research area, researchers need to study the current progress of this incredible technology, discover the gaps in the research papers, and solve the research limitations related to privacy-sensitive robotics. Therefore, to understand privacy-sensitive robotics, many more research papers that cover this area are needed.

1.3 Research Problem

Despite the significant development in robotics research, the many researchers interested in robots of all kinds, and the many risks that may result from the use of these robots in our environments, few research papers have studied and addressed privacy issues in robots.

Thus, a guidance document for privacy in designing social robots, one that could assist in creating, developing, and designing privacy-sensitive social robots or in enhancing present social robots with regard to privacy, is absent [17]. In other words, there are no privacy standards or techniques that could be applied across different social robots when designing privacy-sensitive social robots [29]. In addition, even though many research papers, such as [82, 4, 102, 30], have studied robot appearances that users are comfortable with, there is a lack of research examining the social robots' appearance with regard to privacy. Moreover, there are limitations and issues in the techniques used to constrain the capabilities of robots, such as perception, movement, touching, and recording, which are the main sources of privacy violation. Additionally, there is an absence of techniques and standardized warning systems that could increase users' awareness of the privacy risks arising from the presence of robots around them and that could help them understand the robots' actions, work, and capabilities. Furthermore, there are limitations in techniques for disclosing new privacy concerns that arise while using robots and then providing the best solutions for trading off between a robot's utility and the privacy concerns related to it. Besides, there is a lack of viable adaptive authentication methods that are trusted, require little effort from users, and could be used by a robot to authenticate different users in order to protect their privacy when many different users share one robot.

Finally, even with the existence of applications that are used to control robots, the limitations of these applications, which could be used to help users manage their robots according to their privacy preferences, are apparent. In fact, according to Sharan Burrow, "Technology can be used to make people's lives easier, to reduce inequality, to facilitate inclusion, or to solve intractable global problems, but without dialogue and governance, it can be used against humanity; the choice of how we use technology is ours" [10]. Indeed, the lack of solutions that could assist in producing privacy-sensitive robots could lead to violating users' privacy. Thus, there is a need for solutions that could help in solving and improving the aforementioned issues and limitations in a more appropriate way.

1.3.1 Research Questions

The following research questions guide this research: What are users' main concerns when using robots? What are the factors that could affect users' privacy? What are the most usable, trusted, and comfortable techniques that could be used to bring security to the context of social robot utilization, to protect users' privacy in the presence of social robots, to increase users' awareness of the associated privacy risks, and to find trade-offs between privacy loss and utility achieved? How reliable are our proposed techniques in increasing users' confidence in the privacy guarantees available in the context of robotics? Do some of those methods assist in solving the limitations of the solutions found in previous research papers that studied privacy-sensitive robots?

1.4 Proposed Solutions

To overcome all of the problems mentioned above, appropriate solutions must be proposed that assist in producing privacy-sensitive robotics and that allow users to use robots in a usable, trusted, and comfortable way. The following proposed solutions summarize what needs to be considered when designing privacy-sensitive social robots. First, many privacy issues appeared because several social robots were designed without privacy protection principles in mind, for example by equipping the robots with unnecessary sensors, placing the sensors in areas where users would not expect them, or omitting additional hardware parts or awareness systems that could assist in mitigating privacy violations [53]. This privacy-insensitive design could increase the risk of privacy violation; thus, we propose a design for the outer appearance of social robots that could increase users' awareness, help them avoid the risk of privacy violation, and mitigate the privacy violations that could occur because of the use of the robots. Second, because of the ability of robots to process data remotely and record what they see or hear via audio and video, we propose an optimized type of filter. The filter would be used on the robot's camera to constrain the perception (video) and solve the problem of preexisting filters that draw extra attention to private objects or places when the filtered objects, areas, or data are noticed. In addition, we propose to use a combination of filters that could be adaptive to a certain object, area, data, situation, time, or user. Moreover, we propose a hardware cover that could be added to the robot's camera and help in protecting privacy.
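As a concrete illustration, the sketch below shows how such camera filters might look in code. It is a minimal sketch only, assuming OpenCV and NumPy are available; the bounding box stands in for the output of an object detector, and the inpainting call is just one plausible way to approximate the proposed "delete and replace" behavior, not the implementation evaluated in this thesis.

```python
# Hedged sketch of camera-frame privacy filters (assumes OpenCV + NumPy).
import cv2
import numpy as np

def blur_region(frame, box, ksize=51):
    # Classic blurring filter: hides detail but visibly marks the region,
    # which is the attention-drawing problem described above.
    x1, y1, x2, y2 = box
    frame[y1:y2, x1:x2] = cv2.GaussianBlur(frame[y1:y2, x1:x2], (ksize, ksize), 0)
    return frame

def redact_region(frame, box):
    # Redacting filter: paints the private region with a solid block.
    x1, y1, x2, y2 = box
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 0), thickness=-1)
    return frame

def delete_and_replace_region(frame, box):
    # Rough approximation of the proposed "delete and replace" filter:
    # remove the private region and fill it from surrounding background
    # so nothing visibly "filtered" remains to attract attention.
    x1, y1, x2, y2 = box
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    mask[y1:y2, x1:x2] = 255
    return cv2.inpaint(frame, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```

In practice the box would come from a per-user list of private objects, so the same pipeline could switch filters adaptively by object, area, situation, time, or user.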

Furthermore, regarding the robot's microphone, because there are no techniques in use to protect the private data that could be heard by the social robot, we suggest using an encryption mechanism to constrain the perception (audio). In addition, we suggest a hardware cover that could be added to the robot's microphone to help in protecting privacy. Moreover, to constrain the movement capability of the robots, we propose connecting the robot's sensors with other environmental sensors, such as movement and infrared sensors, besides using the existing movement constraints. This proposed method could assist in constraining the movement in an adaptive way according to a specific situation. Additionally, robots have abilities that exceed users' expectations; thus, it is important to propose warning techniques and a system that warns users about the robot's abilities and increases their awareness of the privacy risks that could arise while using the robots. In addition, because robots could be shared among different users, meaning many users can use the same robot, we propose an authentication method that is trusted and requires little effort from the users. Thus, we can build an adaptive user profiling system that contains a profile for each user. This authentication mechanism, such as face or voice recognition, could be used to adjust the privacy settings on a robot for a particular user when the robot recognizes that user's voice or face. Moreover, we suggest that those social robots come with an application that is used not only to control the robot but also to allow users to manually adjust their privacy settings on the robot system and to activate or deactivate privacy techniques, including filters scoped by time, user, or purpose; to see what the robot sees; to hear what the robot hears; to locate the robot; to receive alerts; and to control the robot's permissions.
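To make the audio-encryption proposal concrete, here is a minimal sketch of encrypting captured microphone data before it is stored or sent off-robot. It assumes the Python cryptography package; the key-provisioning step and the fake audio buffer are hypothetical placeholders, not part of the thesis.

```python
# Hedged sketch: encrypt microphone audio before remote processing/storage.
# Assumes the 'cryptography' package; key handling here is illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, provisioned per owner, not per run
cipher = Fernet(key)

def protect_audio(raw_pcm: bytes) -> bytes:
    # The robot would call this on each captured buffer, so remote services
    # and logs never see the user's speech in the clear.
    return cipher.encrypt(raw_pcm)

def recover_audio(token: bytes) -> bytes:
    # Only a key holder (e.g., the owner's companion app) can listen back.
    return cipher.decrypt(token)

# Example: a fake one-second silent buffer stands in for real microphone input.
sample = bytes(16000)
assert recover_audio(protect_audio(sample)) == sample
```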

Finally, after understanding users' privacy concerns when using robots and after proposing all of these constraint techniques to protect privacy, we have to be able to trade off between utility and privacy by using different scenarios that explain the work of each proposed technique. By studying and explaining all of these proposed solutions and finding the techniques that users prefer, we could produce a guidance document for robot privacy assessments. It could be used by the creators, designers, and developers of social robots to take these user-preferred techniques into consideration while producing privacy-sensitive social robots.
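The per-user profiling idea proposed above can be sketched as a small data structure that the robot loads once a user is recognized. Everything below is an assumption for illustration: the field names, default values, and recognizer callback are hypothetical, not taken from the thesis.

```python
# Hedged sketch of per-user privacy profiles selected after face/voice
# authentication. All names and defaults are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PrivacyProfile:
    camera_filter: str = "delete_and_replace"  # filter used while working
    cover_camera_when_idle: bool = True        # hardware cover after tasks
    encrypt_audio: bool = True                 # microphone encryption on
    no_go_zones: list = field(default_factory=lambda: ["bedroom", "bathroom"])
    approach_warning: str = "verbal"           # how the robot announces itself

PROFILES = {
    "alice": PrivacyProfile(),
    "bob": PrivacyProfile(camera_filter="blur", approach_warning="light"),
}

def on_user_recognized(user_id: str) -> PrivacyProfile:
    # Called by a (hypothetical) face/voice recognizer. Unknown users fall
    # back to the most restrictive defaults rather than to open access.
    return PROFILES.get(user_id, PrivacyProfile())
```

A companion application, as proposed above, would simply read and write these profiles instead of hard-coding them.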

1.4.1 Research Hypotheses

Generally, we hypothesize that our proposed solutions would be preferred by users and make them more comfortable.

• Violating privacy by recording private information or leaking sensitive information would be users' greatest concern when using social robots, outweighing utility achievement.

• The users are more concerned about privacy related to financial objects or information, bedroom and bathroom areas, and the situations where the users are naked.

• The preferred techniques for protecting users' privacy regarding the robots' cameras:

– During working time, users prefer their robots to use both an adaptive filter and an automatic cover.

– After finishing tasks, users prefer their robots to use the automatic cover.

– The proposed filter "delete and replace" would solve the problem of drawing attention and would be the most preferred filter.

• The preferred techniques for protecting users' privacy regarding the robots' microphones:

– During working time, users prefer their robots to use the encryption method.

– After finishing tasks, users prefer their robots to use the automatic cover.

• The preferred techniques for protecting users' privacy regarding the robots' movements:

– During working time, more than half of the users prefer their robots to use multiple techniques as a first option, while the other group prefers connecting robots with environmental sensors as a second option.

• The users want to use an authentication method if every household member or workplace member uses the robot.

• Face or voice recognition would be the most preferred authentication methods for users.

• Users would be more confident if their robots have an application that allows them to create their own user account, which would allow them to control and manage their privacy settings and policies according to their preferences.

• The best place for users to install the robots' application would be on the external device that comes with their robots.

• The users would believe that using a warning system could assist in mitigating the privacy violation problem.

• Users would prefer their robots to alert them verbally (loudly) when those robots are far away and either move toward them or hear them.

• When the robots and users are in the same place, users would prefer the robots to alert users who do not notice their existence by using light.

• Users would prefer that the social robot turn on a specific color of light around its camera, microphone, or other sensors to show when these sensors are on, off, or recording.

• Users prefer their robots to announce their capability by using sound.

• For robots that are similar to humans in appearance, users would be more comfortable if the robots used the automatic cover and placed their sensors where humans have theirs, such as cameras in the eyes and microphones in the ears, to increase users' awareness.

1.5 Thesis Structure

This thesis is organized as follows: Chapter 2 provides background information on robots. In Chapter 3, an overview of the related works on privacy-sensitive robots is given. The research's proposed solutions are explained in Chapter 4. Then, the methodology that is used to examine the proposed solutions is discussed in Chapter 5. In Chapter 6, the results of the survey study are illustrated. The research limitations and future work are explained in Chapter 7. Finally, the research conclusion is drawn in Chapter 8.

Chapter 2

Background

2.1 Overview

When thinking of the word "Robot," people go with their imagination into the world of science fiction and movies, where most concepts of robots come from. Even if illustrating the meaning of "robot" can be complicated, explaining the meaning of this word can assist in understanding the great history of robotics and in following its stages of development [91]. In fact, the idea of the robot can be traced back to the Middle Ages, even if at that time humans had no appropriate word to define "robot" and no idea about machines that could simulate the functions of humans [31]. However, in 1921, the concept of "Robot" entered people's consciousness when the Czech writer Karel Capek used the term for the first time in a play called "Rossum's Universal Robots" ("R.U.R") [35, 64].

In fact, the term robot is a Czech word derived from "robota," which means a worker or laborer [31, 76], and it was added to the English dictionary without any manipulation or translation [31]. Robots were born again in the 1940s, although the concept and notion of robots had existed for a long time [31]. In addition, the word "Robotics" was first used in 1941 by the author Isaac Asimov to define the technology of robots, and the idea of the "laws of Robotics" was suggested at that time as well [35, 64]. Because it is very difficult to define robots, they could be generally defined as multifunctional manipulator machines that are programmed to do one or many different tasks and actions, that could be self-controlled or remote-controlled, and that could move and interact with the environment [91]. Indeed, there is no fixed field or degree of autonomy for robotics. Robots nowadays have components analogous to those of human beings: a robot has a physical structure, and some have a mobile physical structure like a human body, a motor like the human muscular system, a sensor system like the human senses, a power supply like a human power source, and a computer like a human brain [20]. The robotics field has been evolving for hundreds of years; thus, let us delve deeper into the history of robotics and see how far robots have developed [91].

2.2 Robotics in History

It could be confusing to describe the evolution of this incredible invention, the robot, in chronological order, because robots are heterogeneous; there is a variety of classifications for them, described later in this chapter. To make the evolution of robots easier to follow, we describe their development sequentially by their common features instead of in a strictly historical sequence.

In addition, it would be difficult to list in this research every robot created from the past until the present; the goal of this section is to list the most common and notable robots. In the past, from 3500 BCE until 1300 CE, there were various simple mechanical tools, devices, and systems, simple automated machines, and a simple automated car [35, 64]. In the 12th and 13th centuries, the inventor Al-Jazari created a "mechanized wine-servant," clocks powered by water, and a hand-washing machine that provided users with soap and towels automatically. In addition, he designed an "automaton orchestra" that was powered by water, could float on water, and could play music [33]. In 1495, Leonardo da Vinci designed an artificial man resembling an armored knight, which was able to sit up, wave its arm, move its head, and open and close its jaw [35, 64, 60]. This was the first simple humanoid robot to appear in the real world. In fact, robots designed to look like humans in some way, simulate some human functions, and serve as assistants increased and developed rapidly. In 1560, Turriano created a mechanical monk, which had three small wheels, eyes, lips, and a head, and could walk, nod its head, beat its chest with its right arm, and kiss its rosary [33]. Moreover, in the 1730s, a robot such as a human-sized flutist, which used two artificial lungs to perform 12 songs, was created [35, 64]. In 1760, an android was created that was able to hold a pen and write a limited set of words [64].

In addition, in the 18th century, a "Steam Man" that was used to pull "wheeled carts" was created [35]. However, the creators of robots strived to develop robots further to allow them to communicate with humans and environments. Thus, in 1939, "ELEKTRO," a robot that resembled a human in its appearance, was created; indeed, ELEKTRO was able to walk, talk, and smoke [35, 64]. In addition, in 1969, the WAP-1 was the first biped robot; it had airbags as muscles, could walk on flat surfaces, could turn while walking, and could climb stairs [35]. Moreover, the first full anthropomorphic robot, able to communicate in one language, Japanese, and able to walk and hold objects with its hands, was known as Wabot-1; it was created in 1973 [35, 64]. In 1980, a human-like robot, "Quasi-dynamic walking," was developed; it was controlled by a microcomputer so that it would move one step every 10 seconds [35]. Moreover, in 1985, the human-like robot WHL-11 was designed to walk on a flat surface and to turn, moving one step every 13 seconds [35]. In 1989, the WL-12RIII was the first "biped walking" robot that could walk on terrain, move one step every 0.64 seconds, and walk up and down stairs [35]. Furthermore, in 1993, the upper-body humanoid robot Cog was created to emulate humans and live as a human [6]. In 1996, Honda created the P2, a humanoid robot that had two legs and was able to walk, climb stairs, and carry loads; it was considered the first self-regulating robot. In fact, the P2 was the first step toward creating the ASIMO robot [35, 64]. Consequently, in 1997, Honda created the P3, which was the second step toward creating ASIMO and the first completely autonomous humanoid robot from Honda [35]. However, in the 1990s, many more humanoid robots were created.

Inventors have made robots not only communicate with humans but also emotionally interact with them. In 1998, Kismet, a robot that emotionally interacted with humans, was created [35]. Additionally, Sony came back with a new humanoid robot in 2000, the Sony Dream Robot (SDR). It could walk on both flat and uneven surfaces, recognize ten different faces, and express emotions using body language and speech [35, 64]. Furthermore, in 2002, ASIMO was finally created and released by Honda. ASIMO was designed to be a personal assistant; it could recognize the face, voice, and name of its owner, could read email, and could transmit video from its camera to any PC [35, 64]. In 2005, HUBO, a robot that could move and was connected to a computer through a high-speed wireless connection, was created; indeed, the thinking process for that robot was done by the computer [35]. The evolution of robots has allowed humanoid robots to travel, sometimes leaving Earth to reach space and, farther still, other planets. In 2010, NASA and General Motors developed a humanoid robot assistant, Robonaut 2, which was launched into space to become a permanent resident of the International Space Station [64, 70]. In addition, the robot hitchBOT was created in 2014 to discover cultural attitudes toward social robots; it traveled across Canada and Europe, and it was destroyed in 2015 [64]. In the 2000s, many advanced robots that are equipped with different sensors, perform many functions, and are very similar to humans have been created, such as HRP, NAO, iCub, REEM, and Pepper. Many inventors have created humanoid robots that seem like a real person, beginning in 2003, with examples such as Nadine in 2015 and TALOS in 2017.

There are some types of robots that can perform human tasks or assist humans but do not look like a human; some of them could also be considered walking machines. Let us go back in history and see their development. In 1968, Shakey was the first robot that could move and understand its environment [35, 64]. The robot RB5X was created in 1985; it was equipped with different types of sensors, such as infrared and bump sensors, with remote transmission of audio and video, and with a voice synthesizer, and it could learn about its environment [35]. In 1988, the first "HelpMate service robot" was created to assist humans in a hospital [35]. In 1999, Probotics created the Cye robot, a personal robot that could perform different types of household tasks and work, such as "delivering mail, carrying dishes, and vacuuming" [35, 64]. Furthermore, the vacuum cleaner Roomba was released by iRobot in 2002, and many others have been developed since [35, 64]. In the 2000s, many other robots that are used as personal assistants, such as Jibo, TAPIA, and ZIMBO, have been developed; indeed, some assistance robots can be remotely controlled as well. Now, let us go back again in history and see the evolution of the robots that have the appearance of animals and can simulate the functions of animals. In the 17th century, many mechanized puppets and toys that were used as entertainment for wealthy people were created [35, 64]. Indeed, in the 1730s, robotic beings such as an automatic duck, which produced sounds, ate, drank, paddled, digested, and excreted, were developed [35, 64, 33]. In addition, in 1948, the tortoise robots Elmer and Elsie, autonomous machines, were able to mimic realistic behavior with a simple electric circuit and were able to find their charging place when they needed to charge themselves [35, 64].

Moreover, the Soft Gripper, a snake-like design that could wrap around objects, was created in 1975 [35, 64]. Another snake-like design was created in 1978: the ACMVI (Oblix) robot, which was used in industry as an arm [35]. In 1993, inventors created the smallest micro-robot, Monsieur, a self-propelled robot [64] that could climb slopes of approximately 5 degrees [21]. A fish robot, RoboTuna, which could map the ocean floor, find underwater pollution sources, and more, was created in 1994. Moreover, in 1996, another fish robot was created to study the fish swimming process [35, 64]. The creation of these animal-like robots has allowed robots to interact with the environment in more advanced ways. The Furby, an "animatronic pet," was created in 1998 to interact and communicate in the English and Furbish languages [64]. Furthermore, in 1999, Sony produced AIBO, the first robotic dog, which could respond to sounds and had certain behaviors [35, 64]. In the same year, a robotic version of an extinct species of fish was created by Mitsubishi [35]. In 2005, BigDog, a dynamically stable quadruped robot, was created; indeed, BigDog could cross difficult terrain, climb a 35-degree slope, and run for an hour while carrying an object weighing 340 pounds [64]. In the 2000s, many other robots that simulate animals in more advanced ways have been developed, such as CHiP. The evolution of arm robots can be remarkably observed throughout the history of this type of robot. In 1951, the first articulated arm that operated remotely was designed for the Atomic Energy Commission [35, 64].

UNIMATE was the first programmable robot arm, an industrial robot and electromechanical machine designed for an automated assembly line in 1954 [35, 64, 75]. In 1960, Versatran was the first "cylindrical robot arm," designed as a transfer machine [64]. Moreover, in 1968, another robot arm was created: the tentacle arm, which was controlled by a computer, mounted on the wall, and hydraulically powered [64]. Furthermore, in 1969, the Stanford Arm was considered the first robot arm that was powered electrically and controlled by a computer [35, 64]. In addition, the first commercial industrial robot arm, the T3, which was controlled by the minicomputers available at the time, was created in 1973 [35]. Additionally, in 1975, the Puma arm was created; it was used in industrial operations [35, 64]. In 1992, the CyberKnife robot was invented to screen patients and deliver a pre-planned dose of radiation [64]. Additionally, in the 2000s, the FDA approved the first surgical robot, the "da Vinci Surgical System" [64]. However, a robot arm could also be used as part of a human. In fact, in 1963, the first computer-controlled robotic arm was designed to work as a tool for the handicapped [64]. In addition, in 1998, the Edinburgh Modular Arm System (EMAS), the first bionic arm, was created [35]. In the 2000s, many different types of advanced arms have been used in industry, as human parts, or to assist humans; one example is the Moley robotic cooking arm. Let us go back again in history to trace the evolution of other types of robots, such as vehicles, or robots similar to vehicles, also known as walking machines, which can use legs or wheels. In the 18th century, inventors created the first vehicle that was controlled remotely [64].

In 1968, the first "computer-controlled" walking machine, which had four legs and could walk, crawl, and trot at a very slow speed, was created [35]. Furthermore, the first walking truck, which was controlled manually and could walk at four miles an hour, was created in the same year [35]. In early 1973, the first walking vehicle that had six legs was created [35]. In 1977, another six-legged machine, Variante Masha, was created, with a more advanced way of moving [35]. In 1979, robots as vehicles became more advanced: an autonomous vehicle, the Stanford Cart, could move through a room that was full of obstacles. In fact, the Stanford Cart had a TV camera to take pictures and transmit them to a computer in order to measure the distance between the cart and the obstacles in the room [35, 64]. The evolution of these types of robots led to the invention of vehicles and walking machines that could go underwater. In 1979, the ReCUS (Remotely Controlled Underwater Surveyor) machine, which had eight legs, was created; its maximum speed reached 0.07 m/s. In 1983, there was ODEX, a remotely operated walking machine that had six legs, could reconfigure its appearance, could carry more than 900 pounds, and had a normal walking speed [68]. Moreover, in 1985, Collie-1, a robot that could walk on four legs and move each leg with three degrees of freedom, was created for experimental purposes [35, 64]. Furthermore, in the same year, two remotely operated robots were developed and sent to the "flooded basement of the damaged reactor building" in order to send back information and drill samples for measuring radiation levels [64]. In 1989, Genghis, a walking machine, was able to show how complicated behaviors could emerge from simple and distributed controller systems [64, 71]. In 1989, another robot, Aquarobot, which could go underwater and walk on the seabed, was developed [35].

Later, Dante I, a robot that walked on eight legs, was created in order to explore Mount Erebus in Antarctica; subsequently, it was made stronger to explore Mount Spurr in Alaska, so that the robot could descend into the volcanic crater of the mountain to examine the volcanic gases [35, 64]. In addition, in 1996, the Gastrobot was built to produce carbon dioxide by digesting an organic mass; the carbon dioxide was then used for power [64]. Robots of these types have also traveled beyond Earth to reach Mars. In 1997, NASA created Sojourner, the wheeled, free-ranging robotic rover that sent back more than 17,000 different images, more than 2 billion bits of data, more than 15 rock and soil chemical analyses, and massive amounts of weather data about the planet Mars [35, 64]. In addition, in 2003, Spirit and Opportunity, the twin robotic rovers, were launched by NASA in order to explore Mars [35]. Moreover, in 2011, the "Mars Science Laboratory" Curiosity rover was launched by NASA in order to assess the habitability of the planet, and it is still there today [64]. The first car that was driven without a driver was licensed in 2012 [64]. In the 2000s, more robots of these types have been developed in more advanced ways and can be used anywhere because of their mobility. Let us go back again in history to discover how the inventors of robots strived to create robots that can fly. In 1995, robot developers developed the "unmanned Predator drone" that could fly. Recently, many other flying robots that use advanced technology have been developed, including the Lily Camera drone and the Phantom 3 drone.

From the development history of robots, we can recognize that the earliest robots were very simple and limited. There were no programs, sensors, Internet connections, or other advanced technologies used by robots, and robots moved slowly in very limited places. The robots of that time were able to perform simple tasks and were used for simple goals, such as decoration and entertainment. After that, robots were used in more than one place: in industries and in hospitals, to perform simple tasks. Then, inventors started to increase the capabilities of robots to allow them to communicate with humans everywhere in more advanced ways. Indeed, robots these days have reached high levels of intelligence and come in different body sizes, from robots easily seen by the human eye to micro-robots that are difficult to see with the human eye. Additionally, robots nowadays can move or be sent anywhere on Earth, to space, and to other planets. Indeed, robots today can "drive, walk, swim or fly" [32]. Recently, robots can recognize faces, voices, and even emotions. In fact, their ability to connect to the Internet makes them an even more advanced technology. The existence of these capabilities, such as using cameras, microphones, and other types of sensors, moving widely everywhere, touching and interacting with humans and environments, and accessing the Internet, raises issues of privacy violation. From history, we can also notice that the goal of inventors changed from developing simple robots to developing robots that can perform different types of tasks in more advanced ways. Thus, we can notice that the autonomy of robots increased year after year. In the past, robots were controlled by humans, but nowadays robots can control themselves easily and in different ways. In addition, inventors designed robots without taking into consideration the privacy issues that could emerge.

They programmed the robots and equipped them with different sensors and technologies to produce more advanced robots without keeping in mind the principle of "privacy in design." They did not include any types of constraints to protect users' privacy, and they did not trade off between usefulness and privacy. Recently, however, some researchers have started to provide solutions for protecting privacy while using these advanced robots, and they have begun to balance usefulness and privacy, as explained later in this research.

2.3 Robotics Classifications

According to the history of robotics, the creation of robots started with a few very simple types; over the years, however, many different types of robots have appeared, and there are many different ways to classify them. Robots can be classified by their "positioning," "application," "locomotion," "architecture," "environments," "generation," "size," "powers," "relationships," "controller" types, "sensor" types, and "design" [20].

Positioning of Robots: In this classification, there are stationary robots and mobile robots. Stationary (fixed) robots are those that work without changing their position at one end while moving the other end. For example, they could be fixed to the floor at one end and able to move freely at the other, such as articulated arms. They could be articulated, cartesian, cylindrical, polar (spherical), SCARA, or delta [20, 83, 84].

Unlike fixed robots, mobile robots can move from one place to another; examples are NAO, AIBO, and Pepper [20].

Application of Robots: There are industrial and non-industrial robots, the latter also known as service robots. Among industrial robots, there are several further classifications. Industrial robots are the robots used in fully structured industrial manufacturing environments; they are fully automatic or stationary robots used for specific tasks, such as articulated arms that have the same capability as human limbs, so these robots can change the position of objects from one place to another [112, 20, 69, 111].

In contrast, service robots can be classified into two main types: personal service robots and professional service robots [112, 69, 23, 111]. Service robots perform services and do tasks for humans, mostly have variable levels of autonomy, and are mostly mobile robots [23]. Personal service robots, those used for personal and private purposes, work in quasi-structured environments and perform services and tasks for humans for their own purposes. Examples are assistance robots, robots for domestic tasks, entertainment and leisure robots, personal transportation robots, and home security and surveillance robots [112, 23, 111]. Service robots used professionally, on the other hand, work in fully unstructured environments and serve a group of anonymous users [23]. Examples of professional uses and types are field robotics, professional cleaning, inspection and maintenance systems, construction and demolition, logistic systems, medical robotics, defense, rescue and security applications, search and rescue, underwater systems, unmanned aerial vehicles in general use, mobile platforms in general use, laboratory robots, public relations robots, and humanoid robots [112, 23].

Indeed, some researchers classify service robots into many further main types [69].

Locomotion of Robots: In this classification, robots are grouped by their locomotion systems. Thus, there are stationary robots; wheeled robots, which could have one, two, three, or more wheels; legged robots, which could be "bipedal, tripedal, quadrupedal, hexapod" or have more legs [20]; swimming robots, which are underwater robots; and flying robots, which can float or fly in the air. In addition, there are rolling robot balls; swarm robots, which consist of many small robots [69, 83]; modular robots, which have many robots in one configuration; micro-robots; nano-robots; and soft elastic robots, which are bio-inspired [83].

Architecture of Robots: In this classification, robots are grouped by the software and hardware framework that is used to control them [20]. Such architectures can assist the designers of a robot's system in developing the robot's devices and in equipping its subsystems with computational services.

Environment of Robots: Robots can be classified by the environment in which they work: on the ground, in space, in the air, or in/on/under water [20].

Generation of Robots: Robots can be classified by their generation of development. In the first generation, there are simple mechanical robots, such as arms [20]. In the second generation, there are robots that have a basic level of intelligence thanks to their sensors. They can communicate and synchronize with other robots independently, which means they do not need human supervision. Finally, the last generation consists of robots categorized as insect robots and robots with a high degree of autonomy, which are considered a part of both "computer techniques" and "artificial intelligence" [20].

Size of Robots: In this classification, robots are grouped by standard units of measurement, such as "linear dimension" or "weight" [20]. Indeed, in 1989, the idea emerged that creating a huge number of small, inexpensive robots could be better than creating a few large, expensive ones [64].

Types of Controller: There are three types of robots under this classification [20]. First, "non-servo," which is the "open loop system." Second, "servo," which is the "closed loop system." Finally, "servo-controlled," which is the "closed-loop system" that is controlled in a continuous way [20].

Power of Robots: In this classification, robots are grouped by their power source, such as "pneumatic, hydraulics, electrical, flywheel energy storage, organic garbage, nuclear fusion, a radioactive source..." [20].

Relationships of Robots: There are three types of robots in this classification: automated robots, bio-technical robots, and interactive robots [20]. Automated robots work and perform their tasks independently, which means they do not need the direct contribution of humans. Bio-technical robots, however, need human contribution, such as a human command. Interactive robots also need human contribution, but only periodically [20].

Types of Sensors: Robots in this classification are grouped by the sensors with which they are equipped [20].

Design of Robots: In this classification, robots are grouped by their design and shape, such as robots that resemble a human, an animal, an arm, an airplane, or a vehicle.

It would be difficult to fit all robots into one classification; however, a combination of these classifications is appropriate for this research. In this research, we focus on social service robots that interact closely with humans using cameras or microphones. We address ground-based autonomous mobile robots that can process or record data, which are currently the most common type of social robot. These robots may be humanoid or animal-like.

2.4 Robotic Sensors

Robots in the past were very simple; they had few or no sensors. Nowadays, robots are equipped with a wide variety of sensors. Indeed, the first common-sense database was created in 1984 in order to assist robots in understanding the human world [64]. The following are the most common types of sensors.

Light Sensors: These sensors, such as photoresistors, photovoltaic cells, phototubes, and CCDs, are used to detect light and generate a voltage difference [77].

Sound Sensor: Generally, this is the microphone, which detects sound and returns a voltage proportional to the sound level. It can be used for navigation, interaction, speech, and voice recognition [77]. In addition, robots can use a speaker.

Vision Sensor: Generally, this is the camera, which is used to see the environment and allow robots to move around [106, 77].

Temperature Sensor: This is used for measuring the temperature of the surrounding area [77].

Contact Sensor: These sensors, such as "a push button switch, limit switch or tactile bumper switch," are triggered when there is physical contact with objects. In addition, there are capacitive contact sensors that respond to human touch, such as the touch screens used on smartphones [77].

Proximity Sensor: Unlike the contact sensor, the proximity sensor does not need physical contact to react; instead, it detects the presence of objects within a specified distance [77]. There are many types of proximity sensors, such as infrared (IR) transceivers, ultrasonic sensors, and photoresistors [77].

Distance Sensor: They are also called range sensors. Indeed, the proximity sensor could be used as a distance sensor. There are many types of distance sensors, such as ultrasonic distance sensors, infrared distance sensors, laser range sensors, encoders, and stereo cameras [77].
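To make the range-sensing idea concrete, the following is a minimal sketch of how an ultrasonic reading is typically converted into a distance: the sensor measures the round-trip time of a sound pulse, and the distance follows from the speed of sound. The timing value and function are illustrative placeholders, not a specific robot's API.

```python
# Minimal sketch: converting an ultrasonic echo's round-trip time to distance.
# A real driver would supply the measured round-trip time.

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 degrees C

def echo_to_distance_m(round_trip_s: float) -> float:
    """The pulse travels to the object and back, so halve the total path."""
    return (round_trip_s * SPEED_OF_SOUND_M_S) / 2.0

# Example: a round trip of about 5.8 ms corresponds to roughly one meter.
print(echo_to_distance_m(0.0058))  # ~0.99 m
```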

Pressure Sensor: This is used for measuring pressure, such as measuring the amount of pressure that is required to grip an object [77].

Tilt Sensor: This sensor is used to measure the tilt of objects [77].

Navigation and Positioning Sensor: These sensors, such as the Global Positioning System (GPS), digital magnetic compasses, and localization systems, are used to determine the approximate position of robots and allow them to move around. However, some of them are used for indoor positioning and some for outdoor positioning [77].

Acceleration Sensor: This sensor measures tilt and acceleration and can be affected by static or dynamic forces [77].

Gyroscope Sensor (Gyro): This is used for measuring the “rate of rotation around a particular axis” [77].

Inertial Measurement Units (IMU): Many properties of other sensors could be combined with this unit [77].

Voltage Sensor: This sensor is able to convert the voltages from low to high and vice versa [77].

However, there are many other sensors that could be used on robots, but it would be difficult to list all of them in this research.

2.5 Robot Communications

Because robots have become more intelligent, mobile, and interactive, communication protocols and channels are now required by most robot applications in order to perform their functions. Examples are transmitting a stream of video or audio, receiving and sending data and commands, and configuring and controlling packets [90].

Different communication capabilities are used by different types of robots. Indeed, robots can use both wired and wireless communication protocols and channels: wired channels such as Ethernet, serial, and USB, and wireless protocols such as Wi-Fi, Wireless LAN, Zigbee, Bluetooth, Infrared, and 3G [90, 32]. Early in the development of robot wireless communication, a large number of robots used infrared technology because of its low cost [103]. The problems with this technology were that infrared waves could not pass through obstacles such as walls, and that its quality and communication rate were poor. Thus, robot designers came to prefer radio frequency (RF) technology for mobile robot connections. Then "spread spectrum communication," which uses point-to-point or broadcast communications, was adopted, and it evolved to offer a high data rate, long distances, and multiple network structures [103]. Generally, stationary robots use a wired communication channel, and a mobile robot may also use a wired channel for diagnostics, configuration, or software development [90]; however, wireless communication has become the practical scheme for mobile robots [104]. The choice among these wireless protocols is based on the range of operation that robots require to perform their tasks successfully, or on other factors such as speed and reliability [90, 52]. For example, Wi-Fi can be used within a range of about one hundred meters [52], while Bluetooth and infrared are used within a short range, such as ten meters [52].

Indeed, a Wi-Fi connection can be established in two different ways. First, the robot can set up an "access point" to allow the operator to connect to it. Second, the robot can connect to an existing Wi-Fi network to increase its range. Robots that require long range use cellular or satellite networks [90, 52], and teleoperated robots that must cover very long distances can use the Internet. Recently, some robots have used a combination of connections, such as Wi-Fi and LTE, so that they can switch between them [90].
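As an illustration of the link-switching behavior just described, the sketch below prefers Wi-Fi and falls back to LTE when the Wi-Fi link is down. The link probes are hypothetical placeholders (simulated here), not an actual robot networking API.

```python
import random

# Hypothetical link probes; a real robot would query its network interfaces
# or its middleware. Both are simulated for this sketch.
def wifi_is_up() -> bool:
    return random.random() > 0.2   # pretend Wi-Fi is reachable 80% of the time

def lte_is_up() -> bool:
    return True                    # pretend cellular coverage is always present

def select_link() -> str:
    """Prefer Wi-Fi for bandwidth; fall back to LTE for coverage."""
    if wifi_is_up():
        return "wifi"
    if lte_is_up():
        return "lte"
    return "offline"

print(select_link())
```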

2.6 Summary

In this Chapter, we provided and discussed the general background on robots that assists in understanding the content of this research. We started the Chapter by explaining the concept of "robot": what robots are and how they started. In addition, we covered the history of robots and how they have developed gradually over time. Furthermore, we explained the classification of robots. Finally, we described the sensors that robots use and how robots communicate. In the following Chapter, because this research is about privacy-sensitive robots, we discuss the threats of robots and the privacy issues related to robotics. We explain in detail users' privacy concerns when using robots or cameras and explain the main factors of robots that could affect users' privacy. For each factor, we give an overview, discuss the related work, and explain the limitations.

Chapter 3

Literature Review

3.1 Overview

After providing the general background on robots and showing the significant development of robot science, which has caused the emergence of cybersecurity problems related to these robots, it becomes necessary to review the previous studies and research papers that help address these problems. In this research, we focus on the privacy issues of robots. Thus, we review the progress in the privacy-sensitive robotics research area in order to discover the gaps in this area and either solve the limitations of existing work on privacy-sensitive robotics or propose new solutions.

3.2 Threat on Robotics

When robots are designed and manufactured, cybersecurity is not highly prioritized and is sometimes forgotten altogether [16]. Indeed, as with other advanced systems, development costs, speed to market, and functionality have a higher priority [16]. As we saw, robots of all sizes and types can be found everywhere in people's lives. With the great capabilities that robots have and the direct communication between robots and humans, many security and privacy issues can arise when robots are used. In addition, because of the weakness of robots' authentication systems, it becomes easy to access a robot's system without valid authorization. Furthermore, the lack of appropriate encryption techniques can assist in violating privacy. Moreover, because most robot features are programmable and reachable, a weak robot configuration can be easily accessed and changed. In fact, there are many layers and factors that can be exploited to cause security and privacy issues, such as the robot's hardware, which can be exploited during production or during use. Other layers that can be exploited are the firmware/OS layer and the application layer [16]. Indeed, various technical capabilities can pose a threat to privacy; for example, the communication protocols can be exploited to sniff data and mount other attacks that violate users' privacy [90]. Additionally, the audio and video recording capabilities can collect and record a great deal of information because of the robots' ability to move. Robots can actually be "self-moving spying devices" that pose great privacy risks to their users. Furthermore, because robots can move, a robot can be forced to leave a secure, controlled area, move to another area, and then carry out a physical attack or theft [90].

Indeed, there are many different types of threats; in this research, we cover some of them. Focusing on the factors that cause the privacy issues of robots, and proposing solutions that could assist in mitigating privacy violations, is a significant step toward producing privacy-sensitive robots that use techniques matching users' preferences and that increase users' confidence and convenience. Before discussing the privacy issues of robots, we briefly explain the meaning of privacy.

3.2.1 Meaning of Privacy

Many research papers define the meaning of privacy and have studied privacy from many different aspects [47]. In fact, even with the differences in the meaning of privacy and in privacy norms that make this word very complicated, the importance of privacy can be noticed in every human culture [87]. This means that "privacy" is a word that can have numerous meanings and definitions and that can be used differently by persons or cultures. Indeed, because there are many different definitions of privacy, it can be challenging to define privacy briefly or to choose one definition and describe it generally. However, a person's ability to be alone without any interruption can be considered a simple explanation of the word "privacy" [105]. In fact, researchers have provided numerous definitions of privacy from various perspectives and have tried to identify the meaning of privacy that is used most frequently [63]. However, Newell mentioned that researchers and theorists did not agree on the concept of privacy. Thus, researchers were not able to agree whether privacy is considered an "attitude," "beliefs," "behavior," "treatment," "objectives," a "phenomenal state," or something different [63]. Consequently, this clarifies that privacy can be difficult to define from one perspective. Privacy scholars use a privacy taxonomy in order to divide "privacy" into component constructs and ideas [47]. Researchers have stated that there are many different taxonomies of privacy constructs. The most common one distinguishes informational, physical, psychological, and social privacy, each with further divisions [87]. Thus, privacy can be defined as a set of constructs related to "perceived control over informational, physical, psychological, and social aspects of one's life" [47].

3.3 Privacy Issues in Robotics

Humans have changed their thinking about personal privacy since the advent of the Internet, webcams, and mobile devices [87]. Today, many other advanced technologies and devices can move around the world and do more, which could increase humans' concerns and change their thinking completely. Webcams are statically attached to computers, and mobile devices move only when humans carry them. Robots, however, now have very advanced sensors that can collect and record private information, and a high level of autonomy [17] that lets them move by themselves, so they can collect sensitive information inappropriately and share these data with others if they are connected to the Internet [87, 47]. In fact, studies have stated that many social robots that interact with humans rely on a network to send the data from their sensors to other computers for processing.

The Wall Street Journal reported that, after an investigation of the Cayla doll, Germany's Federal Network Agency issued a demand that all parents who had this doll in their houses destroy it [40, 54]. It stated that parents who ignored the demand could have to pay a fine of up to 25,000 Euro and could face up to two years in prison. In fact, the Cayla doll is only one example of a spy doll. In addition, Denning et al. demonstrated that there are many security holes in many different "teleoperated robot toys" equipped with video cameras [47]. Nowadays, there are many other robots used for many different purposes. From the researchers' point of view, the most dangerous type of robot is the interactive social robot, because of its advanced sensors, cameras, microphones, and mobility [40, 54]. Therefore, people have become increasingly concerned about their privacy when they start using robots or see robots around them. However, as mentioned before, there are many factors that can increase the privacy risks of using robots. Some studies and research papers have strived to study some of these factors and to provide solutions that could assist in mitigating or preventing privacy violations. Indeed, researchers have tried to understand humans' privacy concerns when robots are around in order to develop appropriate techniques and solutions for protecting users' privacy. To the best of our knowledge, the following are the studies that examined humans' privacy concerns in the presence of robots and the studies that covered the robot features and capabilities that could pose risks to privacy.

3.3.1 Privacy Concerns

3.3.1.1 Overview

People everywhere are concerned about their privacy, but the level of their privacy concerns can differ or coincide depending on specific information, objects, situations, and locations. Thus, researchers first need to understand what people care about most, in terms of privacy, in the presence of robots in order to find techniques that could assist in protecting their privacy. There are limited studies that focus on discovering users' privacy concerns related to the presence of robots. There are different ways that assist researchers in discovering users' privacy concerns when they use robots, and each of these ways has some benefits and some limitations.

3.3.1.2 Related Works

Experiments could be one of the best ways to let participants live in the moment and discover their concerns, even though this approach has limitations. The limitations include the sample size, the limited types of experimental scenarios, the availability of robots, and the availability of software or hardware that the experiments may require.

Thus, some researchers, such as Lee, M. K., Tang, K. P., Forlizzi, J., and Kiesler, S. in [53], conducted an experiment to study users' privacy perceptions in "human-robot interaction" through interviews about robots in a workplace community. The researchers stated that understanding users' privacy perceptions is significant when designing "privacy-sensitive" structures. They conducted an interview in two parts: one studying the users' understanding of the robot's ability to record data, and the other studying the users' attitudes toward the data that is recorded. Ten participants were interviewed to determine their understanding of privacy concepts regarding the "Snackbot" robot, which appears in private and public workplace areas and is equipped with high-standard sensors. Regarding the users' understanding of the robot's ability to record data, the results demonstrated that only a limited number of participants understood the robot's ability to collect data. Furthermore, none of the participants were concerned about the privacy of the data collection, and they did not recognize the risk of this technology. Regarding the users' attitudes toward the recorded data, all of them agreed that if they had been informed earlier about the recording features, they would not have been concerned or worried at all about the recording. Related to "accidental recordings," some of the participants asserted that if there were any sign of such a capability, they would be more comfortable with the situation. In addition, some of the participants were concerned about the robot's ability to use cues from their interaction to infer information about them.

In addition, Beer, J., and Takayama, L. in [5] strived to understand the perceptions of older adults regarding the concerns, benefits, and adoption norms for mobile remote presence (MRP) systems. The researchers conducted an experiment: in one part the participants met a visitor through the MRP system, and in the other the participants drove the MRP system to visit a person. Regarding the benefits of the MRP system, the older adults stated that the system had several advantages. However, they also stated many concerns about using the MRP system, such as the ethics of its use, personal privacy, excessive use of the system, and the absence of face-to-face communication.

Moreover, Caine, K., Šabanović, S., and Carter, M. in [13] evaluated older adults' perceptions of privacy and their behaviors when monitored by "a camera, a stationary robot, and a mobile robot," and then compared the three conditions. Thus, they used a real experiment to discover users' privacy concerns. The results indicated that older adults are concerned about their privacy, and these concerns would lead them to change their behavior at home when monitored by embodied robots or a camera. In fact, the researchers found that the condition that most heightened the older adults' reactions was the camera.

Furthermore, Zhang, G., Liang, H. N., and Yue, Y. in [109] examined the usage of flying robots, such as drones, in public areas. The researchers strived to see how people accept the presence of such robots in public areas. They conducted two different experimental studies. The results demonstrated that people accept these robots but suggested that they would be more comfortable having the robots around them if the robots had safety mechanisms and if their data could be protected.

Additionally, Syrdal, D. S., Walters, M. L., Otero, N., Koay, K. L., and Dautenhahn, K. in [97] conducted an "exploratory" study through an experiment. They used a scenario with two different examples that aimed to discover users' perceptions of privacy issues while using social robots. In addition, the researchers strived to discover the users' suggestions on the best ways to balance the robots' need for private information, which they require to provide services, against the risk of leakage of such information. The results demonstrated that none of the participants was totally comfortable with storing their personal information, behaviors, personality, or psychological characteristics on the social robots. Additionally, the participants stated that they needed a trade-off between privacy and utility.

Using described or watched scenarios could be an appropriate approach when it is difficult to conduct a real experiment. Indeed, with scenarios, researchers can use more than one scenario and can involve more participants than in an experiment, because the study can be run online with no need for real robots or other hardware or software. In fact, some studies hold that using scenarios can assist in discovering new privacy concerns and can help participants feel that they are living in the moment of those scenarios.

Some researchers, such as Krupp, M. M., Rueben, M., Grimm, C. M., and Smart, W. D. in [47], explored the types of users' privacy concerns and the related topics that could interest users when using robots. The researchers chose three groups and three different scenarios that included a robot. In the first scenario, the robot acted as a tele-maid working at home. In the second scenario, the robot was used to allow a boss to attend a meeting. Finally, the robot acted as a medical robot. The researchers stated that their study discovered new privacy concerns that had not been mentioned in the literature, such as theft, marketing, embarrassment, and hackers. Additionally, the results illustrated that financial documents, medical information, passwords, and identification information are the information the participants cared most about in terms of privacy. Moreover, the results showed that bedrooms and bathrooms were found to be the most personal areas. Thus, the authors believed that, by knowing many privacy categories and concerns, researchers interested in studying privacy-sensitive robotics could have a complete list to measure. In addition, privacy-sensitive robotics researchers could have new questions to address and further suggestions for improving the design of privacy-sensitive robots so as to produce robots that avoid privacy violations.

In addition, other researchers, such as Rueben, M., Bernieri, F. J., Grimm, C. M., and Smart, W. D. in [86], used scenarios but applied contextual frames to them: in the first scenario, the robot was presented as a stranger, and in the second, the robot was presented as a close confidante. The researchers studied users' privacy concerns while a telepresence robot was in a home, using the different contextual frames. They used an online questionnaire to gather responses after the participants watched animated videos of the telepresence robot acting once as a close confidante and once as a stranger. The results of four different surveys demonstrated widely different effects when the robot's identity changes from a stranger to a close confidante.

However, scenarios can be limited in their variety, which could prevent researchers from discovering other specific types of privacy concerns tied to certain information, objects, or locations. Some researchers tried to discover privacy concerns by asking participants several clear and direct questions via a survey, which can collect many clear responses covering many types of privacy concerns. Indeed, Butler, D. J., Huang, J., Roesner, F., and Cakmak, M. in [11] explored privacy concerns along three different dimensions, namely locations, objects, and information, by conducting a "web-based user survey." The results showed that the participants were very concerned about their privacy and about physical harm. In addition, the results demonstrated that the respondents were concerned about their privacy regarding certain locations, information, and objects that could be exploited to affect their privacy negatively. Moreover, the results indicated that the participants might not anticipate some threats that could be exploited to harm them.

However, some other researchers conducted a discussion about privacy concerns. Krupp, M. M., Rueben, M., Grimm, C. M., and Smart, W. D. in [46] conducted a study intended to discuss the privacy concerns raised by the presence of telepresence robots and the solutions that should be applied to those robots in the future to mitigate privacy violations. The results demonstrated that hacking was the biggest concern when using telepresence robots. In addition, the results indicated that security threats related to private information and identity were the other concerns that the participants took into consideration. Moreover, the results showed that participants had a significant concern about marketing when robots could learn more about them.

In addition, Carnevale, A. in [15] highlighted the robots' influence on humans' ideas of privacy in the future. The researcher stated that the impact of robots could reach the whole structure of society, not only people and their rights, and asserted that in the future the relationship between robotics and privacy should be regulated.

3.3.1.3 Limitations

These studies showed that a clear understanding of users' privacy concerns when using robots could assist privacy-sensitive robotics researchers and designers in developing appropriate techniques that, while they cannot guarantee the protection of privacy, could assist in mitigating privacy violations. However, studies are still needed that cover most of the information, objects, situations, and locations about which people could be concerned.

3.3.2 Shape of Robots

3.3.2.1 Overview

The appearance of a robot, including its shape and size, can affect users' awareness of the robot's capabilities, their behaviors toward the robot, and their confidence. Thus, an appropriate robot design can play a significant role in users' understanding of the robot's capabilities, in their behaviors, and in their confidence, which in turn can assist in protecting users' privacy.

3.3.2.2 Related Works

Many research papers have studied the relationship between robots' appearance and users' confidence. Robinette, P., Wagner, A. R., and Howard, A. M. in [82] studied the effects of a robot's appearance on a user's trust. The researchers conducted an experiment to discover the dynamics of trust between frightened evacuees and robot guides. Their goal in designing the robots was to generate trust and maintain it over the interaction between robots and evacuees. The researchers used two different robot designs and appearances and examined which of them would gain the evacuees' trust so that they followed the robot's instructions. The results demonstrated that the evacuees followed the guidance robot described in the first scenario. This means that designing a robot with a certain shape and appearance can affect users' confidence and trust.

However, some other researchers studied the relationship between users' behaviors and robots' appearance. Austermann, A., Yamada, S., Funakoshi, K., and Nakano, M. in [4] studied users' behaviors toward robots with different appearances. The researchers compared the users' ways of giving commands and feedback to a "humanoid robot and a dog-shaped pet-robot." The results indicated that the way of giving commands did not depend on the robots' appearance but on personal preference; the way of giving feedback, however, did depend on the robots' appearance. Thus, the users rewarded the pet robot in the same way they would reward a real dog, such as touching its body and saying "well done" or "that was right," while with the humanoid robot the users only used words, such as "thank you," without any touching. This means users may base their interaction on a robot's appearance.

Moreover, Kanda, T., Miyashita, T., Osada, T., Haikawa, Y., and Ishiguro, H. in [39] conducted an experiment with two humanoid robots of different appearance and examined users' behaviors toward them. The results illustrated that the participants' verbal behavior did not change with the robots' appearance, while their non-verbal behavior, such as distance and response time, was affected by it.

Furthermore, de Kleijn, R., van Es, L., Kachergis, G., and Hommel, B. in [18] investigated people's behaviors, such as fairness preference as well as strategic and altruistic behaviors, toward different types of robots, namely "a human, a semi-humanoid and a spider-like robot, and a laptop," in two different games. The results demonstrated that behaviors such as fairness preference and strategy were not affected by the robots' appearance but by the person's propensity to anthropomorphize others. Altruistic behavior, however, was affected by the robots' appearance.

Additionally, Walters, M. L., Syrdal, D. S., Dautenhahn, K., Te Boekhorst, R., and Koay, K. L. in [102] studied people's perceptions and behaviors toward various robot appearances. The results demonstrated that the participants preferred robots with "human-like" characteristics and appearance. Goetz, J., Kiesler, S., and Powers, A. in [78] conducted an experiment to study the relationship between robots' appearance and humans' acceptance of and behavior toward robots. The results showed that the participants preferred humanoid robots for the jobs that called for them. Indeed, the results indicated that the appropriate design of robots' appearance and communication behaviors can play a significant role in producing effective and desirable social robots.

A few research papers have studied users' perception of privacy and users' awareness with respect to robot appearance. As mentioned, Lee, M. K., Tang, K. P., Forlizzi, J., and Kiesler, S. in [53] conducted interviews about social robots that provided services in the workplace and were equipped with high-standard sensors. The researchers stated that "privacy-sensitive designs" that could increase humans' awareness and understanding of privacy are strongly needed with the new technologies. Many previous research studies have succeeded in promoting humans' privacy in different areas. However, the researchers in this article stated that designing a privacy structure for robots needs a particular process and steps, for several reasons. First of all, robots are very complicated, with advanced technology, because they have highly developed sensors. Second, the mobile and autonomous nature of robots can confuse the normal boundaries between public and private areas. In fact, the rules that should be applied to robots are still not clear. Third, robots that have a human appearance and use human language in their communication with users could lead users to trust these robots and share their private information with them. Finally, the operating cost of these robots is high, which leads users to share the use of the robots. In addition, because these robots can move from one place to another, they might also collect bystanders' data, which may not be wanted.

The results demonstrated that few participants recognized the robot's sensors, because the robot's design did not make them clear enough to the users. Even after an explanation of the sensors the robot had, the participants were surprised that the robot's camera was omnidirectional, meaning it could rotate 360 degrees, so the robot could see behind its back without moving its head. This misunderstanding might be because the robot had a human-like shape, so the users did not recognize that the robot could have capabilities beyond a human's, such as moving its head in that way. In addition, the camera can be thought of as the robot's eyes, so no one would imagine that a robot that looks like a human could have eyes in the back of its head. Additionally, few participants knew about the robot's video and audio recording features for security and performance purposes, while others had no idea about the recording at all. This misunderstanding could also stem from the robot's human-like appearance leading users to assume it had only human-like capabilities. The researchers found that the participants were not able to infer the robot's sensing capabilities from its appearance. This means an appropriate design could help users be aware and comfortable.

The researchers concluded by clarifying that robots' capabilities for data collection and information construction should be made known. In addition, the capabilities of robots can be misunderstood when the robot seems human-like. In fact, a robot that looks like a human might seem more skilled at some tasks, such as a bathing task, than a machine-like robot, but if users value their privacy, they would prefer a machine-looking robot to take care of their personal tasks. However, when working with a robot on tasks such as a personal banker or game partner, users would prefer a robot that looks like a human [51]. Indeed, Calo, M. R. in [14] stated that people would disclose less to a robot that appears like a human. Moreover, an expert on privacy, law, and the internet, Mireille Hildebrandt, stated that if robot designers make robots look like humans, users might trust them in a very risky way [107]. In addition, the more a robot looks like a real human being, the more the distinction between machines and humans becomes unclear [51, 1]. All of these researchers stated that the appearance of robots could play an important role. Thus, understanding exactly how to design robots' appearance is a significant step in producing privacy-sensitive robots. The main problem in this field is to discover the appropriate mix of "machinelike and humanlike" appearance in order to meet people's needs and goals and to provide proper functionality.

3.3.2.3 Limitations

Few researchers have studied the effect of robots' appearance on privacy. Therefore, research papers are needed that study the relationship between robots' appearance and privacy protection in order to design robots that are appropriate for protecting users' privacy. There is also a need to ask people what the ideal robot design would be to help protect their privacy.

3.3.3 Robots’ Perception (Camera and Microphone)

3.3.3.1 Overview

Many robots are equipped with many types of sensors. Indeed, some robots can see by using a camera, hear by using a microphone, or feel by touch, which could violate users' privacy. The robot's camera and microphone can affect users' privacy in many different ways. The cameras act as the robot's eyes, so the robot can see everything and record what it sees; the microphone acts as the robot's ear, so the robot can hear everything and record what it hears. Thus, researchers have started to study ways to constrain what robots can see and hear in order to protect users' privacy.

3.3.3.2 Related Works

Different studies have focused only on protecting users' privacy from cameras. Indeed, many researchers focused their studies on finding ways to protect sensitive information and private situations that a camera can record. Nowadays, nakedness detectors are used by many different robots to protect users' privacy. In fact, Fleck, M. M., Forsyth, D. A., and Bregler, C. in [25] produced a strategy for detecting naked people who appear in the scene, so that a naked person in the image can be covered with a mask matching the person's skin color.

In addition, another study assisted in protecting nakedness. Fernandes, F. E., Yang, G., Do, H. M., and Sheng, W. in [24] proposed a way to let robots detect a nakedness situation and immediately turn away in order to protect this private situation. The method relies on Convolutional Neural Networks (CNN), which were trained on a database of more than 2900 pictures collected from the Internet and the researchers' "ASCC Smart Home." In the experiments, when the robot detected a sensitive situation, it turned away and announced its purpose verbally. The results demonstrated that humans preferred robots that avoided privacy-sensitive situations by taking some action.

There are many different types of image manipulation techniques that could be used to protect privacy, such as pixelation, blur, abstraction, redaction, and replacement (three of them are sketched in code after the studies below). In fact, an image can be modified to protect privacy by reducing its resolution; this method is known as pixelation [38]. Redaction, on the other hand, works by removing the pixels of an unwanted part of an image, producing a black box over that part of the image. The blur technique works by allowing the value of certain pixels to be affected by the values of the surrounding pixels (smoothing the image); it is considered a common approach to image manipulation [34, 38]. Moreover, abstraction works by "applying a combination of bilateral and mean shift filtering" [34]. The replacement filter is similar to redaction, but in replacement the unwanted area is covered by the background (background color) that is behind the deleted object [34, 38].

Indeed, a few works used some of these techniques for privacy protection. Ryoo, M. S., Rothrock, B., Fleming, C., and Yang, H. J. in [88] aimed to find solutions that protect humans' privacy when a robot is around them by ensuring that robots able to recognize people's activities, and to assist them in many aspects of their lives, do not record any video that could attack these humans' privacy. The researchers introduced a technique, inverse super-resolution, which addresses the privacy violation problem by generating very low-resolution videos. The experiment demonstrated that robots with the inverse super-resolution technique were able to recognize activities even in the very "low-resolution" videos.

However, some researchers tried to use more than one technique in their studies to determine the level of privacy protection that each filter could provide, and they strived to evaluate the results of these filters to see which provides more privacy. Boyle, M., Edwards, C., and Greenberg, S. in [8] used two types of filter, a blur and a pixelized filter, on video to protect privacy and to analyze their effects on awareness. The two filters ranged from "heavily applied filter" levels that protect all information to "lightly applied filter" levels that disclose the complete information in the video, with seven levels in between. The results illustrated that the blur filter could protect privacy while providing awareness information better than the pixelized filter.

In addition, Zhao, Q., and Stasko, J. in [110] used several filter techniques: pixelization, edge detection, shadow-view, and live-shadow. They evaluated these filters according to their effectiveness and utility when used to protect privacy. The results showed that the identity of an actor could be difficult to determine, but the user's behaviors, activities, clothing color, and hairstyle could still be used as a source to recognize the user.
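To make the filter definitions above concrete, the following is a minimal sketch of three of them, pixelation, blur, and redaction, assuming OpenCV; the block size, kernel size, region coordinates, and file names are illustrative choices, not parameters from the cited studies.

```python
import cv2

def pixelate(img, block=16):
    """Pixelation: downscale, then upscale so each block shows one color."""
    h, w = img.shape[:2]
    small = cv2.resize(img, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

def blur(img, ksize=31):
    """Blur: each pixel becomes a weighted average of its neighborhood."""
    return cv2.GaussianBlur(img, (ksize, ksize), 0)

def redact(img, x, y, w, h):
    """Redaction: cover an unwanted region with a black box."""
    out = img.copy()
    out[y:y + h, x:x + w] = 0
    return out

frame = cv2.imread("frame.png")  # hypothetical captured frame
cv2.imwrite("frame_pixelated.png", pixelate(frame))
cv2.imwrite("frame_blurred.png", blur(frame))
cv2.imwrite("frame_redacted.png", redact(frame, 100, 50, 200, 150))
```

Stronger pixelation (a larger block size) trades utility for privacy, which is exactly the balance the studies above evaluate.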

Some other researchers continued to use multiple techniques, evaluating their results and analyzing the trade-off between the privacy protection the filters provide and utility. Korshunov, P., Cai, S., and Ebrahimi, T. in [44] used a crowdsourcing approach to evaluate the following filters in the context of privacy: blurring, pixelization, and masking, and they analyzed the balance between privacy and clarity of actions. The results illustrated that the pixelization filter offered the best balance between utility and privacy, while the masking filter provided low intelligibility and high privacy protection. In terms of the trade-off between clarity and privacy protection, pixelation was the best; regarding privacy alone, the masking filter protected it the most, but with the highest obscurity, followed by pixelation, which was able to balance the two aspects. Lastly, the blur filter provided the highest clarity and the lowest privacy protection.

In addition, Erdelyi, A., Barat, T., Valet, P., Winkler, T., and Rinner, B. in [22] proposed a cartooning filter that removes private details from the image. The paper evaluated three types of filters, cartooning, blur, and pixelation, according to privacy protection and utility. The cartooning filter can be applied to the whole image or only to a certain private area of it. The results demonstrated that cartooning could protect privacy at an acceptable level while preserving utility, whereas the pixelation filter provided the lowest utility and more privacy; the result of the blur filter lay between those two.

The previously listed studies used techniques whose manipulation is clearly visible in the images and videos. However, some other researchers focused their studies on protecting users' appearance, especially users' faces, and tried new types of techniques for that purpose that avoid degrading the image quality or deleting the facial expression, which is sometimes needed to convey the meaning of the image. Nakashima, Y., Koyama, T., Yokoya, N., and Babaguchi, N. in [62] proposed a method that changes the facial areas in a very professional way without changing the facial expression. The results indicated that the suggested techniques succeeded in protecting privacy and preserving facial expression.

In addition, Korshunov, P., and Ebrahimi, T. in [45] strived to replace the original faces with other similar faces while maintaining the similarity between the original face and the resulting one, but without the ability to distinguish and identify the original face. Thus, they used a "morphing-based privacy protection" technique. The researchers used the "FERET" dataset to test their face-morphing method. With the resulting images, they then used "face detection and recognition" mechanisms to make sure that the altered faces preserved face similarity without allowing recognition of the real face, which means protecting privacy.

Some researchers strived to use techniques that likewise hide as much of the manipulation as possible, but for more objects than users' faces only. Thus, they used strategies such as "inpainting" or "image completion" that aim to hide the target objects and complete the background behind them. This is also considered a replacement filter, but inpainting can be more accurate. However, applying this technique requires complete knowledge of what is behind the object to be deleted, which can sometimes be impossible [38].

However, some research papers strived to address aspects of privacy protection in robotics and to trade off privacy against utility while using different techniques. Jana, S., Narayanan, A., and Shmatikov, V. in [37] used DARKLY, a privacy protection system. The researchers evaluated DARKLY on 20 different "perceptual applications" designed for recognition, tracking, surveillance, and detection. The results demonstrated that the system was able to protect privacy without reducing the functionality or the accuracy of the applications. In addition, the results confirmed the system's ability to balance privacy and utility. Furthermore, the results illustrated that even with a high level of privacy protection, the effectiveness and usefulness of the applications remained acceptable.

In addition, as mentioned, Butler, D. J., Huang, J., Roesner, F., and Cakmak, M. in [11] examined the "privacy-utility tradeoff" for remotely operated robots by decreasing the visibility of the visual data these robots capture, in order to protect users' privacy while taking into account the robots' need to complete their tasks successfully. The researchers conducted two surveys to build a privacy framework, and they ran a user study to examine the effects of filters on the visual data and on the robots' ability to complete their tasks. After collecting information about privacy concerns in the first survey, the researchers applied four filter techniques to the robots in order to protect privacy. They then conducted the final survey to get the participants' feedback on how the filters' results addressed their privacy concerns. The results demonstrated that the most preferred filter for protecting users' privacy was the "superpixel" filter. In addition, the results showed that the participants were more concerned about their privacy after seeing the results of the filtering techniques. Additionally, the results indicated that the participants wanted a balance between utility achieved and privacy loss in some contexts. However, these findings were derived from the users' point of view. To complete the study, the researchers tried to find a filter that could protect users' privacy while letting the robot complete its tasks successfully. The authors ran two tasks, one measuring the robot's utility under the filter and the other examining the privacy protection. The results indicated that, in terms of utility, completing the task with the obscured view gave a similar result to completing it with the clear view, but the obscured view was better at protecting privacy. Moreover, the results showed that practice could improve performance with the filtered view. Furthermore, collecting the data could decrease privacy protection, but using the filters could assist in mitigating this problem. As a result, this study showed that filters can improve privacy while balancing utility.

Klow, J., Proby, J., Rueben, M., Sowell, R. T., Grimm, C. M., and Smart, W. D. in [43] examined the ability of a remote operator to discover details while completing a required task under different filtering techniques: monochrome (no filter), a depth-image filter, a Sobel filter over RGB, and a combined Sobel image filter. The researchers ran a user study to observe the impact of the filters on privacy protection and on the usability of the robot in completing its task. They posed three hypotheses. The first was that applying filters to the robot would protect privacy, since the filters work by decreasing the operator's ability to recognize objects in the surrounding environment. The second was that the operator would still be able to complete the required tasks with these filters. Finally, because the filters reduce the clarity of the image, the operator's concentration on the required task would increase, and privacy protection would thereby be achieved. The researchers applied one filter at a time and asked the 19 participants to use the remotely operated robot to complete a task: driving the robot around four environments that were messy with objects and books, finding certain balls, and returning to the starting point. The participants were asked to describe what they noticed during the experiment, and the researchers observed what they did. After the experiment, the researchers had the participants watch their own recordings, this time focusing on the objects appearing in the video rather than on driving the robot, and then interviewed them for feedback on what they saw. Regarding privacy protection, the results indicated that all the filter types increased the level of privacy protection, which supported the first hypothesis. When the participants were controlling the robot with any filter applied, they noticed 1.57 fewer objects and identified 0.87 fewer objects; while watching the recording, the effects were stronger, with "1.77 fewer objects noticed and 1.00 fewer identifications." Per filter, the best for protecting privacy was the depth filter, while the weakest was the Sobel filter. On the other hand, regarding utility, completing the required task with the filters applied consumed more time than without them. Finally, regarding cognitive load, its impact on privacy protection was minimal.
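For illustration, the following is a minimal sketch of an edges-only privacy filter in the spirit of the Sobel filters evaluated above; it is a simplified grayscale variant, not the implementation from [43], and the file names are assumptions.

```python
import cv2

# Edges-only view: the operator can navigate by scene structure while
# textual and pictorial detail is suppressed.
frame = cv2.imread("office.png")                    # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)     # horizontal gradients
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)     # vertical gradients
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))  # edge-strength image
cv2.imwrite("office_edges.png", edges)
```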

Moreover, Hubers, A., Andrulis, E., Smart, W. D., Scott, L., Stirrat, T., Tran, D., ... and Grimm, C. in [34] strived to protect users' privacy by manipulating video data in two different studies. The researchers asserted that different manipulation methods can provide different levels of privacy. The filters they used were redact, blur, replace, and abstract. In addition, the researchers examined the robot's performance while the privacy-protecting filters were applied. The participants in the first study were asked to watch three video clips taken by a robot exploring an office and to answer questions asking them to recognize different objects in the office environment. In the second study, the participants were asked to operate a mobile robot remotely through an unfamiliar home environment and to complete survey questions about identifying certain objects. The results indicated that different filters can provide different levels of privacy protection. In addition, the results asserted that these techniques were effective in protecting users' privacy while the tasks were completed without affecting task performance.

Additionally, Rueben, M., Bernieri, F. J., Grimm, C. M., and Smart, W. D. in [85] examined three interfaces, a point-and-click GUI, markers placed on the objects, and a tool pointed at the objects, according to their efficiency and usability in specifying the objects whose privacy needs protecting in an office environment. The results demonstrated that the markers and the tool, which are physical methods, consumed less time when each private object could be reached by hand, whereas the graphical interface, the point-and-click GUI, consumed more time. However, the point-and-click GUI could be more effective when the robot can see all the private objects in front of it.

57 3.3.3.3 Limitations

In fact, using techniques to constrain what robots can see could draw attention to the targeted objects, situations, or areas, which could cause other privacy problems. Indeed, most of the aforementioned studies used techniques that draw attention to private objects, situations, or areas. Thus, a new technique that avoids this problem is needed, and it must be tested to see whether it draws attention; if it does, it would need to be improved or abandoned.

3.3.3.4 Related Works

Unauthorized listening to any private conversation violates personal privacy. To the best of our knowledge, there are few studies that mention the privacy violations that can arise from robots' microphones, and rare studies that strive to protect the privacy that microphones can violate. However, many researchers in other fields, such as smart speakers, have studied the privacy violations that can arise from microphone use. Denning, T., Matuszek, C., Koscher, K., Smith, J. R., and Kohno, T. in [19] asserted that most robots use unencrypted audio, which could allow the audio capabilities of robots to be used to collect private data.

There are different ways to protect private audio, and these methods have been studied by researchers in fields other than robotics. Some smart speaker providers, such as Amazon, use a "wake up word" to activate the microphone and turn it on. They use encryption methods to encrypt every word the smart speaker can hear until it hears its wake word, such as its name, and then it starts to respond. If the user stops speaking, the smart speaker waits several seconds and then starts using the encryption method again [2].
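A minimal sketch of this wake-word gating logic appears below; the keyword, the listening window, and the transcription stand-in are all illustrative assumptions, not Amazon's implementation, and audio outside the window is simply dropped here where a real device would keep it encrypted.

```python
import time

WAKE_WORD = "robot"      # hypothetical wake word
LISTEN_WINDOW_S = 8.0    # seconds of active listening after the wake word

def transcribe_chunk() -> str:
    """Hypothetical stand-in for an on-device keyword spotter."""
    return input("audio> ")  # simulate audio input with typed text

def run() -> None:
    listening_until = 0.0
    while True:
        text = transcribe_chunk()
        now = time.time()
        if WAKE_WORD in text.lower():
            listening_until = now + LISTEN_WINDOW_S
            print("[awake] ready for commands")
        elif now < listening_until:
            print(f"[command] {text}")
        else:
            pass  # asleep: this audio is neither stored nor transmitted

run()
```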

3.3.3.5 Limitations

The scarcity of research striving to constrain what the robots can hear and to protect audio privacy shows that there are many audio privacy problems in robotics that still need to be addressed.

3.3.4 Robots Navigation (Movement)

3.3.4.1 Overview

Most social robots have the ability to move from one place to another, which could violate users' privacy. Thus, the robots' navigation should be constrained to restrict their movement [38]. Several techniques are used to limit this capability, and many researchers have studied them.

3.3.4.2 Related Works

The motion-planning algorithm and obstacles are among the constraints on robots' navigation [38]. Indeed, obstacles are a constraint method used to protect a private object or area; in other words, privacy-sensitive robots that use this method treat the private object or space as an obstacle. Those obstacles could be temporary, meaning that a temporal dimension could be added to the robots' map. This could control the movement of the robots, so that a robot could be prevented from entering a certain room at some times while being allowed in at others [38]. In fact, through a graphical user interface, the users could control those private areas, objects, or time intervals. Several papers have studied this type of technique [38]. LaValle, S. M. in [50] studied planning algorithms and proposed a constraint technique that works with private areas or objects; the obstacles can be created whenever the user desires to protect an object or space.

Another way of constraining the robots' navigation is a semantic map, which combines metric measurements with conceptual data [38]. In this method, the robots can use effective semantic labels to control their movements and make decisions [38]. In addition, the users can define rules based on labels, such as "if bed, then bedroom, then do not enter," or "bedroom, do not enter" [38]. Several papers have studied this type of technique [38]. Galindo, C., Fernández-Madrigal, J. A., González, J., and Saffiotti, A. in [28] studied the semantic knowledge used by robots for task planning. The researchers asserted that task planning for mobile robots depends on information and semantic knowledge. Indeed, this type of map allows the planner to create different effective labels and to enhance task planning; in fact, this method assists in performing tasks without human intervention. In addition, in [27] C. Galindo, A. Saffiotti, S. Coradeschi, P. Buschka, J. A. Fernandez-Madrigal, and J. Gonzalez produced "a multi-hierarchical semantic map" that allows mobile robots to obtain semantic data from their sensors and then use these data for navigation. The researchers conducted experiments on a real mobile robot, and the results illustrated that the robot was able to use and infer new semantic information from the environment it was in.
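To make the semantic-labeling idea concrete, the following is a minimal Python sketch of how such privacy rules and temporal obstacles might be encoded; the room labels, rule format, and function names are our illustrative assumptions, not an implementation taken from [38] or [50]:

    from datetime import datetime, time

    # Illustrative privacy rules: a semantic label is either always off-limits
    # or off-limits only during certain time windows (temporal obstacles).
    ALWAYS_DENY = {"bedroom", "bathroom"}
    TIME_DENY = {"home_office": [(time(9, 0), time(17, 0))]}  # private 9am-5pm

    def entry_allowed(room_label, now):
        """Return True if the robot may enter the room carrying this label."""
        if room_label in ALWAYS_DENY:
            return False
        for start, end in TIME_DENY.get(room_label, []):
            if start <= now.time() <= end:
                return False  # the room is currently a temporary obstacle
        return True

    # A planner would consult this check before routing through a room.
    print(entry_allowed("kitchen", datetime.now()))   # True
    print(entry_allowed("bedroom", datetime.now()))   # False

Under this encoding, a graphical interface would simply edit the two rule tables, which matches the user control described in [38].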

On the other hand, researchers have studied different constraint techniques and rules for robots' movements among people, aiming to protect personal space with respect to proximity to humans [38]. Butler, J. T., and Agah, A. in [12] studied the psychological impacts of patterns of robot behavior, such as approaching a user, avoiding a user, passing through, and carrying out non-interactive tasks in the presence of humans. The researchers studied the levels of human comfort with the robots' behaviors, speed, distance, and body design. In addition, Takayama, L., and Pantofaru, C. in [98] investigated problems related to the personal space of humans in the presence of robots. In several situations, the researchers investigated the factors that could affect proximity behavior around a robot. Furthermore, Mumm, J., and Mutlu, B. in [61] examined the "physical and psychological" distance that people maintained from robots. The results regarding physical distancing illustrated that the participants who did not like the robots maintained a more significant distance from them. The findings regarding psychological distance showed that the participants who did not like the robots disclosed less to them. Moreover, Okita, S. Y., Ng-Thow-Hing, V., and Sarvadevabhatla, R. K. in [67] examined whether different factors could affect the physical distance maintained between robots and users. The results demonstrated that "both verbal and non-verbal prompting" affect the physical distance.

3.3.4.3 Limitations

Most of these techniques could succeed in protecting privacy. However, given the great proliferation of sensors distributed in human environments and their increasing use, constraining robots' movement by connecting the robots' sensors with the sensors already present in those environments could be more effective and practical. From the aforementioned studies on constraining the robots' movements, and to the best of our knowledge, no study has examined the idea of connecting the robots' sensors to the environment's sensors in order to restrict the robots' movements.

3.3.5 Authentications on Robots

3.3.5.1 Overview

User authentication is a significant mechanism for protecting robots from unauthorized users [36], as it assists in distinguishing between authorized and unauthorized users. An authentication mechanism is used to verify the identity of users who try to access a robot. There are different methods of authentication [36]. The most common methods are:

• Something the users know, known as a "knowledge-based authentication" method, is the first common method. In this method, the system validates users based on their knowledge, such as a password.

• Something the users have, known as a "token-based authentication" method, is another common method. In this method, the system validates users based on a token they possess, such as a credit card.

• Something the users are, known as a "biometric-based authentication" method, is the last common method. In this method, the system validates users based on biological or behavioral characteristics, such as a user's face, voice, fingerprint, or signature [36].

Combining different methods can produce multi-factor authentication or multi-modal authentication. In multi-factor authentication, the combination is between two or more of the three different authentication method types, such as combining a credit card with a PIN: something the users have with something the users know. In multi-modal authentication, by contrast, the combination is between two or more factors of a single type, such as combining face with voice: something the users are with something else the users are [36]. The authentication process can be applied once at login, which is known as point-of-entry authentication, or continuously. Both processes can be used in robot environments; however, it would be better to use continuous authentication to recognize users, because social robots interact with humans continuously [36]. Thus, for intelligent, social, privacy-sensitive robots, it is significant to have an authentication system that recognizes users in order to provide more security, protect privacy, and provide suitable services to each authenticated user.
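As a minimal sketch of the distinction just described, the following Python snippet classifies a combination of methods as multi-factor or multi-modal using the three categories above; the method names and the category table are our illustrative assumptions:

    # The three method categories described above, keyed by example methods.
    CATEGORY = {
        "password": "knowledge", "pin": "knowledge",   # something you know
        "credit_card": "token",                        # something you have
        "face": "biometric", "voice": "biometric",     # something you are
        "fingerprint": "biometric",
    }

    def classify(methods):
        categories = {CATEGORY[m] for m in methods}
        if len(methods) < 2:
            return "single-factor"
        return "multi-factor" if len(categories) > 1 else "multi-modal"

    print(classify(["credit_card", "pin"]))  # multi-factor (have + know)
    print(classify(["face", "voice"]))       # multi-modal (are + are)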

3.3.5.2 Related Works

Some researchers have studied the authentication process in robot environments using biometric-based or semi-biometric-based methods.

The researchers stated that there is a memory unit in the robots that saves information that is then used for user identification and differentiation [41]. Kim, D., Lee, J., Yoon, H. S., and Cha, E. Y. in [42] proposed a user authentication method that allows unconscious users of robots or cameras to be recognized by robots. The authentication system combines biometric characteristics, such as the face, with semi-biometrics, such as body height and clothes color. The results illustrated that the authentication system was able to maintain user status data continuously, which assisted in recognizing and finding users. Moreover, Purohit, K. G., and Bhiwani, R. J. in [79] proposed an authentication system that used fingerprint and voice characteristics, which is a multi-modal authentication system. The system has two stages of authentication. In the first stage, the system identifies a user according to his or her fingerprint traits. In the second stage, if the authorized user is identified by the robot, the robot uses voice recognition and receives commands from the authorized user. The results demonstrated that the system was efficient and could be used effectively.

However, some other researchers focused on only one method. Liu, J. N., Wang, M., and Feng, B. in [55] proposed "iBotGuard," an "Internet-based intelligent robot security system." The system uses a face recognition method to detect intruders who enter a secure space. Furthermore, Yun, W. H., Kim, D., and Yoon, H. S. in [108] asserted that when social robots provide differentiated services to members, the robots need an appropriate and fast authentication process to verify whether a user is a family member or not. The researchers developed a group authentication system based on face recognition for providing the appropriate services. The results illustrated that the system was accurate and fast, which is appropriate for social robots that serve in homes.

Additionally, the author, Kwak, K. C., in [49] proposed "Incremental Tensor Subspace Analysis" (ITSA) for a user's face enrollment and recognition via a robot camera while performing customized services in the user's home. The author used a database of faces for the experiment. The results of the experiment showed that the system provided good performance compared with "well-known methods." The researcher Aryananda, L. in [3] asserted that it is necessary for robots to recognize humans and learn more about their characteristics via embodied social communication. Therefore, the researcher suggested the implementation of an online face authentication system for the robot Kismet. The results demonstrated that the system was able to learn and recognize a few persons who were interacting with the Kismet robot. Ramey, A., and Salichs, M. A. [80] proposed a user recognition algorithm that works by estimating the gender of a user from morphological shape; the system specifically focused on the vertical outline of a user's chest for estimating gender. The results indicated that the estimation was fast, useful, and did not need extra computational resources.

However, some researchers used external hardware with the robots in order to apply the authentication methods, studying the possibility that brainwaves can provide a high level of security. Kumari, P., and Vaish, A. in [48] used EEG (electroencephalogram) signals, produced by visual stimuli, as a password and as a means of authentication. The researchers used a Butterworth digital filter to handle the EEG signals produced by the EEG device. After that, the discrete wavelet transform (DWT) was used to extract the features used for the authentication process. The results illustrated that EEG-based authentication provided a high level of security.

3.3.5.3 Limitations

The authentication process is clearly significant for providing security, preserving privacy, and providing services according to users' preferences. To the best of our knowledge, most studies focused on face recognition, so the need for further study of user authentication methods on robots is obvious. A multi-factor authentication method that combines voice, face, and password could be easy and appropriate for social robot users of all ages.

3.3.6 Robot Warning System

3.3.6.1 Overview

In fact, "technology becomes invisible, embedded, and is enabled by simple interactions, attuned to all our senses and adaptive to users and contexts" [9]. Thus, there is a need for a warning system that helps users become more aware and helps robots be more transparent about their capabilities, reflecting those capabilities back to the users. A few papers have studied robot transparency.

3.3.6.2 Related Works

The researchers, Vitale, J., Tonkin, M., Herse, S., Ojha, S., Clark, J., Williams, M. A., ... and Judge, W. in [101] studied privacy protection with an embodied humanoid robot and a disembodied robot. The researchers found that the capacity of the humanoid robot to collect private data exceeds that of the disembodied robot. In addition, the researchers investigated the influence of robot transparency on users' privacy attitudes and awareness. The results demonstrated that a transparent robot interface does not affect users' privacy attitudes, but it has a significant effect on users' awareness. Thus, users' experience and awareness could be enhanced by providing them with transparent data. Schulz, T., Herstad, J., and Holone, H. in [89] asserted that when designing robots, the robots' sensors and their actions must be obvious to users when those sensors are in use. Furthermore, Lutz, C., and Tamò, A. in [57] asserted that there are significant concerns regarding robots: their extensive use, the absence of user awareness of robot capabilities, and users' limited knowledge of how robots work. The researchers proposed an "interdisciplinary and collaborative" method to address those points. As mentioned in [53], the researchers asserted that participants were unaware of the robots' capabilities and contents. In addition, the researchers emphasized that the participants clarified that any sign that helps them understand the robots' capabilities and workings, and that informs them about the current actions of robots, could make them more comfortable and aware.

3.3.6.3 Limitations

Few papers have studied the transparency of robots or warning systems on robots. In addition, to the best of our knowledge, most robots do not use a warning system for security and privacy purposes.

3.3.7 Robots' Application's Characteristics

3.3.7.1 Overview

It seems that with every new technology come applications for controlling it. People use their smartphones in many different aspects of life: they can manage many tasks with them, control other devices with them, and install a variety of applications to assist them. Because robots of different sizes and types have recently become widespread in homes, offices, and factories, easy and comfortable ways of controlling them are required. There are great efforts in developing applications that can be used to control robots and to manage their movements [100]. Indeed, some robots already have an application that controls their general settings, such as the audio volume, speech volume, speech language, system reboot, and the robot's movement.

3.3.7.2 Related Works

There are several applications for smartphones and tablets designed to control robots easily [59]. Indeed, there is an application that allows users to control robots by telling them what to do and where to go [59]. Furthermore, researchers have developed software that allows users to control humanoid robots using smartphones [100]. Moreover, some vendors of new robots have issued simple smartphone applications that control the robot, such as the "iRobot Roomba" smartphone application that assists in setting a start time for the robot [100].

3.3.7.3 Limitations

However, there are no papers that study the effects of adding security and privacy settings to those applications in order to control security and privacy on robots, or that study users' preferences and needs regarding robot applications. In the future, there could be many different types of applications that control and manage robots, and many robots could then be controlled by many different operators to accomplish many different tasks [100].

3.4 Summary

In this Chapter, we discussed the research progress in privacy-sensitive robots and identified the gaps in this research area by studying research papers related to privacy concerns, the shape of robots, and robots' perception and movement, as well as authentication on robots, robot warning systems, and the characteristics of robot applications. We covered the significant features of robots that could affect users' privacy while using social robots. In the next Chapter, we explain the proposed solutions that could address the limitations related to privacy-sensitive robots or provide new solutions. In fact, the proposed solutions could assist in mitigating the problem of privacy violation.

Chapter 4

Proposed Privacy-Sensitive Robots

4.1 Overview

As we mentioned before, different factors, such as the robots' appearance, camera, microphone, and movement, as well as the lack of user authentication, the lack of a warning system, and the lack of applications that assist in controlling the privacy settings on robots, could result in violating users' privacy. Therefore, controlling and constraining those factors by using techniques preferred by robot users could assist in mitigating privacy violations and reaching a high level of user comfort. The research's goals are to assist in developing privacy-sensitive robots that use techniques that protect users' privacy in many different ways and that are preferred by the users of robots. The literature review has reflected the needs and limitations related to privacy-sensitive robots. The following sections explain the proposed solutions regarding each factor, and we demonstrate whether those solutions are comfortable and preferred by social robot users through surveys and an experiment explained in the next Chapter. However, before proposing the following solutions, we strived to understand the users' privacy concerns in order to meet their needs and preferences.

4.2 Shape of Robots

As we noticed from previous work, there is a relationship between robots' appearance and users' confidence, trust, behaviors, perception of privacy, and awareness. Researchers asserted that when a robot looks human in its appearance, many privacy violations could occur. For example, studies showed that users could trust robots in a wrong and risky way, forgetting that they are machines whose abilities surpass human abilities. In addition, studies showed that when robots look human, users could assume that the robots' senses are placed in the same places as humans' and work only as human senses do: that the camera is placed in the eyes, not in the back; that the robot's head can move only as a human's does, not 360 degrees; and that the robot's eyes (camera) and ears (microphone) cannot record. On the other hand, the results of some previous surveys asserted that many people did not prefer a social robot that works in their houses, or that acts as their pet or friend, to look like a mere machine. See Figure 4.1 [73].

Figure 4.1: The results of people's opinions on the outer shape of robots [73]

However, other studies showed that some robots are equipped with additional sensors that social robots may not need. Thus, these results asserted that an appropriate design for privacy-sensitive social robots is required. This research studies social robots that interact with humans, provide services to them at home or in the workplace, and have to protect users' privacy. Thus, after reading many papers and websites that studied and reviewed users' preferences regarding robot appearance, and after gaining some knowledge about this field, we propose a general design for social robots. We assume that this design could produce privacy-sensitive robots that users would prefer. As we mentioned, we try to validate our proposed design by conducting surveys and an experiment.

From the different types of robots that exist, we can notice that most social robots are treated as social actors or agents, that is, like humans or animals [74, 72]. In addition, the studies' results mentioned previously asserted that an appropriate mix of "machine-like and human-like or animal-like" appearance could be a first step toward balancing human preferences, robot functionality, and privacy protection. Thus, it would be better to design social robots by mixing the outer appearance of machine and human, as in the NAO, Pepper, and ROMEO robots. Moreover, it could be better to design social robots with appropriate sizes that make users more comfortable and that assist the robots in performing their tasks efficiently. In addition, to provide a better design, we would equip the robots only with the sensors they need and place those sensors in appropriate places, such as cameras in the robot's eyes, microphones in the robot's ears, and speakers in the robot's mouth. This is because when the outer shape of a robot is close to that of a human or animal, it is natural for humans to treat the robot as if it were real and to assume that it can see through its eyes, hear through its ears, and so on. However, if there is a need for an extra camera elsewhere on the robot for certain purposes, such as on the robot's back or near its legs, then the camera should be designed and placed so that it is clearly noticeable to users. For example, it could come out when needed and hide inside the robot when the robot does not need it. In addition, it could be more secure to design automatic covers, placed over the robot's cameras to work like human eyelids, and over the robot's microphones. Those covers could be used when needed to protect users' privacy both while the robots are working on tasks and after the robots have finished their tasks. It seems that some robots have covers on their eyes that are similar to human eyelids, but those covers do not protect users' privacy and are designed only to make robots look human. See Figure 4.2.

Figure 4.2: iCub, Flobi, Dreamer, Simon

From the figure, we can also notice that the cover is divided into two parts: one moves from the top to the center, and the other moves from the bottom to the middle. However, our cover design uses only one part that moves automatically from top to bottom, precisely like a human eyelid, to avoid any gaps, which would help protect the users' privacy. In fact, the idea of covers appeared long ago when the need to protect privacy arose, and the use of camera covers increased when webcams appeared on users' laptops and smart devices. Indeed, statistics from June 2015 showed that 20% of internet users worldwide covered their webcams in order to protect their privacy [92]. We have seen how the idea of robots moved from science fiction to reality; in the near future, robots could be designed to wear eyeglasses, like the smart eyeglasses that exist nowadays. These could be equipped with many different types of software, such as filters and sensors, and could be used as covers. See Figure 4.3.

Figure 4.3: Robot eyeglasses

This means users could be more comfortable about their privacy protection when covers are used over cameras and microphones. As we mentioned, the survey that we conducted reflects whether users prefer using covers on their social robots. Furthermore, the robots can be equipped with a small touch screen that can be hidden inside the robot's body when users do not need it, or that can be added to the robot as an additional component and removed if users do not need it. The users can use the smart screen to control the robots, adjust the robots' settings, or perform many other functions. In addition, the robots can use this screen to display information to their users, provide services to users, or perform other services. Some social robots are already equipped with a screen. See Figure 4.4.

Figure 4.4: Pepper robot equipped with a screen

Scenario example to explain the tradeoff between utility and privacy when applying our proposed technique: The meaning of the tradeoff between utility and privacy is to allow the robots to complete their tasks properly even if rules or techniques are used to protect the users' privacy. Thus, with our proposed design, the robots would still be able to complete their tasks. For this factor, we proposed three different solutions: covers, a screen, and the outer shape design. In this scenario, we ignore the cover and screen because they are straightforward, and we focus only on the outer shape design. In this scenario, the robot has to monitor three babies who are playing in a room, so the robot needs to see from its back as well. In our design, we recommend having cameras in the robots' eyes, but we mentioned that the robots can also have a camera that comes out: the camera can be hidden in the robot's back, come out of the robot's body when the robot needs it, and otherwise remain hidden inside. Indeed, the point of this explanation is that if the robots have some similarity to the human outer shape, then it is better to have the camera and microphone at the robots' eyes and ears to protect privacy, because users will naturally assume that the camera and microphone are placed there. In order not to restrict the robots' capabilities, we can place additional cameras and microphones in other areas, but those additional sensors should appear only when they are needed, and in an obvious way that is noticeable to users.

4.3 Constraining Robots' Perception (Camera and Microphone)

4.3.1 Constraining Robots’ Camera

We can notice from the previous works that the techniques used by researchers to constrain what robots can see could draw attention at different levels and lead the watcher to ask questions about the hidden object and information. This could cause another type of risk to the users. Indeed, techniques such as abstraction, blurring, pixelating, and redacting could draw a high level of attention. See Figure 4.5. In Figure 4.6, you can see the use of these filters on a particular object in previous studies [34].

Figure 4.5: 1) abstraction, 2) blurring, 3) pixelating, and 4) redacting

Figure 4.6: Abstraction, blur, and redact filters used by a previous study [34]

In addition, other techniques have been reported, such as "inpainting" or "image completion," which are also known as replacing. These types of filter also draw attention, but at a medium level compared with the other techniques. See Figure 4.7. Figure 4.8 shows the effect of this filter as used in a previous study [34].

Figure 4.7: Replacing filter

Figure 4.8: Replacing filter used by a previous study [34]

Indeed, the replacing filter could draw a high level of attention if another object is placed on top of the target object that we aim to protect. For example, if a handbag sits on top of the money safe box that the robot intends to hide by using a replacing filter, the watcher can notice the manipulation, because it does not make sense to see a handbag floating in the air rather than standing on a base. See Figure 4.9.

Figure 4.9: Replacing filter applied to a money safe box

Furthermore, other techniques, such as morphing and image melding, which were used in other research papers, were shown to draw no attention. However, those techniques are applied and used only to protect users' faces. Figure 4.10 illustrates those techniques as used in previous studies [62].

Figure 4.10: Morphing and image melding [62]

However, most of the aforementioned techniques have a problem: they either draw attention to the hidden objects, information, and areas, or they are restricted to faces only. Thus, we propose new methods that could prevent this type of problem and increase the level of privacy protection. As we mentioned, we conducted surveys in order to determine whether our solutions could work and would be preferred by robot users.

The first technique, as we mentioned earlier, is to use the automatic covers, like human eyelids, during the robots' working period and when the robots finish their work. During the robots' working time, the users can tell the robots to close their eyes (cameras) whenever they want, using words such as "close your eyes (cameras)." In addition, the robots can use those covers when they are programmed to avoid certain objects, areas, or situations, such as when they detect a naked person. The robots could cover their eyes (cameras) temporarily, like a human blinking, until they move far away from the situation or object they have to avoid, or until the users ask them to reopen their eyes (cameras) using words such as "open your eyes (cameras)." The objects, areas, and situations that the robots have to avoid to protect users' privacy would be programmed in advance by allowing the robots to use a particular database or network. The covers can also be used when the robots finish their tasks, are turned off, and go to rest. In fact, the covers could be the best and most comfortable solution for users who are deeply concerned about privacy protection: since the camera is covered, if it is turned on accidentally, blackness is the only thing that can be seen and recorded.

The second technique is to use our proposed filter, the adaptive "delete and replace" filter, which works by deleting the target object and replacing it with the nearest object that looks like it. The target objects and the objects similar to them could be stored in a database, as the simplest way of linking the target objects with the other objects that could be used instead of the originals. For example, the filter could delete the credit card object and replace it with any business or restaurant card. See Figure 4.11.

Figure 4.11: The left image is the original image, and the second image shows our proposed adaptive "delete and replace" filter

In addition, this technique could also work when the target object has another object placed on top of it. See Figure 4.12.

Figure 4.12: The left image is the original image, and the second image shows our proposed adaptive "delete and replace" filter

Our proposed technique can solve the attention-drawing problem that occurs with the other techniques, because the adaptive "delete and replace" filter uses another object similar to the deleted one and provides an image or scene that seems real and unmodified. "Adaptive" means that the filter can be adjusted according to certain information, objects, locations, situations, users, or times. In Figure 4.13, the images on the left and the right, from top to bottom, show the original image, abstract, blur, redact, replace, and our proposed filter. The images on the right are taken from the previous study, except for the last one, which is our proposed filter, added for comparison [34]. This figure shows how our proposed filter solves the problem of drawing extra attention to the protected object. In fact, the robots can use different types of filters, and those filters can be adaptive; for example, a robot can use a blur filter for faces and the "delete and replace" filter for other objects. Thus, for constraining what the robots can see, we suggest that the robots use both the automatic covers and the adaptive filters to provide a high level of privacy protection and user comfort. This would provide more filter options for users on their robots. In addition, allowing the users to adjust the settings of the filters they want their robots to use could make the users more comfortable about their privacy when using social robots.

Figure 4.13: The images on the left and on the right, from top to bottom, show the original image, abstract, blur, redact, replace, and our proposed filter

Scenario example to explain the trade-off between utility and privacy when applying our proposed technique: In this scenario, the robots' mission is to assist in cleaning and organizing the home, so if the robots see cards lying around, such as a credit card, a business card, and so on, the robots will collect them and put them in a particular box. At the same time, the privacy of the credit card should be preserved and not observed by the robots. By applying our proposed filter, the cards, such as credit cards, that are programmed to be hidden objects will appear as business or restaurant cards, which will not draw attention when they are collected by the robots. In addition, using the automatic cover to avoid a private object will not prevent the robots from doing their tasks, because the cover can be temporary, lasting until the robots move away from the object.
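To make the proposed filter concrete, the following is a minimal Python/OpenCV sketch of the adaptive "delete and replace" idea. It is a sketch under stated assumptions: the object detector is a stub, and the file names and replacement table are hypothetical placeholders rather than parts of an existing system:

    import cv2

    # Database linking each private object class to a harmless stand-in image,
    # as described above; the file names are hypothetical.
    REPLACEMENTS = {"credit_card": "restaurant_card.png"}

    def detect_private_objects(frame):
        """Stub detector: a real robot would run an object detector here.
        Returns (label, x, y, width, height) boxes for private objects."""
        return [("credit_card", 120, 80, 200, 126)]

    def delete_and_replace(frame):
        for label, x, y, w, h in detect_private_objects(frame):
            stand_in = cv2.imread(REPLACEMENTS[label])
            patch = cv2.resize(stand_in, (w, h))  # fit the stand-in to the box
            frame[y:y+h, x:x+w] = patch           # overwrite the private object
        return frame

    frame = cv2.imread("desk_scene.png")          # hypothetical input image
    cv2.imwrite("desk_scene_filtered.png", delete_and_replace(frame))

A production version would need a real detector and some edge blending (OpenCV's seamlessClone is one option) so that the stand-in merges convincingly with the scene instead of being pasted as a hard rectangle.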

4.3.2 Constraining the Robots’ Microphone

Violating personal privacy by overhearing private information is a serious problem. Because social robots have the ability to listen and record, protecting the privacy of audio is a significant step in protecting users' privacy. To the best of our knowledge, few studies have strived to protect users' privacy from violations through the robots' microphones. Thus, we propose different techniques that could be applied to the robots' microphone and that could assist in mitigating violations of users' privacy. As we mentioned, we conducted surveys in order to determine whether our solutions could work and would be preferred by robot users.

The first technique is to use automatic covers, similar to the covers proposed for the robots' cameras. The covers can be used while the robots are working and after they have completed all of their tasks and gone to rest. During working time, the users can tell the robots to close their ears (microphones) whenever they want, using words such as "close your ears (microphones)." In addition, the robots can use those covers when they are programmed to avoid certain situations, such as when they detect a person making a phone call or talking with others; however, before covering its microphones, the robot could first inform the users about that action to make sure that they agree and do not need the robot's microphone at that time. Indeed, the robots can use automatic covers that cover the microphone completely when the robots do not require it to complete a task. The robots can then open the covers automatically when the users perform a specific action, such as sending an alert to the robot to turn the microphone on or reopening the cover via a robot application. The covers can also be used after the robots have completed their tasks, are turned off, and have gone to rest. As we mentioned before, this method could provide more comfort to the users who are most concerned about protecting their privacy.

In fact, many studies asserted that most robots use unencrypted audio, which could contain private data that could be collected and recorded, violating users' privacy. For this problem, we suggest a second technique: an encryption mechanism similar to those used with smart speakers. The mechanism encrypts all the words that a robot can hear until the robot hears its wake-up word, such as its name; the robot then responds to the voice. If the users stop saying anything, the robot waits several seconds and then starts to use the encryption method again.

Scenario example to explain the tradeoff between utility and privacy when applying our proposed technique: In this scenario, the robot's mission is to clean the home while respecting the users' privacy. Thus, if the robot recognizes that two persons are talking to each other, the robot will do the following: first, the robot will send an alert to the users to inform them that it can hear their conversation. If the users respond with "that is ok," "close your ears," or "please go away," the robot will obey their response. Otherwise, the robot will cover its microphone as soon as it comes close to them or is able to hear them clearly. However, to balance utility and privacy, if the users need to contact the robot via voice, they can send an alert to the robot system to instruct the robot to open the lid over its microphone. This scenario could also be applied to the workplace, for example if the robot works as a waiter serving the employees.
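The following is a minimal sketch of the proposed wake-word policy. The audio frames and the wake-word detector are stubs, and we assume the Python cryptography package's Fernet recipe purely as an example of symmetric encryption; none of this is taken from an existing robot system:

    from cryptography.fernet import Fernet

    WAKE_WORD = b"robo"        # illustrative wake word
    LISTEN_WINDOW = 3          # frames of open listening after the wake word
    fernet = Fernet(Fernet.generate_key())

    def heard_wake_word(frame):
        return WAKE_WORD in frame          # stub keyword spotter

    def process_stream(frames):
        open_for = 0                       # frames left in the listen window
        for frame in frames:
            if heard_wake_word(frame):
                open_for = LISTEN_WINDOW   # the robot may now respond
            if open_for > 0:
                open_for -= 1
                yield ("command", frame)                 # heard in the clear
            else:
                yield ("sealed", fernet.encrypt(frame))  # encrypted at once

    frames = [b"private chat", b"robo, vacuum the den", b"thank you"]
    for kind, _ in process_stream(frames):
        print(kind)                        # sealed, command, command

The point of the sketch is only the gating logic: everything heard outside the listening window is sealed immediately, so an accidental recording yields nothing readable.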

4.4 Constraining Robots’ Navigation (Movement)

Different techniques are used to constrain the robots' movements, and most of them are promising. Indeed, because many people use various sensors in their environments, such as their houses and workplaces, for many different purposes, such as convenience, safety, and security, we propose additional techniques for constraining the robots' movements. The first technique is to connect the robots' sensors with the sensors in the robots' environment. For example, the robot's sensor could be connected with the movement sensor and the infrared sensor, so that if a user does not want the robot to enter a place where he or she is, the robot checks the movement sensor together with the infrared sensor, and if it detects that there is a person inside that place, the robot does not enter.

Furthermore, the robots can be programmed to respond to requests and commands, such as "do not enter" and "go away," and the robots can also be programmed to use words such as "can I enter?" Indeed, using multiple techniques would be the solution that users prefer for constraining the robots' movements. As we mentioned before, we conducted a survey to figure out the users' preferences regarding the most comfortable techniques that could be used to constrain the robots' movements to protect privacy.

Scenario example to explain the tradeoff between utility and privacy when applying our proposed technique: In this scenario, a user, who could be an elderly person, has a privacy-sensitive social robot that assists him in his daily life tasks, such as cooking, cleaning, and taking care of him. One of the rules that the robot has to obey in order to protect user privacy is that it cannot enter the bathroom when there is someone inside, such as someone taking a shower. At the same time, the robot must take care of the person to make sure that he is safe inside the bathroom and not accidentally injured, so the robot needs to trade off between the utility, which is taking care of the person, and protecting the user's privacy in a certain situation, which is being naked.

To achieve those goals, there are two sensors inside the bathroom: an infrared sensor and a motion sensor. The infrared sensor is used to indicate whether a person (body) is present, and the motion (movement) sensor is used to indicate whether the person (body) is moving. In the scenario, the robot wants to complete some tasks inside the bathroom, so before entering, the robot has to check whether there is a person inside by using the infrared sensor; if yes, the robot checks whether that person is moving by using the movement sensor. If there is no one inside the bathroom, the robot enters to complete its task. However, if there is someone inside and moving, the robot does not enter and returns at another time. If there is someone inside who does not move, the robot first asks if everything is OK. If the person responds, the robot can go and come back at another time; if the person does not answer, the robot asks again. If there is still no response, the robot activates the filter to avoid the nakedness situation and enters the bathroom to see whether the person is injured or needs help, so that the robot can call someone or call 911 for help. See Figure 4.14.

Figure 4.14: The process of the scenario
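As a minimal sketch, the decision flow of Figure 4.14 can be written as straightforward conditional logic; the sensor readings and the robot's ask-and-retry behavior are stubs with hypothetical names:

    # Stub environment sensors and dialogue; names are illustrative.
    def infrared_detects_person():
        return True

    def motion_detected():
        return False

    def person_answers(attempt):
        print(f"Robot: is everything OK? (attempt {attempt})")
        return False               # stub: no answer is heard

    def bathroom_entry_decision():
        if not infrared_detects_person():
            return "no one inside: enter and complete the task"
        if motion_detected():
            return "occupied and moving: do not enter, retry later"
        for attempt in (1, 2):     # ask twice before acting
            if person_answers(attempt):
                return "person is fine: leave and retry later"
        return "no response: activate the filter, enter, and call for help"

    print(bathroom_entry_decision())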

4.5 Users’ Authentications on Robots

As we mentioned before, an authentication system is significant in order to provide security, preserve privacy, and provide services for each user according to his or her preferences. The preferences of each user are stored in a profile and can be used after authenticating the user and opening his or her profile. To the best of our knowledge, most social robots have only a static authentication system that uses a common method of authentication, such as a password used only at login. This means that those robots do not have a continuous authentication system that could give users more confidence that their privacy is protected. In fact, using a multi-factor authentication method that combines voice, face, and password could be appropriate for all users. The users could enter the password via the robot's program or use their voices to activate the robot, and then the robot would use the voice or face for the continuous authentication process. In addition, the robot can build a unique profile for each user that stores all of the user's preferences and services, and the robot can switch between profiles according to each user's voice. Previous studies on smart speakers proved that a device can recognize users by their voices.

Scenario example to explain the tradeoff between utility and privacy when applying our proposed technique: In this scenario, the robot works in a home that has many users. Each user prefers a specific method of alert to be performed by the warning system. For example, user A prefers that the robot issue a sound to indicate its presence in a place when user A enters that place, whereas user B prefers that the robot turn on a light to indicate its presence when user B enters that place. To balance utility and privacy, the robot can use the voice as an authentication method that also assists in creating a profile for each user that contains all of that user's preferences.
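A minimal sketch of this proposal follows: a password serves as the point-of-entry factor, a stubbed speaker-identification check serves as the continuous factor, and each recognized voice selects a preference profile like the ones in the scenario above. The profile contents and function names are illustrative assumptions, and a real system would store salted password hashes rather than plaintext:

    # Hypothetical per-user profiles keyed by voice identity.
    PROFILES = {
        "user_a": {"password": "1234", "presence_alert": "sound"},
        "user_b": {"password": "5678", "presence_alert": "light"},
    }

    def identify_speaker(voice_sample):
        return voice_sample        # stub speaker-identification model

    def start_session(user, password):
        profile = PROFILES.get(user)
        if profile is None or profile["password"] != password:
            return None            # point-of-entry factor failed
        return profile

    def still_same_user(user, voice_sample):
        return identify_speaker(voice_sample) == user   # continuous factor

    profile = start_session("user_a", "1234")
    if profile and still_same_user("user_a", "user_a"):
        print("presence alert style:", profile["presence_alert"])  # sound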

4.6 Robot Warning System

As we mentioned earlier, the robots have abilities that exceed the users' expectations, and the robots can perform many different actions that may not be recognizable by users, such as recording. In addition, previous studies suggest that many users prefer providing the robots with a warning system that informs them about the robots' capabilities and actions, which means making the robots transparent [53]. Thus, using a warning system that reflects the actions of robots and makes those robots more transparent is significant and could be required by the social robots' users. This system could increase the users' awareness of the robots' actions, making them aware of and cautious about the risk of privacy violations.

The warning system that we propose could use more than one method to make users more aware of their robots and of protecting their privacy. First, we suggest that the robot manufacturer provide a booklet that contains guidance and instructions regarding each robot's capabilities and features, in order to give the users complete and detailed knowledge about their robots, how they work, and what they can do. That information could cover, for example, the robot's ability to record information, to see from its back, to save information, and to use the Internet, as well as the robot's sensors, and so on. That information and those instructions could also be saved and displayed on the robot's small screen, in the robot's program, in its application, or on the device that comes with the robot, or could be spoken aloud by the robot itself.

In addition, as another warning method that could make the robots more transparent, we can use different colors of light for each capability of the robot, so that we have a "color language" that makes the users more aware of their robots and more cautious about protecting their privacy. For example, we can use a light that appears statically around the robots' eyes (cameras) when the robots turn the camera on, and a light that appears and moves circularly when the robots record video with their cameras. Moreover, we can use a light that appears statically around the robots' ears (microphones) when the robots turn the microphone on, and one that appears and moves circularly when the robots record sound with their microphones. In addition, we can use a light when someone enters a room where there are robots, a light on the robot's head when the robot is connected to the Internet, and a light to show whether the robots are on or off. Indeed, we can use a light whenever any of the robots' sensors is turned on. See Figure 4.15 as an example. Note that the lights that appear on the robot in Figure 4.15 are not actually used for warning.

Figure 4.15: Examples of the proposed warning system

The other warning method allows robots that may be far away from their users to alert them by using a sound, which means informing their users verbally (loudly). For example, the robots can alert their users verbally when they move from one room to another or when those robots can hear their users from far away.
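As a minimal sketch, the proposed "color language" can be represented as a lookup table from sensor states to light colors and animations, with the verbal alert as a fallback for distant users; the specific colors and state names are our illustrative assumptions, not a fixed standard:

    # Hypothetical color language: (sensor, state) -> (color, motion, place).
    LIGHT_LANGUAGE = {
        ("camera", "on"):            ("green", "static",   "eyes"),
        ("camera", "recording"):     ("red",   "circular", "eyes"),
        ("microphone", "on"):        ("green", "static",   "ears"),
        ("microphone", "recording"): ("red",   "circular", "ears"),
        ("internet", "connected"):   ("blue",  "static",   "head"),
    }

    def announce(sensor, state, user_far_away=False):
        color, motion, place = LIGHT_LANGUAGE[(sensor, state)]
        print(f"{motion} {color} light around the {place}: {sensor} {state}")
        if user_far_away:          # fall back to the verbal alert method
            print(f"Robot says aloud: my {sensor} is now {state}")

    announce("camera", "recording")
    announce("microphone", "on", user_far_away=True)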

Explaining the tradeoff between utility and privacy when applying our proposed technique: Using the proposed technique would not limit the robots' abilities in most scenarios, because the lights are placed around the camera or the microphone, which does not affect their operation at all. Thus, this technique allows robots to protect users' privacy and complete their tasks.

4.7 Robots' Application's Characteristics

As we mentioned before, to the best of our knowledge, the existing applications are used only for controlling the robots, their functions, and their general settings, such as the audio volume, speech volume, speech language, system reboot, and the robots' movement. Thus, it could be more effective, comfortable, and secure if those applications could also be used to control, manage, and adjust the security and privacy settings of a robot's system. As we mentioned, the robots could maintain a profile for each user via the voice authentication method, so within the privacy settings the users could select the particular privacy techniques that they prefer. For example, some users would prefer certain filter techniques, such as the blur filter, while others would prefer the "delete and replace" filter. In addition, some users would prefer certain warning system techniques, such as a particular color of light, while others would prefer something else, such as an audible alert; via the application, and by using the profiling technique, the users could adjust all of that according to their preferences.

In addition, the users could control the robots' permissions, such as allowing the robots to access the contacts or emails on their phones. Moreover, by using those applications, the users could handle updates and could connect the robots with other home devices and manage them. Furthermore, via those applications, the users could receive privacy warning alerts from the robots, and those applications could be used to display the warning instructions related to the robots' capabilities. Figure 4.16 illustrates existing applications that lack security and privacy settings.

Figure 4.16: Example of the existing applications of robots, NAO and Pepper
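As a minimal sketch, the per-user privacy settings that such an application might expose could look like the following; the schema and field names are illustrative assumptions that tie together the filter, warning, and permission options proposed above:

    # Hypothetical per-user privacy settings stored behind the application.
    privacy_settings = {
        "user_a": {
            "camera_filter": "blur",
            "warning_style": "light",
            "permissions": {"contacts": False, "email": False},
        },
        "user_b": {
            "camera_filter": "delete_and_replace",
            "warning_style": "sound",
            "permissions": {"contacts": True, "email": False},
        },
    }

    def apply_settings(user):
        settings = privacy_settings[user]
        print("filter:", settings["camera_filter"])
        print("warning:", settings["warning_style"])
        for resource, allowed in settings["permissions"].items():
            print(f"robot may access {resource}: {allowed}")

    apply_settings("user_a")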

4.8 Summary

In this Chapter, we explained the proposed solutions regarding each factor that could assist in mitigating the problem of privacy violation. In the following Chapter, we describe the methodologies that were used to gather different kinds of information and to determine whether those solutions would be comfortable and preferred by the social robots' users.

Chapter 5

Methodology

5.1 Overview

There are many different methods that could be used to achieve the study's goal. Discovering and using the appropriate methods assists in providing useful and complete results.

5.2 The Goals of the Study

As we mentioned before, this thesis aims to provide different types of techniques that can be used to produce privacy-sensitive robots and that are comfortable and preferred by social robot users. The proposed solutions explained in the previous Chapters could assist in covering the gaps that exist in the research literature and in solving the limitations related to privacy-sensitive robotics. In addition, the solutions could assist in mitigating violations of users' privacy, increasing users' awareness, and trading off between the utility achieved and the privacy lost. Indeed, the main goal of this study is to provide solutions that assist in producing privacy-sensitive social robots that users prefer with regard to security and privacy.

5.3 The Methods of the Study

In order to learn more about users' privacy concerns and to examine whether our proposed techniques are comfortable, trusted, and preferred by social robot users, we conducted three different surveys and one experiment. The goals of the surveys and the experiment were to gather information about users' privacy concerns, users' opinions of techniques for protecting privacy, users' opinions of robot features relevant to privacy, and users' opinions of robot techniques for increasing people's awareness regarding privacy.

5.3.1 Surveys

For the surveys, we used an online survey tool, Google Forms; the surveys are provided in Appendix B. Before sending the surveys, we requested Institutional Review Board (IRB) approval for research involving human participants, which is necessary for conducting a survey or an experiment; the approval is provided in Appendix A. After we obtained IRB approval, we sent the surveys through email and social media. We distributed the questionnaires to the list of Florida Institute of Technology (FIT) graduate students in the Computer Engineering and Sciences Department and to some employees. The survey links were also distributed at the FIT library via a flyer and by contacting participants on the Florida Institute of Technology campus.

Those surveys presented the informed consent as the first part. After the participants agreed to participate, the surveys asked them different questions. As we mentioned, we conducted three different surveys. In the "Cameras' Covers" survey, we asked the participants only one question, to figure out whether they use any cover over their laptop's, smartphone's, or tablet's camera. From this survey, we can figure out whether the idea of using a cover to protect privacy is acceptable to people.

The "Filters' Effects" survey aimed to figure out whether the proposed "delete and replace" filter could solve the problem of drawing attention and whether it would be preferred by users. The survey showed the participants different pictures containing an object that we were trying to hide. Those pictures reflect the manipulations used to protect the privacy of the object, but at different levels; indeed, some pictures seem real, as if there were no manipulation. In this survey, we asked the participants to choose the picture that best showed no manipulation and made sense. In addition, the survey asked participants which of the filters applied to those pictures they would prefer to use for protecting their privacy.

In the main survey of the study, we asked various questions that could assist in understanding the users' preferences and needs toward developing social robots that protect their privacy. From this survey, we can figure out the percentage of participants who would like to own a social robot and where they would want to have the robot. In addition, we can learn about the interest in various types of robots and in the types of tasks and services with which robots are assumed to assist. Furthermore, from this study, we can figure out the users' concerns about social robots and the users' privacy concerns regarding different objects, information, locations, and situations. Additionally, we discovered the preferred techniques for mitigating violations of privacy that can be used on robots in order to develop privacy-sensitive social robots. Besides, we learned the percentage of users who would allow other members at home or in the workplace to share their robots, and the methods robot co-owners can use to protect their privacy from each other. In addition, the survey assists in figuring out whether the users prefer a warning system on social robots that could help increase their awareness regarding privacy, and the preferred techniques that could be used in that warning system. The first part of the survey includes demographic questions.

5.3.2 Experiment

To understand users' preferences regarding the outer appearance of social robots that could assist in protecting users' privacy and increasing their awareness of privacy, we conducted an experiment by contacting some Florida Institute of Technology graduate students. In the experiment, we used a doll as a robot to perform the scenarios. As with the surveys, we had the same IRB approval to conduct the experiment. In addition, we showed the participants the informed consent and then asked those who agreed to participate only one question: "What will you do with the robot (the doll) when you want to change your clothes or to have a private call?" The participants answered by acting, and we observed their interactions. In fact, we told the participants to assume that the social robot could record and store video and audio, which could leak out, or that the data was being streamed to a control center.

5.4 The Participants of the Study

The participants could be experts in robotics or not. In addition, the participants could be professors or employees at FIT or FIT students, but we focused most on graduate students of the Computer Engineering and Sciences programs, especially those specializing in Information Assurance and Cybersecurity, as well as friends and colleagues. For this study, the number of participants was 150, all aged above 18. The participants were of both genders and of different nationalities. In fact, we targeted participants from different nationalities, but we focused more on Americans, Japanese, Koreans, and Chinese, because they are the most frequent users of robots, and on Saudis, because they plan to use many different types of robots by 2030 in the Neom project. In addition, we targeted students and workers because they may not have time for home care and could need robots. Moreover, we focused on employees because companies can use robots to assist or provide services to their staff.

5.5 Summary

In this Chapter, we restated the goal of the study, discussed the two main methods used, the online surveys and the experiment, and mentioned some details about the participants of the study. In the next Chapter, we present the results of the surveys and the experiment and discuss every question.

Chapter 6

Study Analysis and Results

6.1 Overview

As mentioned before, we conducted surveys and an experiment in order to determine the most comfortable techniques preferred by social robot users to protect their privacy and increase their awareness of the associated privacy risks while using a social robot. The goal is to have privacy-sensitive robots that use the techniques their users prefer. Using the methodologies described in the previous Chapter, the total number of participants recruited in this user study reached 150: 82 participated in the main survey, 40 in the Cameras' Covers survey, 20 in the Filters' Effects survey, and eight in the experiment. In this Chapter, the details of all of those surveys and the experiment are explained.

6.2 Survey Results

Each survey asked different questions, but all of them aim to study privacy protection. The following sections explain the survey results in detail. All of the survey questions can be found in Appendix B.

6.2.1 The "Cameras' Covers" Survey

In order to determine whether people use any type of cover over their cameras, such as a webcam, a smartphone camera, or the camera of another device, we distributed a survey that asked the participants: "Do you use any cover (e.g., sticker, webcam cover) to cover any of your laptop's, smartphone's, or tablet's cameras?" In total, 40 participants responded to this question. The results showed that 26 participants (65%) use a cover to protect their privacy. See Figure 6.1. To analyze the data with descriptive statistics, we coded each answer as follows: Yes=1 and No=2. The mean was 1.35, showing that the participants leaned toward using a cover to protect their privacy. The standard deviation was 0.48, while the mode, the value that appears most often, was 1 (Yes), meaning that most participants use a cover.
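To make this coding scheme concrete, the following minimal Python sketch reproduces these statistics from the coded responses; the data layout and the choice of library functions are our own illustration, not part of the survey instrument.

```python
# Minimal sketch: reproduce the descriptive statistics for the
# "Cameras' Covers" question from the coded answers (Yes=1, No=2).
from statistics import mean, mode, pstdev

responses = [1] * 26 + [2] * 14     # 26 "Yes" and 14 "No" answers (n = 40)

print(round(mean(responses), 2))    # 1.35 -> respondents lean toward "Yes"
print(round(pstdev(responses), 2))  # 0.48 (population standard deviation)
print(mode(responses))              # 1 -> "Yes" is the most frequent answer
```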

Figure 6.1: The number of participants who use or do not use covers to protect their privacy

This means that some users accept the idea of protecting their privacy by covering the main components that could enable a privacy violation, such as cameras and microphones. Even though the number of participants was insufficient to generalize the result, there may be great interest in using such covers over robots' cameras and microphones for security and privacy purposes; indeed, this is what the main survey of this research found.

6.2.2 The "Filters' Effects" Survey

We conducted this survey in order to discover whether the proposed "delete and replace" filter can solve the problem of drawing attention, and whether users would prefer it for protecting their privacy. At the beginning of the survey, we informed the participants that every picture to follow contained an object that we were trying to hide. Some pictures revealed the manipulation, while others looked real, as if nothing had been done to them.

Then, we presented two different scenarios, each showing a different group of pictures. In the first scenario, we tried to hide a credit card using a different filter in each picture: the abstraction filter in the first picture, the replace filter in the second, the redact filter in the third, the blur filter in the fourth, and our proposed "delete and replace" filter in the last. In the second scenario, we tried to hide a money safe box using the same filters in the same order. Before each scenario, we asked the following: "Do any of the following pictures show that there is no manipulation and make sense?" The results indicated that in the first scenario, the credit card, most participants, 15 out of 20, chose picture number 5, the "delete and replace" filter, as the picture that shows no manipulation and seems real. However, 20% of the participants chose picture number 4, the blur filter, as the picture that does not show manipulation, while only one participant chose picture number 1, the abstraction filter. See Figure 6.2.

Figure 6.2: The picture that shows there is no manipulation and makes sense: picture 1 shows abstraction, picture 2 shows replace, picture 3 shows redact, picture 4 shows blur, and picture 5 shows "delete and replace"

In the second scenario, the money safe box, most participants, 17 out of 20, chose picture number 5, the "delete and replace" filter, as the picture that does not draw attention. Two out of 20 participants selected picture number 3, and only one participant chose picture number 4. See Figure 6.3.

Figure 6.3: The picture that shows there is no manipulation and makes sense: picture 1 shows abstraction, picture 2 shows replace, picture 3 shows redact, picture 4 shows blur, and picture 5 shows "delete and replace"

This means that the problem of drawing attention that appeared with the other filters could be solved by the proposed "delete and replace" filter, because an image processed with the "delete and replace" filter looks like a real image to which no change has been made. From these answers, we note that one of our hypotheses has been supported.
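As an illustration of how such a filter could behave, the sketch below deletes a detected private object and fills the region with synthesized background using OpenCV inpainting. The bounding box, file names, and the use of inpainting are assumptions for illustration only; the thesis filter itself may be implemented differently.

```python
# Minimal sketch of a "delete and replace"-style filter via OpenCV
# inpainting; the detected bounding box and file names are hypothetical.
import cv2
import numpy as np

frame = cv2.imread("scene.jpg")              # frame captured by the robot
mask = np.zeros(frame.shape[:2], np.uint8)   # pixels to delete

x, y, w, h = 220, 140, 160, 100              # assumed credit-card detection
mask[y:y + h, x:x + w] = 255                 # mark the private region

# Fill the deleted region with plausible background texture so the
# result looks unmanipulated rather than obviously censored.
filtered = cv2.inpaint(frame, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("scene_filtered.jpg", filtered)
```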

In this survey, we also asked the participants: "Which filter that you noticed from the previous images do you prefer for protecting your privacy?" in order to determine whether users prefer the "delete and replace" filter. Only 13 participants answered this question. The results showed that most participants (61.5%) preferred the "delete and replace" filter for protecting their privacy, while 30.8% preferred the redact filter. Only one participant preferred option number one, the abstraction filter. See Figure 6.4.

Figure 6.4: The preferred filters for protecting privacy

From this question, we can see that different users may prefer different types of filters. Thus, providing the robots' systems with different types of filters, and giving users the ability to choose the preferred filter and adjust the privacy settings, could be the best option to satisfy most users. In addition, it seems that many users prefer to apply a filter that draws less attention in order to achieve a high level of protection, which supports one of our hypotheses. Given these answers, our hypothesis that the proposed "delete and replace" filter would solve the problem of drawing attention and would be the most preferred filter has been supported.

6.2.3 The Main Survey

In this survey, as mentioned before, we asked many detailed questions concerning users' opinions of the protection techniques that could be used on social robots to provide privacy-sensitive social robots. Some questions required choosing only one option, while others allowed the participants to choose more than one.

6.2.3.1 Demographic Information

We started with the demographic questions, which are significant in this survey because, while all people are concerned about privacy, the level of concern and the norms of privacy vary with culture, age, gender, and education.

As mentioned earlier, 82 participants were involved in this survey. The majority of respondents were female (52.4%). Most of the participants (63.4%) were aged between 25 and 34 years old. The participants' countries varied; the majority were from Saudi Arabia (54.87%), followed by the United States of America (41.46%). Most of the participants, 40 out of 82, had a Master's degree, 28 had a Bachelor's degree, and a few had a degree higher than a Master's or lower than a Bachelor's. Regarding employment status, we let the participants choose more than one option, and most of the participants were students and employees for wages (43.9% each). The demographics of the participants are presented in Table 6.1.

Demographics                                                   #    %
Gender             Male                                        39   47.6
                   Female                                      43   52.4
Age                18-24 years old                             16   19.5
                   25-34 years old                             52   63.4
                   35-44 years old                             7    8.5
                   45-54 years old                             4    4.9
                   55-64 years old                             3    3.7
                   65 years or older                           0    0
Countries                                                      2    2.43
                   KSA                                         45   54.87
                   USA                                         34   41.46
                   India                                       1    1.21
Education          Less than high school                       1    1.2
                   High school, diploma or equivalent          8    9.8
                   Some college, no degree                     2    2.4
                   Associate degree (e.g. AA, AS)              1    1.2
                   Bachelor's degree (e.g. BA, BS)             28   34.1
                   Master's degree (e.g. MA, MS, MEd)          40   48.8
                   Professional degree (e.g. MD, DDS, DVM)     1    1.2
                   Doctorate (e.g. PhD, EdD)                   1    1.2
Employment Status  Employed for wages                          36   43.9
                   Self-employed                               7    8.5
                   Out of work and looking for work            6    7.3
                   Out of work but not looking for work now    4    4.9
                   A student                                   36   43.9
                   Military                                    0    0
                   Retired                                     0    0
                   Unable to work                              0    0

Table 6.1: Demographic information

6.2.3.2 General Background and Concerns

After the demographic questions, the participants were asked several questions about their general background. The participants were asked to choose which smart devices they have, in order to determine whether they are interested in technology and smart devices or

not. The results indicated that most of the participants have smartphones (92.7%), while tablets were the second most popular device (45.1%). In addition, 20.7% of participants own smart speakers, and 19.5% have smart security and monitoring cameras. Four out of 82 participants own vacuuming robots, and two have social robots. Moreover, some participants have other devices, such as smartwatches and Nest devices. See Figure 6.5 for more details.

Figure 6.5: Smart devices that are owned by participants

From these results, we can see that the participants like to have smart devices and keep up with technology. In general, most of the participants use smartphones, a few use robots, and the devices with intermediate adoption were smart speakers and monitoring cameras.

In addition, in order to know whether the participants have social robots around them, we asked: "Is there a social robot at your home, at your workplace, or near you (e.g., street, neighbor)?" Each answer to this question led the participants to different follow-up questions. The results demonstrated that most of the participants (85.4%) did not have a social robot and did not see one around them, while 14.6% of participants have a social robot at their home, at their workplace, or around them. See Figure 6.6. To analyze the data with descriptive statistics, we coded each answer as follows: Yes=1 and No=2. The mean was 1.85, showing that the majority of the participants did not have social robots around them. The standard deviation was 0.35, while the mode, the value that appears most often, was 2 (No), meaning no social robots were around them. The Confidence Level (95.0%) was 0.078.

Figure 6.6: Number of participants who have or do not have a social robot

The participants who have a social robot, 12 out of 82, were asked to identify where they use it. The participants were able to choose more than one option. The results revealed that ten of the participants have a social robot at home, two have it at their workplace, and two mentioned, by choosing the option "other," that they had it at their school. See Figure 6.7.

Figure 6.7: The places where participants have their social robots

The participants who do not have a social robot, 70 out of 82, were asked whether they wanted to have one. From this question, we can determine the number of participants interested in owning social robots. The results indicated that the largest group (40%) said that they might plan to own a robot, and the second largest group (37.1%) confirmed their desire to own one. A few (8.6%) said that they do not know yet whether they want a social robot, and the rest of the participants (14.3%) reported that they do not want one. See Figure 6.8.

To analyze the data with descriptive statistics, we coded each answer as follows: Yes=3, Perhaps & Don't Know=2, and No=1. The mean was 2.22, showing that the participants were, on average, not sure about their desire to own a social robot. The standard deviation was 0.68, while the mode, the value that appears most often, was 2 (Perhaps or Don't Know), meaning that the participants were not sure whether they want a social robot. The Confidence Level (95.0%) was 0.16.

Figure 6.8: Participants’ desire to own a social robot

The participants who do not want to own a robot mentioned in their answers that the place where they could have robots was the workplace, and that the robots' tasks they would need assistance with were industrial tasks, hospital tasks, baby monitoring, eldercare, care for people with special needs, teaching, workstation tasks, and driving a car. This could indicate that they might not want robots at home but could accept them in other places, or that they might not want to see them anywhere in order to protect their privacy, since their answers showed they were very concerned about it. We kept those participants because we want to determine the preferred techniques that

they could need. Thus, if the new generation of robots includes the techniques they want, they could change their minds about owning a robot, or at least be comfortable if they have to deal with a robot in their environments, such as workplaces, schools, or hospitals. In fact, they were unwilling to own a robot because of privacy concerns.

In addition, the participants who do not have a social robot but want one were asked where they would want to use it. In this question, the participants were also able to choose more than one place. The results demonstrated that most of the participants (78.6%) want to own a social robot at home, while 42.9% want one at their workplace. Four out of 70 participants chose the option "other": one of them wanted to use the social robot at kids' places to monitor them, while the rest mentioned that they did not want to use a social robot anywhere, for protection purposes. See Figure 6.9.

Figure 6.9: The places where participants want to have social robots

All 82 participants, whether or not they have a social robot, were combined again to answer the remaining questions in this survey. The participants were asked: "If you have or could have a social robot, what types of tasks do you want the robot to assist you with?" The participants were able to choose more than one option. The results indicated that most of the participants, 55 out of 82, want social robots to assist them with daily life tasks, such as vacuuming, cooking, cleaning, and waking a person up, while the next largest group, 34 out of 82, want social robots for entertainment. Many participants, 28 out of 82, prefer using social robots for driving cars and baby monitoring. 26 out of 82 participants want to use social robots to care for people with special needs, while 25 out of 82 need them for childcare. Some participants, 21 out of 82, prefer using social robots in teaching, while 20 want them for eldercare. A few participants want social robots for other tasks, such as workstation tasks, industrial tasks, hospital tasks, and military tasks. See Figure 6.10.

Figure 6.10: The tasks that the participants want to use the social robot for

From this question, we can see that the tasks with which participants most want robot assistance were daily life tasks, routine tasks, and home care tasks, such as cooking, cleaning, baby monitoring, entertaining, and driving a car. The mid-ranked tasks involved special purposes and taking care of people, such as eldercare, childcare, special needs care, and teaching, while the least requested tasks were those outside homes, such as workstation tasks, hospital tasks, industrial tasks, and military tasks. From all of the previous questions, we can see that most of the participants were interested in technology and could desire to own new technology and devices, such as robots that could assist them in their daily life. However, participants could have concerns that lead to an unwillingness to own robots, or concerns when social robots are around them.

Indeed, in order to determine the users' concerns about social robots, we asked the participants: "If you have or could have a social robot in your home, work, or near your environment (e.g., street, neighborhood), what are your most general concerns about the social robot, assuming that the social robot could record and store video and audio records, or that data is being streamed to a control center (e.g., stored on a remote cloud or robot system)?" The results indicated that most of the participants (59.8%) were concerned most about people who could hack social robots and then harm or spy on the users. The second most common concern was the robots' video and audio recording ability, which could be misused or leak. In fact,

the third largest group of participants (51.2%) were concerned about the leakage of sensitive information or data theft, while 31.7% of participants were worried about the leakage of embarrassing information. In addition, 46.3% of participants were concerned about home security issues, such as theft or break-ins. See Figure 6.11. These results showed that most participants were concerned about their privacy and security while using social robots or when those robots were around them. Other participants were concerned about the robot's ability to do harm to people or property and its inability to perform tasks well (32.9% each). A few participants were concerned about targeted advertising and responsibility for damage or harm (25.6% and 19.5%, respectively). Of all the participants, only one indicated that he is not concerned, while two mentioned that none of the listed options concerned them, which could mean they worry about other concerns that were not listed. See Figure 6.11. In general, the greatest concern was hackers; the next was the robots' ability to do harm to people or property and their inability to perform tasks well, and the last was responsibility for damage or harm.

Figure 6.11: The users' concerns about the social robots

Given these answers, our hypothesis that violating privacy by recording private information or leaking sensitive information would be users' greatest concern when using social robots, more than utility achievement, was supported. We note that the greatest concerns were related to security and privacy, such as hacking the robot to violate the users' security and privacy, recording information, leaking information, and breaking into home security. The concerns chosen by some of the participants were related to the utility achieved, such as harm to people or property and the inability to perform tasks well, while participants were only slightly concerned about responsibility for damage or harm and targeted advertising.

After learning that users' most common concerns about social robots involved violations of their privacy and security, we strove to learn more about how significant privacy is to users at home and in the workplace. Thus, we used a scaling approach (Extremely Important, Important, Neither Important nor Unimportant, Unimportant, and Extremely Unimportant) for the following statements: "Privacy is important to me at home" and "Privacy is important to me at the workplace."

The results demonstrated that 62 out of 82 participants were extremely concerned about their privacy at home and 43 out of 82 were extremely concerned about their privacy at the workplace. Only two out of 82 participants were extremely unconcerned about their privacy at home, and only three were extremely unconcerned about their privacy at the workplace. Figure 6.12 shows that privacy is important both at home and in the workplace, but more so at home. To analyze the data with descriptive statistics, we coded each answer as follows: Extremely Important=1, Important=2, Neither Important nor Unimportant=3, Unimportant=4, and Extremely Unimportant=5. For the importance of privacy at home, the mean was 1.37, showing that privacy is very important to users at home, and the standard deviation was 0.82, while the mode, the value that appears most often, was 1 (Extremely Important). For the importance of privacy at the workplace, the mean was 1.74, showing that privacy is important to users at the workplace, and the standard deviation was 1.00, while the mode was also 1 (Extremely Important).

Figure 6.12: Level of significance of privacy for users at home and in the workplace

In addition, we sought to learn more about users' privacy concerns regarding certain objects, information, locations, and situations. We again used the scaling approach (Extremely Concerned, Concerned, Neither Concerned nor Unconcerned, Unconcerned, and Extremely Unconcerned) and asked the participants: "How concerned are you about privacy related to the following: objects, information, locations, and situations?" The participants were required to rank every object and location, while they had the choice to rank every piece of information and every situation, or only some. Regarding the objects, the results illustrate that the majority of participants were extremely concerned about their credit cards, bank statements, money safe box, and confidential or private documents and papers (65, 64, 59, and 56 participants out of 82, respectively). Fewer participants were extremely concerned about prescription medications, illegal drugs, and weapons (26, 26, and 29 participants out of 82, respectively). Between 35 and 55 participants were concerned about

their home security devices, photographs, jewelry, mail, keys, and intimate apparel, respectively. The objects that participants were most often either extremely concerned or concerned about were confidential or private documents and papers, followed by their credit cards and bank statements. The objects that participants were most often neither concerned nor unconcerned about were prescription medications, followed by weapons, and the objects that participants were most often either extremely unconcerned or unconcerned about were illegal drugs. See Figure 6.13 for more details. To analyze the data with descriptive statistics, we coded each answer as follows: Extremely Concerned=1, Concerned=2, Neither Concerned nor Unconcerned=3, Unconcerned=4, and Extremely Unconcerned=5. Table 6.2 shows descriptive statistics for each object, ordered from the objects users were most concerned about regarding their privacy.
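The ordering used in Table 6.2 (and in the later tables of this section) can be produced mechanically from the coded answers, as in the following sketch; the response lists here are hypothetical stand-ins for the survey data, since a lower mean indicates greater concern under this coding.

```python
# Minimal sketch: rank items by mean of the 5-point coding
# (1 = Extremely Concerned ... 5 = Extremely Unconcerned).
from statistics import mean, median, mode

ratings = {                          # item -> coded answers (hypothetical)
    "Credit Card":   [1, 1, 1, 2, 1, 3],
    "Weapons":       [2, 3, 1, 4, 2, 2],
    "Illegal Drugs": [3, 2, 5, 1, 3, 2],
}

for item, answers in sorted(ratings.items(), key=lambda kv: mean(kv[1])):
    print(f"{item}: mean={mean(answers):.2f} "
          f"median={median(answers)} mode={mode(answers)}")
```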

Figure 6.13: Participants' rating of the objects they are concerned about regarding privacy

Objects                            Mean  Median  Mode  Std. Deviation
Credit Card                        1.39  1       1     0.93
Bank Statement                     1.43  1       1     0.98
Confidential Documents & Papers    1.43  1       1     0.81
Money Safe Box                     1.57  1       1     1.07
Photographs                        1.57  1       1     0.90
Home Security Devices              1.60  1       1     0.93
Mail                               1.74  1       1     0.96
Key                                1.79  1       1     1.03
Jewelry                            1.84  1       1     1.15
Intimate Apparel                   2.02  2       1     1.21
Prescription Medications           2.25  2       1     1.10
Weapons                            2.28  2       1     1.24
Illegal Drugs                      2.47  2       1     1.35

Table 6.2: Descriptive statistics for objects

Regarding the information, the results demonstrated that the majority of participants were extremely concerned about their financial information, passwords, and personal identification (62, 61, and 56 participants, respectively). Fewer participants were extremely concerned about the leaking of their employment names, meaning their positions at work, and their personal habits (17 and 18 participants, respectively). Between 20 and 30 participants were concerned about their life or work plans, health information, physical appearance, and political views, respectively. See Figure 6.14 for more details. In addition, the results showed that the information most participants were concerned about was their health information, personal habits, political views, and physical appearance. See Figure 6.14. The information that participants were most often either extremely concerned or concerned about was their passwords, followed by their financial information and personal identification. The information that participants were most often neither concerned nor unconcerned about was employment names, followed by personal habits, and the information that participants were most often either extremely unconcerned or unconcerned about was also employment names. See Figure 6.14 for more details. To analyze the data with descriptive statistics, we coded each answer as follows: Extremely Concerned=1, Concerned=2, Neither Concerned nor Unconcerned=3, Unconcerned=4, and Extremely Unconcerned=5. Table 6.3 shows descriptive statistics for each piece of information, ordered from the information users were most concerned about regarding their privacy.

Figure 6.14: Participants' rating of the information they are concerned about regarding privacy

Information              Mean  Median  Mode  Std. Deviation
Password                 1.37  1       1     0.83
Financial Information    1.41  1       1     0.89
Personal Identification  1.45  1       1     0.85
Health Information       2.05  2       2     1.01
Life or Work Plans       2.12  2       2     1.03
Physical Appearance      2.15  2       2     1.05
Political Views          2.16  2       2     0.99
Personal Habits          2.25  2       2     0.94
Employment Names         2.41  2       3     1.03

Table 6.3: Descriptive statistics for information

Regarding the locations, the results indicated that the majority of participants were extremely concerned about their bedrooms, followed by their bathrooms (62 and 58 participants, respectively). Fewer participants were extremely concerned about their kitchen, followed by their workplace meeting room (18 and 19 participants, respectively). Between 20 and 25 participants were concerned about their living room, the lab at work, and the workplace office, respectively. See Figure 6.15 for more details. The locations that participants were most often either extremely concerned or concerned about were their bedrooms, followed by their bathrooms, while the location that participants were most often neither concerned nor unconcerned about was their kitchen. The location that participants were most often either extremely unconcerned or unconcerned about was the lab at work. See Figure 6.15 for more details. To analyze the data with descriptive statistics, we coded each answer as follows: Extremely Concerned=1, Concerned=2, Neither Concerned nor Unconcerned=3, Unconcerned=4, and Extremely Unconcerned=5. Table 6.4 shows descriptive statistics for each location, ordered from the areas users were most concerned about regarding their privacy.

Figure 6.15: Participants' rating of the locations they are concerned about regarding privacy

Locations               Mean  Median  Mode  Std. Deviation
Bedroom                 1.37  1       1     0.79
Bathroom                1.45  1       1     0.83
Living Room             2.06  2       2     0.93
Workplace Office        2.26  2       2     1.00
Lab at Work             2.32  2       1     1.11
Workplace Meeting Room  2.34  2       2     0.99
Kitchen                 2.40  2       3     1.01

Table 6.4: Descriptive statistics for locations

Regarding the situations, the results revealed that the majority of participants were extremely concerned about the following situations: nakedness, intimate relationships, private meetings or conversations, and taking a shower (63, 62, 59, and 59 participants, respectively). Fewer participants were extremely concerned about watching

television and playing with a baby (12 and 20 participants, respectively). Between 45 and 50 participants were concerned about the following situations: invasion of private time and working on a computer. See Figure 6.16 for more details. The situations that participants were most often either extremely concerned or concerned about were private meetings or conversations, followed by nakedness and intimate relationships. The situations that participants were most often neither concerned nor unconcerned about were playing with a baby and watching television, and the situation that participants were most often either extremely unconcerned or unconcerned about was playing with a baby. See Figure 6.16 for more details. To analyze the data with descriptive statistics, we coded each answer as follows: Extremely Concerned=1, Concerned=2, Neither Concerned nor Unconcerned=3, Unconcerned=4, and Extremely Unconcerned=5. Table 6.5 shows descriptive statistics for each situation, ordered from the situations users were most concerned about regarding their privacy.

Figure 6.16: Participants' rating of the situations they are concerned about regarding privacy

Situations                       Mean  Median  Mode  Std. Deviation
Nakedness                        1.37  1       1     0.86
Private Meeting or Conversation  1.41  1       1     0.87
An Intimate Relationship         1.43  1       1     0.93
Taking Shower                    1.50  1       1     1.00
Working on Computer              1.58  1       1     0.84
Invading Private Time            1.58  1       1     0.92
Playing With a Baby              2.48  2       3     1.20
Watching Television              2.64  3       3     0.97

Table 6.5: Descriptive statistics for situations

In summary, participants' levels of privacy concern varied across certain objects, information, locations, and situations. The objects that most of the participants worried about were their credit cards, bank statements, and money safe box, and the information that most

participants were concerned about was their financial information and passwords. Moreover, the locations participants were most concerned about were their bedrooms and bathrooms, and the situations most participants were concerned about were nakedness, intimate relationships, private meetings or conversations, and taking a shower. In contrast, the objects most participants were not very concerned about were weapons, illegal drugs, and prescription medications; the information they were not very concerned about was employment names and personal habits; the location they were not very concerned about was the kitchen; and the situations they were not very concerned about were watching television and playing with a baby. Thus, we can conclude that the participants were most concerned about anything related to money, to secrets such as passwords or conversations, and to situations that normally occur in the bedroom or bathroom, such as nakedness and private relationships. Comparing our results with those of previous studies, we were able to discover new privacy concerns of users, such as confidential or private documents and papers, intimate apparel, weapons, the money safe box, life or work plans, the workplace office, the workplace meeting room, intimate relationships, private meetings or conversations, working on a computer, and invasion of private time.

Given the answers to those questions, our hypothesis that users are more concerned about privacy related to financial objects or information, the bedroom and bathroom areas, and situations where users are naked has been supported. Indeed, the descriptive statistics tables for those objects, information, locations, and situations confirm that our hypothesis was supported.

6.2.3.3 The Possible Techniques that Could Assist in Mitigating Privacy Violation

After learning the general background of the participants and their privacy concerns, both in general and with social robots around them, we described the possible techniques that could be used on social robots to protect privacy. Then, we asked the participants to choose the techniques they would most prefer to use on their social robots in order to have privacy-sensitive social robots. The participants were given a list of choices and could choose only one. The goal of the following part is to understand users' preferences regarding the different techniques that could be used on social robots to mitigate privacy violations, in order to provide users with the privacy-sensitive social robots they prefer and need. Thus, we asked the participants about the most preferred techniques for the robots' cameras during the robots' working period and when the working time is finished; about the most preferred techniques for the robots' microphones during the working period and when the working time is finished; and finally, about the most preferred techniques for limiting the robot's

movements while performing a task. Regarding the robots' cameras, we asked the participants: "What are the most trusted and comfortable ways that could be used with the robot's camera while performing a task to provide privacy (e.g., credit card)?" The results showed that half of the participants, 41 out of 82, preferred using both adaptive filters and automatic hardware covers. As mentioned before, an adaptive filter means using more than one filter, especially our proposed "delete and replace" filter, which is meant to be adaptive and adjusted according to certain information, objects, locations, situations, users, or times. See Figure 6.17. A few participants, nine out of 82, preferred using one type of filter, such as blurring, pixelation, redacting, or replacing, that changes the appearance of only the private part of the image/scene (only the credit card) seen by the robot. In addition, two other groups, each also of nine out of 82 participants, preferred two different techniques. The first group preferred adaptive filters that could be adjusted according to certain information, objects, locations, situations, users, or times to provide an image/scene that seems real and unmodified. The other group chose the automatic hardware cover, such as a human eyelid, that covers the camera temporarily until the robot moves away, like a human blinking, to avoid private areas, information, or objects. The rest of the participants, 14 out of 82, chose applying one type of filter, such as blurring, pixelation, redacting, or replacing, that changes the appearance of the entire image/scene seen by the robot (the whole image with the credit card). See Figure 6.17. To analyze the data with descriptive statistics, we coded each answer as follows: Applying a type of filter to the entire image=1,

Using a type of filter only for the private part of the image=2, Using adaptive filters=3, Using an automatic hardware cover=4, and Using both adaptive filters and automatic hardware covers=5. Table 6.6 shows descriptive statistics for each of those choices.

Figure 6.17: Participants’ preferences regarding the ways that could be used with the robot’s camera while performing a task to provide privacy

Choices  Mean   Median  Mode  Std. Deviation  Variance  Confidence (95.0%)
1        0.2    0       0     0.37            0.14      0.08
2        0.109  0       0     0.31            0.09      0.06
3        0.109  0       0     0.31            0.09      0.06
4        0.109  0       0     0.31            0.09      0.06
5        0.5    0.5     1     0.50            0.25      0.11

Table 6.6: Descriptive statistics for the question’s choices

The results illustrated that participants generally have different preferences. Thus, providing these social robots with a variety of techniques that can be adapted and adjusted by users could be the best way to produce privacy-sensitive robots that users want and are very comfortable with. As we noticed from the results, the choice most participants preferred is using multiple

techniques. This asserted that giving users more options to control their robots could be better. Most of the participants likely recognized that using only the automatic cover would not be a good idea, because robots sometimes need to see while performing their tasks. Thus, having filter techniques in addition to the cover could further assist in achieving the goal of protecting users' privacy.
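One way to realize the adaptive behavior described above is a small rule table that maps the detected context (object, location, situation) to a filter. The sketch below is our own minimal illustration; all rule and filter names are assumptions, not the thesis design.

```python
# Minimal sketch of adaptive filter selection; rule and filter names assumed.
DEFAULT_FILTER = "delete_and_replace"

RULES = [
    # (predicate over the detection context, filter to apply)
    (lambda ctx: ctx["object"] == "credit_card",             "delete_and_replace"),
    (lambda ctx: ctx["location"] in ("bedroom", "bathroom"), "hardware_cover"),
    (lambda ctx: ctx["situation"] == "private_call",         "redact"),
]

def choose_filter(ctx: dict) -> str:
    """Return the filter for the current object/location/situation context."""
    for predicate, filter_name in RULES:
        if predicate(ctx):
            return filter_name
    return DEFAULT_FILTER

ctx = {"object": "credit_card", "location": "office", "situation": "none"}
print(choose_filter(ctx))   # -> delete_and_replace
```

A user-facing privacy setting could then simply add, remove, or reorder rules, which matches the preference for adjustable filters expressed by the participants.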

In addition, we asked the participants: "What are the most trusted and comfortable ways that could be used with the robot's camera while performing a task to protect private situations (e.g., changing clothes)?" The results demonstrated that most of the participants (51.2%) preferred using multiple techniques, meaning those mentioned in the other choices. Only 9.8% preferred using a single type of filter, such as blurring, pixelation, redacting, or replacing, that changes the appearance of the image/scene seen by the robot. A few participants (2.4%) wanted their robots to automatically turn and walk away when detecting a privacy-sensitive situation, while a few others (7.3%) preferred the technique of telling the robot specific words, such as "turn your camera off," "close your eyes," or "go away." See Figure 6.18. The second most preferred technique was using adaptive filters that could be adjusted according to certain information, objects, locations, situations, users, or times; this method was attractive to 15.9% of the participants. It was followed by the technique of using an automatic cover, such as a human eyelid, that covers the camera temporarily until the robot moves away, like a human blinking, in order to avoid privacy-sensitive situations; this method was chosen by 13.4% of the participants. See Figure 6.18.

To analyze the data with descriptive statistics, we coded each answer as follows: Using one type of filter for the entire image=1, Using adaptive filters=2, Using an automatic hardware cover=3, The robot automatically turns and walks away when detecting a privacy-sensitive situation=4, Telling the robot specific words=5, and Using multiple techniques=6. Table 6.7 shows descriptive statistics for each of those choices.

Figure 6.18: Participants’ preferences regarding the ways that could be used with the robot’s camera while performing a task to protect private situations

Choices  Mean  Median  Mode  Std. Deviation  Variance  Confidence (95.0%)
1        0.09  0       0     0.29            0.08      0.06
2        0.15  0       0     0.36            0.13      0.08
3        0.13  0       0     0.34            0.11      0.07
4        0.02  0       0     0.15            0.02      0.03
5        0.07  0       0     0.26            0.06      0.05
6        0.51  1       1     0.50            0.25      0.11

Table 6.7: Descriptive statistics for the question’s choices

The results clarified that participants preferred using a variety of techniques with social robots, which again suggests that using multiple techniques could be the best approach for producing privacy-sensitive robots. In fact, the results show that the most preferred techniques in this question were those controlled by the social robots themselves, such as the filters and covers, rather than techniques requiring actions from users, such as telling the robots what to do. See Figure 6.18. Given those answers, our hypothesis that during the working time users prefer their robots to use both an adaptive filter and an automatic cover was supported. Furthermore, regarding the robots' cameras, we asked the participants one last question: "What are the most trusted and comfortable ways that could be used to disable the camera (turn the camera off) after finishing tasks (when the robot does not work)?" The results indicated that most participants (35.4%) preferred using an automatic cover that covers the robot's camera completely after finishing tasks, while a few participants (14.6%) chose turning off the connection to the Internet in order to protect their privacy. The second largest group of participants (30.5%) preferred using an on/off button placed on the robot, while 19.5% preferred using the robot's program to turn the camera off. See Figure 6.19. To analyze the data with descriptive statistics, we coded each answer as follows: Use the on/off button=1, Use the robot's program=2, Turn off the connection to the Internet=3, and Use an automatic cover=4. Table 6.8 shows descriptive statistics for each of those choices.

Figure 6.19: Participants' preferences regarding the ways that could be used to disable the robot's camera after finishing tasks

Choices  Mean  Median  Mode  Std. Deviation  Variance  Confidence (95.0%)
1        0.30  0       0     0.46            0.21      0.10
2        0.19  0       0     0.39            0.15      0.08
3        0.14  0       0     0.35            0.12      0.07
4        0.35  0       0     0.48            0.23      0.10

Table 6.8: Descriptive statistics for the question’s choices

The results confirmed that the participants would be more comfortable using covers: even if the Internet connection were turned back on, nothing would appear except darkness. Given those answers, our hypothesis that after finishing tasks users prefer their robots to use the automatic cover has been supported. Regarding the robots' microphones, we asked the participants: "What are the most trusted and comfortable ways that could be used to disable the microphone while performing a task?"

The results illustrated that most participants (37.8%) preferred turning the microphone off using the microphone's on/off button to protect their privacy. The second largest group of participants (31.7%) preferred using an automatic cover that covers the microphone completely when the robot does not require it to complete a task, and that opens automatically when the user performs a specific action, such as sending an alert to the robot to turn the microphone on. See Figure 6.20. These participants preferred the techniques that disable the microphone, that is, methods that do not allow the microphone to receive any sound. The other two groups of participants preferred techniques that allow the microphone to receive sound until the user prevents it. The first group (20.7%) preferred techniques that work by telling the robot words such as "turn your microphone off" or "close your ears." The second group (9.8%) preferred an encryption method that encrypts everything the robot hears until the robot hears its wake-up word, such as its name; the robot then starts to respond, but if the user stops speaking, the robot waits several seconds and then resumes the encryption method. See Figure 6.20. To analyze the data with descriptive statistics, we coded each answer as follows: Use the on/off button=1, Use an automatic cover=2, Tell the robot words=3, and Use the encryption method=4. Table 6.9 shows descriptive statistics for each of those choices.
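The encryption option can be pictured as a small state machine over the audio stream, as in the sketch below; the wake-word detector, the key handling, and the timeout value are all assumptions for illustration, not the thesis design.

```python
# Minimal sketch of the wake-word-gated encryption method.
import time
from cryptography.fernet import Fernet

cipher = Fernet(Fernet.generate_key())  # real keys would be user-managed
SILENCE_TIMEOUT = 5.0                   # seconds before re-locking (assumed)

def heard_wake_word(chunk: bytes) -> bool:
    return b"robot" in chunk            # placeholder for a real detector

def process_audio(chunks):
    awake_until = 0.0
    for chunk in chunks:
        now = time.monotonic()
        if heard_wake_word(chunk):               # a fuller version would also
            awake_until = now + SILENCE_TIMEOUT  # extend this while speech continues
        if now < awake_until:
            yield ("plain", chunk)                   # robot may respond
        else:
            yield ("locked", cipher.encrypt(chunk))  # stored only encrypted
```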

Figure 6.20: Participants' preferences regarding the ways that could be used with the robot's microphone while performing a task to protect privacy

Choices  Mean  Median  Mode  Std. Deviation  Variance  Confidence (95.0%)
1        0.37  0       0     0.48            0.23      0.10
2        0.31  0       0     0.46            0.21      0.10
3        0.20  0       0     0.40            0.16      0.08
4        0.09  0       0     0.29            0.08      0.06

Table 6.9: Descriptive statistics for the question’s choices

The results confirmed that the participants would be more comfortable preventing the microphone from receiving any sound at all, even if the sound would be encrypted. This reflects that users may need a high level of protection. Given those answers, our hypothesis that during the working time users prefer their robots to use the encryption method was not supported. In addition, we asked the participants: "What are the most trusted and comfortable ways that could be used to secure the microphone after finishing tasks?"

The results showed that most of the participants (34.1%) preferred using an automatic cover that covers the microphone completely, while a few participants (15.9%) preferred turning off the connection to the Internet. This suggests that most of the participants could not trust that the Internet would stay off, since it could be turned on by someone else, so they would be more comfortable with the cover for protecting their privacy. See Figure 6.21. The rest of the participants preferred either turning the microphone off using the microphone's on/off switch (31.7%) or turning it off using the robot's program (18.3%). See Figure 6.21. To analyze the data with descriptive statistics, we coded each answer as follows: Use the on/off button=1, Turn off the connection to the Internet=2, Turn the microphone off by using the robot's program=3, and Use an automatic cover=4. Table 6.10 shows descriptive statistics for each of those choices.

Figure 6.21: Participants' preferences regarding the ways that could be used with the robot's microphone after finishing tasks to protect their privacy

Choices  Mean  Median  Mode  Std. Deviation  Variance  Confidence (95.0%)
1        0.31  0       0     0.46            0.21      0.10
2        0.15  0       0     0.36            0.13      0.08
3        0.18  0       0     0.38            0.15      0.08
4        0.34  0       0     0.47            0.22      0.10

Table 6.10: Descriptive statistics for the question’s choices

Given those answers, our hypothesis that after finishing tasks users prefer their robots to use the automatic cover has been supported. Regarding the robots' movements, we asked the participants: "What are the most trusted and comfortable ways that could be used to limit the robot's movement while performing a task to protect private areas?" The results revealed that most participants (45.1%) preferred using multiple techniques, meaning those mentioned in the other choices. The second largest group of participants (22%) preferred to tell the robot specific words, such as "go away" or "do not enter," in order to limit its movements. A few participants (13.4%) preferred that the robots use navigation, such as obstacles or semantic mapping, to limit their movements, while 19.5% of the participants preferred connecting the robot with environment sensors, such as an infrared sensor, so that if the robot detects (from the infrared) that someone is in a private area, it will not enter. See Figure 6.22. To analyze the data with descriptive statistics, we coded each answer as follows: Using navigation=1, Telling the robot specific words=2, Connecting the robot with environment sensors=3, and Using multiple techniques=4. Table 6.11 shows descriptive statistics for each

of those choices.
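To illustrate how two of these techniques could be combined, the sketch below gates entry using a semantic map of no-go zones plus an infrared presence sensor; the zone names and the boolean sensor reading are assumptions for illustration.

```python
# Minimal sketch: block entry to a private zone when the infrared
# sensor reports that someone is inside it. Zone names are assumed.
PRIVATE_ZONES = {"bedroom", "bathroom"}

def may_enter(zone: str, ir_detects_person: bool) -> bool:
    """Combine the semantic map with the IR presence reading."""
    if zone in PRIVATE_ZONES and ir_detects_person:
        return False    # someone is in a private area: stay out
    return True

print(may_enter("bedroom", ir_detects_person=True))   # False
print(may_enter("kitchen", ir_detects_person=True))   # True
```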

Figure 6.22: Participants’ preferences regarding the ways that could be used to limit the robot’s movement while performing a task to protect private areas

Choices  Mean  Median  Mode  Std. Deviation  Variance  Confidence (95.0%)
1        0.13  0       0     0.34            0.11      0.07
2        0.21  0       0     0.41            0.17      0.09
3        0.19  0       0     0.39            0.15      0.08
4        0.45  0       0     0.50            0.25      0.11

Table 6.11: Descriptive statistics for the question’s choices

Given those answers, our hypothesis that during the working time more than half of the users would prefer their robots to use multiple techniques as a first option, and that the next group would prefer connecting robots with environment sensors as a second option, was not supported. The multiple-techniques option was the most preferred, as we assumed, but the second most preferred option was not the one we assumed. In summary, regarding the robots' cameras, microphones, and movements, we can conclude from the results that most of the participants preferred a high level of protection, so they would be more comfortable using multiple

techniques of protection. The results reflected that, regarding the robots' cameras, the most preferred techniques were using adaptive filters and automatic covers, which means the participants want the robots to use their cameras only when needed to complete their tasks. Regarding the robots' microphones, the most preferred techniques were turning the microphones off using the on/off button or using automatic covers, which also confirmed that users prefer their robots to use their microphones only when needed. Regarding the robots' movements, the participants preferred using many different techniques to limit them.

6.2.3.4 Additional Features of the Social Robot for Privacy Protection

After the preferred-techniques part, we described more features of social robots in order to learn more about users' preferences regarding those features. Thus, we asked the participants: "If you have a social robot at home or at the workplace, do you want to allow every household member or workplace member to use the robot, in other words, to share the use of the robot?" and we allowed the participants to choose only one answer. The results indicated that the largest group of participants (41.5%) said that they might allow every member to share the use of their robots, and a few participants (8.5%) did not know whether they really wanted to allow it. See Figure 6.23. The second largest group of participants (26.8%) were sure they do not want to share the use of their robots with other members, while 23.2% of the participants really want to share the use of their robots. See Figure 6.23.

Figure 6.23: Users' desire to allow sharing the use of their social robots

After that question, we asked the participants: "If other members use your social robot, do you prefer to add an authentication feature to it?" With this question, we want to know whether an authentication method that allows users to create different user profiles would be comfortable for them. The answers to this question are significant because, in general, most social robots that serve different employees at the workplace are designed to be used by every workplace member, and the same may hold for social robots that serve different members at home. The results indicated that most of the participants, 50 out of 82, preferred to use an authentication method on their social robots, while a few participants, five out of 82, did not. Some participants were not sure whether they wanted this feature: 25.6% of participants said they might prefer to add the authentication method, while 7.3% did not know whether they want it. See Figure 6.24.

To analyze the data with descriptive statistics, we coded each answer as follows: Yes=3, Perhaps & Don't Know=2, and No=1. The mean was 2.54, showing that participants leaned toward adding this feature to their robots. The standard deviation was 0.61, while the mode, the value that appears most often, was 3 (Yes), meaning that participants want to have the authentication method when their robots are shared by many members. The Confidence Level (95.0%) was 0.13.

Figure 6.24: Users' desire to add an authentication method to social robots that are used by different members

The results of those two questions clarified that the majority of participants did not want to allow different members to use their robots freely, and that they wanted an authentication method if their social robots have to be used by every member. These preferences exist because users want a high level of privacy and security protection. Given those answers and the statistical analysis of this question, our hypothesis that users want to use an authentication method if every household or workplace member uses the robot has been supported.

Because there are many different authentication methods that could be applied to robots' systems, we decided to ask the participants about the methods they prefer. Thus, we asked: "What are the most appropriate authentication methods that the social robot can use to authenticate the authorized user?" and we gave them the ability to choose more than one method. The results demonstrated that most of the participants (62.2%) wanted to use the face recognition method, followed by the password (58.5%) and the voice recognition method (56.1%). A few of the participants mentioned other types of authentication methods, such as fingerprint, multi-factor methods, and NFC (4.8%, 6%, and 1.2%, respectively). See Figure 6.25.
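Letting each user activate a preferred method could be realized with a small dispatcher, as in the sketch below; the password check uses a standard key-derivation function, while the face and voice recognizers are placeholders, since wrapping real biometric models is beyond this illustration.

```python
# Minimal sketch of a pluggable authentication dispatcher; the method
# registry and recognizer placeholders are assumptions for illustration.
import hashlib
import hmac

def check_password(supplied: str, stored_hash: bytes, salt: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", supplied.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

AUTH_METHODS = {
    "password": check_password,
    # "face": face_recognizer,    # would wrap a real face-recognition model
    # "voice": voice_recognizer,  # would wrap a speaker-verification model
}

def authenticate(method: str, *args) -> bool:
    checker = AUTH_METHODS.get(method)
    return bool(checker and checker(*args))

salt = b"per-user-salt"
stored = hashlib.pbkdf2_hmac("sha256", b"s3cret", salt, 100_000)
print(authenticate("password", "s3cret", stored, salt))   # True
```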

Figure 6.25: The preferred authentication methods

The results showed that it would be better to provide robots with different types of authentication methods and give users the ability to choose and activate the one most comfortable and suitable for them. Given those answers, our hypothesis that face or voice recognition would be the most preferred authentication method has been supported.

In addition, regarding user profiles, we asked the participants: "If the social robot that you have is used by other members, do you prefer to have the ability to adjust the robot's configuration to suit each user?" in order to know whether users would like the idea of creating a separate profile for each user. When each user has a profile, each user is able to adjust the robot's settings according to their preferences. The results illustrated that the majority of participants (56.1%) preferred to have the ability to adjust the robot's configuration to suit each user, while a few either did not want the adjusting feature or did not know whether they wanted it (6.1% each). However, 31.7% of the participants might want the ability to adjust the robot's configuration for each user. See Figure 6.26. To analyze the data with descriptive statistics, we coded each answer as follows: Yes=3, Perhaps & Don't Know=2, and No=1. The mean was 2.5, showing that the participants leaned toward having the ability to adjust the robot's configuration. The standard deviation was 0.61, while the mode, the value that appears most often, was 3

148 (Yes), which means that the participants want to be able to adjust their robots’ privacy setting. The Confdence Level (95.0%) was 0.13.

Figure 6.26: Users’ desire to adjust the robots’ configurations

In fact, to give users the ability to adjust the robot’s configuration to suit each user, the users would need a system or application to do so. Thus, we asked the participants the following: “Do you prefer to have an application to manage and control social robots?” The results showed that the majority of participants (72%) preferred to have an application that allows them to manage their social robots and set their preferences. However, only two out of 82 participants said they did not prefer having an application for their social robots. The participants who may want the application were (22%), while (3.7%) of the participants were still not sure whether they wanted it. See Figure 6.27. To analyze the data for this question with descriptive statistics, we coded each answer as follows: Yes=3, Perhaps & Don’t Know=2, and No=1. The results indicated that the mean was 2.6, showing that the participants leaned toward having an application that they can use to manage and control their robots.

The standard deviation was 0.51, while the mode, the value that appears most often, was 3 (Yes), which means that the participants want to have the application. The Confidence Level (95.0%) was 0.11.

Figure 6.27: Users’ desire to have an application to manage their social robots

The previous results asserted that designing an application that allows users to manage and control their robots, and to adjust their robots’ settings, could help users feel more comfortable and safer. Based on the answers and the statistical analysis of this question, our hypothesis that users would be more confident if their robots had an application that allows them to create a user account, through which they can control and manage their privacy settings and policies according to their preferences, has been supported.

After those questions related to the application, we wanted to know the most appropriate place for users to install the application. Thus, we asked the users the following: “If the robot is being controlled and managed by using an application, where is the best place to install that application?” and we let the participants choose only one answer. The results indicated that most of the participants (74.4%) preferred installing the application on their smartphones. Those participants would prefer to have all the applications they need on one device that could be with them most of the time, like a smartphone. However, some participants (14.6%) preferred installing the application on the device that comes with the robot, while others (8.5%) wanted to install the application on the robot’s system and manage it via the small screen that is part of the robot’s body. Only two out of 82 participants chose none of the above. See Figure 6.28.

Figure 6.28: Places where users preferred to install the application used to manage their robots

Based on the answers to this question, our hypothesis that the best place for users to install the robots’ application would be the external device that comes with their robots was not supported, because most of our participants chose the smartphone as the best place to install the application. They might have chosen the smartphone because it is the device they use most.

In addition, in order to know whether users prefer managing and controlling the policies of their social robots by themselves, we asked the participants the following: “Do you think having policies to control the social robots’ permissions and limitations, such as controlling the permission policies to access the contacts, locations, or photos, could assist in mitigating the problem of privacy violation?” The results illustrated that the majority of participants (58.5%) believed that having policies to control the social robots’ permissions and limitations could assist in mitigating the problem of privacy violation. However, a few participants (2.4%) did not believe this feature could help. The second-largest group of participants (32.9%) said that such policies may assist in protecting their privacy, while (6.1%) did not know whether this feature could solve the problem of violating the users’ privacy. See Figure 6.29. To analyze the data for this question with descriptive statistics, we coded each answer as follows: Yes=3, Perhaps & Don’t Know=2, and No=1. The results illustrated that the mean was 2.56, showing that the participants leaned toward having policies to control their social robots’ permissions and limitations to protect their privacy. In addition, the standard deviation was 0.54, while the mode, the value that appears most often, was 3 (Yes), which means that the participants want to control the policies on their robots. The Confidence Level (95.0%) was 0.12.

Figure 6.29: Participants’ opinions regarding whether policies for controlling the social robots’ permissions and limitations could assist in mitigating the problem of privacy violation
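To make the kind of policy the question describes concrete, the following is a minimal sketch assuming a default-deny rule and the resource names mentioned in the question (contacts, locations, photos); it is an illustration, not a policy format used by any actual robot.

# Default-deny policy for the resources mentioned in the survey question.
DEFAULT_POLICY = {"contacts": False, "locations": False, "photos": False}

user_policies = {
    "alice": {**DEFAULT_POLICY, "photos": True},  # grants photo access only
    "guest": dict(DEFAULT_POLICY),                # keeps the deny-all default
}

def is_allowed(user, resource):
    # Deny unless the user's policy explicitly grants the resource.
    return user_policies.get(user, DEFAULT_POLICY).get(resource, False)

assert is_allowed("alice", "photos")
assert not is_allowed("guest", "contacts")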

6.2.3.5 Awareness/Warning System

The results of the previous part show that the participants welcomed the idea of adding features that could increase the level of privacy protection. In the last part of the survey, we asked the participants several questions regarding the warning system. We started this part with the following question: “Do you believe that having a warning system that reflects the robot’s transparency could assist in mitigating a privacy violation?” The results demonstrated that the majority of participants (51.2%) believed that having a warning system on their social robots could assist in mitigating privacy violations, while a few participants (4.9%) did not believe that. However, the second-largest group of participants (36.6%) said that having a warning system may assist in mitigating the problem, while (7.3%) of the participants were not sure whether this system could help. See Figure 6.30.

To analyze the data for this question with descriptive statistics, we coded each answer as follows: Yes=3, Perhaps & Don’t Know=2, and No=1. The results demonstrated that the mean was 2.46, showing that the participants tended to agree that having a warning system could assist in increasing users’ awareness of the robots’ abilities and actions. In addition, the standard deviation was 0.59, while the mode, the value that appears most often, was 3 (Yes), which means that the participants believe the warning system could make a significant difference in protecting their privacy. The Confidence Level (95.0%) was 0.13.

Figure 6.30: Participants’ opinions regarding whether or not the warning system could help in mitigating a privacy violation

Based on the answers and the statistical analysis of this question, our hypothesis that users would believe that using a warning system could assist in mitigating the privacy violation problem has been supported.

In order to learn more about the warning system that users would prefer, we asked them several questions. We started with the following: “Do you prefer that the social robot send you alerts when it moves toward you?” The results revealed that (41.5%) of the participants preferred that their social robot send them alerts when it moves toward them, while (13.4%) did not want to receive any such alerts. The second-largest group of participants (40.2%) were not sure and clarified that they might prefer their social robots to send them alerts, while (4.9%) of the participants did not know whether they wanted that. See Figure 6.31.

Figure 6.31: Users’ preferences for receiving an alert from the social robots when the robots are moving toward them

In addition, we asked the participants: if they had that warning method and their social robots had to send them alerts when moving toward them, how would they prefer the robot to send those alerts? We let them choose more than one answer.

The results indicated that most of the participants (45.1%) preferred that their robots inform them verbally (loudly) about moving toward them. The second-largest group of participants (40.2%) preferred that their robots send a message to their smartphones, while others (19.5%) preferred a message sent to the device that comes with the robot. However, (20.7%) of the participants chose none of the above. See Figure 6.32.

Figure 6.32: Preferred methods for sending an alert when the robots are moving toward users

Based on the answers to this question, our hypothesis that users would prefer their robots to alert them verbally (loudly) when those robots are far away and either move toward them or hear them has been supported.

Furthermore, we asked the participants how they would prefer the social robot to alert users who do not notice its existence, for example, when the robot is hidden behind a sofa so the user cannot see it. We let them choose more than one answer.

The results illustrated that the most preferred method was issuing a sound, chosen by (64.4%) of the participants, while the second most preferred method was using a light, chosen by (42.7%). The other participants preferred their robots to send a message either to their smartphones or to the device that comes with the robot, at (23.2%) and (9.8%), respectively. Only (4.9%) of the participants chose none of the above. See Figure 6.33.

Figure 6.33: Preferred warning methods to alert users who do not notice the robot’s existence

Based on the answers to this question, our hypothesis that, when the robots and users are in the same place, users would prefer the robots to alert users who do not notice their existence by using a light was not supported.

Moreover, we asked the participants the following: “Do you prefer that the social robot turn on a specific color of light around its camera and microphone or other sensors to show when these sensors are on, off, or recording?”

The results showed that most of the participants (73.2%) preferred that the social robot turn on a specific color of light around its camera, microphone, or other sensors to show when these sensors are on, off, or recording, while only a few (2.4%) did not prefer that. However, (20.7%) of the participants were not sure and said they may want this warning feature, while a few of them (3.7%) said they did not know. See Figure 6.34. To analyze the data for this question with descriptive statistics, we coded each answer as follows: Yes=3, Perhaps & Don’t Know=2, and No=1. The results illustrated that the mean was 2.70, showing that the participants leaned toward having a social robot that turns on specific colors of light around its sensors. In addition, the standard deviation was 0.50, while the mode, the value that appears most often, was 3 (Yes), which means that the participants preferred that the social robot turn on specific colors of light around its sensors to show their status. The Confidence Level (95.0%) was 0.11.

Figure 6.34: Users’ desire to be warned by using colors of light

Based on the answers and the statistical analysis of this question, our hypothesis that users would prefer the social robot to turn on a specific color of light around its camera, microphone, or other sensors to show when these sensors are on, off, or recording has been supported.
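As a minimal sketch of this transparency feature, the mapping below assigns one light color per sensor status. The particular colors are assumptions chosen for illustration; the survey did not prescribe them.

# Illustrative status-to-color mapping for the lights around each sensor.
STATUS_COLORS = {"off": "red", "on": "green", "recording": "blue"}

def indicator_color(sensor, status):
    # Return the color the robot should light up around the given sensor.
    if status not in STATUS_COLORS:
        raise ValueError(f"unknown status for {sensor}: {status}")
    return STATUS_COLORS[status]

print(indicator_color("camera", "recording"))  # blue
print(indicator_color("microphone", "off"))    # red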

Additionally, we asked the participants the following: “If the social robot is in another room and it can hear you, how do you want the robot to inform you that it can hear you?” We gave them the chance to choose more than one answer. The results demonstrated that half of the participants (50%) preferred that their social robots inform them verbally (loudly) when those robots can hear them from another room. Other participants preferred that their social robots send an alert message either to their smartphones or to the device that comes with the robot, at (41.5%) and (24.4%), respectively. See Figure 6.35.

Figure 6.35: Users’ preferred ways to be informed about the robots’ ability to hear them from far away

Based on the answers to this question, our hypothesis that users would prefer their robots to alert them verbally (loudly) when those robots are far away and either move toward them or hear them has been supported.

Lastly, we asked the participants the following: “How do you prefer the social robot to announce its capability (see, record, listen, etc.)?” We allowed them to choose only one answer. The results indicated that most of the participants (32.9%) preferred that their robots inform them about their capabilities verbally. The second-largest group (26.8%) preferred their robots to use a specific color of light for each capability, while (17.1%) of the participants wanted to be informed about the robot’s capabilities via the small screen that is part of the robot’s body. The remaining participants preferred to receive a message either on their smartphones or on the device that comes with the robot, at (19.5%) and (3.7%), respectively. See Figure 6.36.

Figure 6.36: Users’ preferred methods of being informed about the robots’ capabilities

Based on the answers to this question, our hypothesis that users prefer their robots to announce their capabilities by using sound has been supported.

The last part of the survey shows that the most preferred ways to alert users were sounds, either by speaking to users or by issuing other sounds, followed by colors of light. Even though the preferred warning methods differed among participants, the participants believed that having a warning system could assist in mitigating the problem of privacy violation. Thus, allowing users to set their preferred methods for the warning system could give them a convenient way to protect their privacy.
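One way to realize such a user-configurable warning system is to let each user opt into alert channels and have the robot dispatch every warning through all of them. The sketch below is an illustration under stated assumptions: the channel names are hypothetical, and the print calls stand in for real speaker, light, and push-notification back ends.

# Hypothetical alert channels; prints stand in for real hardware back ends.
CHANNELS = {
    "verbal": lambda msg: print(f"[speaker] {msg}"),
    "light": lambda msg: print(f"[light] {msg}"),
    "smartphone": lambda msg: print(f"[push] {msg}"),
}

# Each user opts into the channels they prefer (sound and light ranked highest).
preferences = {"alice": ["verbal", "light"]}

def warn(user, message):
    # Dispatch the warning through every channel the user selected,
    # falling back to a verbal alert if no preference is stored.
    for name in preferences.get(user, ["verbal"]):
        CHANNELS[name](message)

warn("alice", "I am moving toward you.")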

6.3 Experiment Results

As we mentioned before, we conducted an experiment with eight participants and asked them the following: “What will you do with the robot, the doll, when you want to change your clothes or conduct a private call?” We told them to assume that the social robot could record and store video and audio, which could leak out, or that the data is being streamed to a control center. Regarding the changing-clothes scenario, three of the participants said that they would take the doll outside the room, three said that they would cover the doll’s eyes, and only two said that they would turn the doll’s body over.

Regarding the call scenario, five of the participants said that they would take the doll outside the room, while only one said he would leave the doll in the same place but lower his voice. The other two participants said that they would go outside the room themselves. From those results, we can see that most of the participants treated the doll as if it were a real living being. Thus, for robots that are similar to humans in appearance, it would be more comfortable for users to use an automatic cover and to place the robots’ sensors in locations analogous to those on humans, such as the cameras in the eyes and the microphones in the ears, to increase the users’ awareness. This is what we had assumed, and it has been supported by the users’ answers.

6.4 Summary

From all of the previous results, we can see that most of the participants were interested in technology and in owning social robots at home to assist them with daily life tasks, driving a car, childcare, care for people with special needs, and entertainment. However, the participants were afraid that their privacy could be violated when using social robots, for example through leakage of sensitive information, recording, and hackers. Thus, the participants desired the highest level of protection, especially in their homes. Indeed, the participants preferred social robots with an appropriate and clear outer shape that could help them understand those robots’ abilities and protect their privacy.

In addition, the protection methods that could assist in mitigating privacy violations differ, and users’ preferences regarding those methods differ as well. Regarding the constraint techniques: for the camera, the users preferred our proposed adaptive filters and the automatic covers; for the microphone, they preferred the on/off button and the automatic covers; and for movement, they preferred using multiple techniques. Moreover, the participants preferred using an authentication method, such as face recognition, to protect their privacy when their robots are shared. Furthermore, they preferred having a smartphone application that assists them in creating a private account, which could then allow them to manage and control their robots. Finally, most of the participants preferred a warning system that uses both sounds and colors to alert users, to assist in mitigating privacy violations and increase users’ awareness. Thus, providing robots with multiple techniques, and allowing users to manage and control their preferences regarding those methods and the privacy settings, could help achieve the goal of providing users with privacy-sensitive robots.

Chapter 7

Discussions and Future Work

7.1 Limitations

We believe that there are no absolute and complete security and privacy techniques that could satisfy all users. Although we could have focused on only one factor of the robots during the study, we covered many different factors: the shape of robots, the perception of robots (camera and microphone), the navigation of robots (movement), the authentication on robots, the warning system of robots, and the applications of robots. We believe that studying the most significant aspects and factors could provide significant results. We acknowledge the limitations of our study, such as the size of the sample and the participants’ explanations regarding their choices. In addition, we had limitations in applying the proposed solutions practically and testing their effectiveness. The principal limitations were caused by the lack of time.

7.2 Future Work

In future work, we believe that recruiting more participants would be better for gathering the required information. We also believe that the implementation of the proposed solutions and the testing of their results need to be investigated further. Our proposed solutions can be applied to many different social robots to provide privacy-sensitive robots, but research on privacy-sensitive robots is still needed. In addition, research on methods to increase users’ awareness of protecting their privacy while using robots remains a critical point in mitigating privacy violations.

Chapter 8

Conclusion

Robots have evolved very quickly and have become a very advanced form of technology equipped with sophisticated features. Indeed, some robots simulate living organisms in their tasks or share some of those organisms’ properties, especially humans’. Nowadays, robots can be found everywhere. Some of them are used in houses as if they were members of the household. Indeed, social robots that can communicate with humans and sense, hear, watch, process, and record everything in their environments definitely hold a lot of the users’ private information. Thus, they could violate the users’ privacy despite the great benefits they offer. There are many different sources of this violation, such as the robots’ cameras and microphones, the outer shape of the robots, the robots’ movements, the lack of a reliable authentication system, the lack of a warning system, and the characteristics of the application used for control and management.

Thus, using social robots that have all of those advanced sensors, or that lack essential features which could assist in mitigating privacy violations, could cause privacy concerns. In this research, we studied all of those factors to figure out the issues and limitations in protecting users’ privacy. We then proposed appropriate solutions that could assist in solving the problem of privacy violation and in producing privacy-sensitive robots that use the privacy protection techniques preferred by users of social robots. In order to investigate the most trusted, comfortable, and usable techniques that could assist in protecting users’ privacy while using social robots, increasing users’ awareness of privacy risks, and balancing the utility achieved against the privacy lost, three different surveys and one experiment were conducted. We highlight our results regarding the interests of the participants, their privacy concerns, and their preferences regarding the privacy protection techniques. The results can determine the participants’ awareness of the privacy risks that could arise from using social robots.

Appendix A

IRB Approval Letter

Figure A.1: IRB Approval Letter

Appendix B

Surveys

B.1 The Informed Consent

Privacy Sensitive Robots

Thank you for your participation in this survey. Please read this consent document carefully before you decide to participate in this study.

Study Title: User Study of Techniques Used to Secure Interactions with Robots and Evaluate Privacy Implications.

Purpose: The use of robots that process data remotely causes privacy concerns. The main goal of this project is to investigate usable, trusted, and comfortable techniques to bring security to the context of social robot utilization, to protect users’ privacy in the presence of social robots, to increase users’ awareness of associated privacy risks, and to find trade-offs between privacy loss and utility achieved. In addition, the project aims to increase user confidence in the privacy guarantees available in the context of robotics. Therefore, this survey gathers information about the users’ privacy concerns, their opinions on known techniques for protecting privacy, their opinions on features of robots relevant to privacy, and their opinions on techniques for increasing people’s awareness.

Procedures: The Cameras’ Covers Survey: there will be only one question to answer. The Filters’ Effects Survey: there will be three questions to answer. The Main Survey: after 5 demographic questions, the survey will start with 10/11 general background and concerns questions. Next, there are 6 questions about the preferred techniques for mitigating privacy violations that could arise from using a social robot. Subsequently, 7 questions address the additional features that can be included in social robots. Finally, the survey will end with 7 questions about warning systems that could be used on social robots. The survey could take around 15 to 17 minutes of your time.

Potential Risks of Participating: There are no potential risks associated with participating in this study.

Potential Benefits of Participating: The importance of the knowledge that will result from this study is the ability to understand the users’ needs and preferences regarding social robots and privacy, and what should be done with those robots in order to develop privacy-sensitive robotics that could satisfy users and increase the use of such robots. There are no direct benefits prepared for the participants in this study. However, if the solutions provided in this research are applied to robots, then the users could benefit from the end result of the development of privacy-sensitive robotics.

Compensation: There will be no monetary compensation for subjects.

Confidentiality: The identity of participants will be confidential. A participation ID number will be assigned to each participant, so their data will be identified only by that ID number. The survey data and the participants’ responses will be saved on a computer that will be accessed only by the principal investigators. The participants’ names will not appear anywhere in the data; the survey does not ask the participants for their names. The study will not involve the collection of images or audio recordings of subjects.

Right to withdraw from the study: The participants have the right to withdraw from the study at any time without any consequence.

Whom to contact if you have questions about the study: Raniah Bamagain [email protected]

Agreement: By clicking on “I agree” you consent to participate and confirm that you have read the procedure described above.

• I agree

• I disagree, and I would like to EXIT the survey

B.2 The “Cameras’ Covers” Survey

1. Do you use any cover (e.g. sticker, webcam cover) to cover any of your laptop’s, smartphone’s, or tablet’s camera?

• Yes

• No

B.3 The “Filters’ Effects” Survey

1. Which of the following pictures shows no manipulation and makes sense?

• Picture 1

• Picture 2

• Picture 3

• Picture 4

• Picture 5

Figure B.1: Picture 1

Figure B.2: Picture 2

Figure B.3: Picture 3

Figure B.4: Picture 4

Figure B.5: Picture 5

2. Which of the following pictures shows no manipulation and makes sense?

• Picture 1

• Picture 2

• Picture 3

• Picture 4

• Picture 5

Figure B.6: Picture 1

Figure B.7: Picture 2

Figure B.8: Picture 3

Figure B.9: Picture 4

Figure B.10: Picture 5

3. Which filter from the above images do you prefer for protecting your privacy?

• The filter in picture 1

• The filter in picture 2

• The filter in picture 3

• The filter in picture 4

• The filter in picture 5

B.4 The Main Survey (Social Robots)

Demographic Questions

1. What gender do you identify as:

• Male

• Female

• Prefer not to say

2. What is your age?

• 18-24 years old

• 25-34 years old

• 35-44 years old

• 45-54 years old

• 55-64 years old

• 65 years or older

3. What is your country? ———–

4. What is your highest education degree or level?

• Less than high school

• High school, diploma or equivalent

• Some college, no degree

• Associate degree (e.g. AA, AS)

193 • Bachelor’s degree (e.g. BA, BS)

• Master’s degree (e.g. MA, MS, MEd)

• Professional degree (e.g. MD, DDS, DVM)

• Doctorate (e.g. PhD, EdD)

• Other:

5. What is your employment status? (You can choose more than one)

□ Employed for wages

□ Self-employed

□ Out of work and looking for work

□ Out of work but not currently looking for work

□ A student

□ Military

□ Retired

□ Unable to work

□ Other:

General Background and Concerns

In all survey questions, “social robots” means robots that are able to interact with humans or environments, such as NAO, Pepper, Buddy, AIBO, Jibo, etc.

Figure B.11: NAO Robot

6. What types of smart devices do you have, at home or work? (You can choose more than one)

□ Smart speaker (e.g. Amazon Alexa, etc.)

□ Smart security camera or monitoring camera

□ Smart Home Assistance

□ Smartphone

□ Tablet

□ Vacuuming robot (e.g., Roomba)

□ Social robots (e.g. NAO, Pepper, Buddy, AIBO, etc.)

□ Other:

7. Is there a social robot at your home, at your workplace, or near you (e.g. street, neighbor, etc.)?

• Yes (go to section A)

• No (go to section B)

(A) 8. If you have a social robot, where do you use it? (You can choose more than one)

□ Home

□ Work

□ Other:

(B) 8. Would you want to own a social robot?

• Yes

• Perhaps

• No

• Don’t Know

9. If you could have a social robot, where do you want to use it? (You can choose more than one)

□ Home

□ Work

□ Other:

All the participants who went through section A or section B were combined again to answer all the remaining questions in this survey.

9/10. If you have or could have a social robot, what types of tasks do you want the robot to assist you with? (You can choose more than one)

□ Eldercare

□ People with special needs care

□ Childcare

□ Baby monitoring

□ Daily life tasks (e.g. vacuuming, cooking, cleaning, wake you up, etc.)

□ Workstation tasks

□ Industrial tasks

□ Hospital tasks

□ Military

□ Drive car

□ Teaching

□ Entertainment

□ I don’t use or would not use the robot for any of these purposes

11. If you have or could have a social robot in your home, work, or near your environment (e.g., street, neighborhood), what are your most general concerns about the social robot? Assume that the social robot could record and store video and audio records, or that data is being streamed to a control center (e.g., stored on a remote cloud or the robot’s system). (You can choose more than one)

□ Harm to people or property

□ Home security (break-in, theft)

□ Inability to perform tasks well

□ Leakage of sensitive information or data theft

□ Targeted advertising

□ Embarrassing information

□ Recording (video or audio)

□ Responsibility for damage or harm

□ Hackers

□ Not concerned

□ None of the above

12. Privacy is important to me:

Figure B.12: Importance

While answering the following questions (13-16), assume that the social robot could record and store video and audio records which could leak out, or assume that the data is being streamed to a control center.

13. How concerned are you about privacy related to the following Objects:

Figure B.13: Objects

14. How concerned are you about privacy related to the following Information:

Figure B.14: Information

15. How concerned are you about privacy related to the following Locations:

Figure B.15: Locations

16. How concerned are you about privacy related to the following Situations:

Figure B.16: Situations

The Possible Techniques that could Assist in Mitigating the Privacy Violation

Definitions and Explanations:

1. Obstacles: Protecting the private areas by designating them as obstacles.

2. Semantic Mapping: Using semantic information about the environment to build the map (e.g., using labels to describe places, such as a “bedroom” label to indicate the bedroom location).

Figure B.17: From left to right, showing the following filter types: Blurring, Pixelation, Redacting, and Replacing

While answering the following questions (17-22), assume that the social robot could record and store video and audio records which could leak out, or assume that the data is being streamed to a control center.

17. What are the most trusted and comfortable ways that could be used with the robot’s camera while performing a task to protect private information (e.g., a credit card)?

• Applying a type of filter (e.g., Blurring, Pixelation, Redacting, or Replacing, etc.) that changes the appearance of the entire image/scene that is seen by the robot (the whole image with the credit card)

• Using a type of filter (e.g., Blurring, Pixelation, Redacting, or Replacing, etc.) that changes the appearance of only the private part of the image/scene (only the credit card) that is seen by the robot

• Using adaptive filters that could be adjusted according to certain information, objects, locations, situations, users, or times to provide an image/scene that seems real and not modified

• Using an automatic hardware cover (e.g., a human eyelid) that covers the camera temporarily until the robot moves away (e.g., human blinking) to avoid private areas, information, or objects

• Using both adaptive filters and automatic hardware covers

18. What are the most trusted and comfortable ways that could be used with the robot’s camera while performing a task to protect private situations (e.g., changing clothes)?

• Using one type of filter (e.g., Blurring, Pixelation, Redacting, or Replacing, etc.) that changes the appearance of the image/scene that is seen by the robot

• Using adaptive filters that could be adjusted according to certain information, objects, locations, situations, users, or times

• Using an automatic cover (e.g., a human eyelid) that covers the camera temporarily until the robot moves away (e.g., human blinking) to avoid privacy-sensitive situations

• The robot automatically turns and walks away when detecting a privacy-sensitive situation

• By telling the robot specific words, such as “turn your camera off” (close your eyes), or “go away”, etc.

• Using multiple techniques

19. What are the most trusted and comfortable ways that could be used to disable the microphone while performing a task?

• Turn the microphone off by using the on/off button for the microphone

• Using an automatic cover that covers the microphone completely when the robot does not require the microphone to complete the task, and that opens automatically when the user performs a specific action (e.g., sends an alert to the robot to turn the microphone on)

• By telling the robot words such as “turn your microphone off” (close your ears), etc.

• Encrypting all words that the robot hears until the robot hears its wake word (e.g., its name) and starts to respond; if the user stops speaking, the robot will wait for several seconds and then start using the encryption method again

20. What are the most trusted and comfortable ways that could be used to limit the robot’s movement while performing a task to protect private areas?

• By using navigation (e.g., obstacles, or semantic mapping, etc.)

• By telling the robot specific words, such as “go away,” “do not enter,” etc.

• By connecting the robot with environment sensors (e.g., an infrared sensor), so that if the robot detects there is someone in a private area (from the infrared), the robot will not enter

• By using multiple techniques

21. What are the most trusted and comfortable ways that could be used to disable the camera (turn the camera off) after finishing tasks (when the robot does not work)?

• Use the on/off button

• Use the robot’s program

• Turn off the connection to the Internet

• Use an automatic cover that covers the camera completely

22. What are the most trusted and comfortable ways that could be used to secure the microphone after finishing tasks?

• Turn the microphone off by using the on/off switch for the microphone

• Use an automatic cover that covers the microphone completely

• Turn the microphone off by using the robot’s program

• Turn off the connection to the Internet

Additional Features of the Social Robot

23. If you have a social robot at home or at the workplace, do you want to allow every household member or workplace member to use the robot (share the use of the robot)?

• Yes

• Perhaps

• No

• Don’t Know

24. If other members use your social robot, do you prefer to add an authentication feature to it?

• Yes

• Perhaps

• No

• Don’t Know

25. What are the most appropriate authentication methods that the social robot can use to authenticate the authorized user? (You can choose more than one)

□ Password

□ Voice recognition

□ Face recognition

□ Other:

26. If the social robot that you have is used by other members, do you prefer to have the ability to adjust the robot’s configuration to suit each user?

• Yes

• Perhaps

• No

• Don’t Know

27. Do you prefer to have an application to manage and control social robots?

• Yes

• Perhaps

• No

• Don’t Know

28. If the robot is being controlled and managed by using an application, where is the best place to install that application?

• Smartphones

• A small screen that is part of the robot

• A device that comes with the robot

• None of the above

29. Do you think having policies to control the social robots’ permissions and limitations (e.g. permission to access the contacts, locations, or photos) could assist in mitigating the problem of privacy violation?

• Yes

• Perhaps

• No

• Don’t Know

Awareness/Warning System (Robot Transparency)

30. Do you believe that having a warning system that reflects the robot’s transparency could assist in mitigating privacy violations?

• Yes

• Perhaps

• No

• Don’t Know

31. Do you prefer that the social robot send you alerts when it moves toward you?

• Yes

• Perhaps

• No

• Don’t Know

32. If yes, how do you prefer that the robot send you the alert? (You can choose more than one)

□ By informing me verbally (loudly)

□ By sending a message to my smartphone

□ By sending a message to the device that comes with the robot

□ None of the above

33. How do you prefer the social robot to alert users who do not notice its existence (e.g. the robot is hidden behind a sofa, so the user cannot see it)? (You can choose more than one)

□ Using a light

□ Issuing sounds

□ Sending a message to my smartphone

□ Sending a message to the device that comes with the robot

□ None of the above

34. Do you prefer that the social robot turn on a specific color of light around its camera and microphone or other sensors to show when these sensors are on, off, or recording?

• Yes

• Perhaps

• No

• Don’t Know

35. If the social robot is in another room and it can hear you, how do you want the robot to inform you that it can hear you? (You can choose more than one)

□ By sending an alert message to my smartphone

□ By sending an alert message to the device that comes with the robot

□ By informing me verbally (loudly)

□ None of the above

36. How do you prefer the social robot to announce its capability (see, record, listen, etc.)?

• Verbally (e.g. I am recording, hearing, using the camera for seeing, etc.)

• Via the robot’s small screen (e.g. labels appear on the screen while the robot is working to show the robot’s actions)

• By sending a message to the smartphone regularly

• By sending a message to the device that comes with the robot regularly

• By using a specific color of light for each capability
