DEGREE PROJECT IN MECHANICAL ENGINEERING, SECOND CYCLE, 30 CREDITS STOCKHOLM, SWEDEN 2021

The Future of Human-Robot Interaction
A socio-economic Scenario Analysis

BENEDIKT KRIEGER

KTH ROYAL INSTITUTE OF TECHNOLOGY SCHOOL OF INDUSTRIAL ENGINEERING AND MANAGEMENT


Benedikt Krieger
Supervised by Andreas Archenti (KTH) & Thomas Bohné (University of Cambridge)

Abstract – English

Advancing research in an interdisciplinary field such as robotics is a complex undertaking. Seldom is research moved beyond the scope of an individual science so that the challenges from other fields of research are incorporated. Research on Human-Robot Interaction (HRI) is attributed interdisciplinarity and is thus a case in point. Therefore, this thesis aims to integrate engineering, psychosocial, and socio-economic research streams. By doing so, the goal is to reveal and identify underlying questions which are tacitly assumed by either research field but require explicit contemplation and elaboration. The engineering community is currently focusing on collaboration and cooperation (CoCo), as it enables humans and robots to operate together in heterogeneous teams. Human-robot teamwork, in turn, promises to integrate a human's flexibility, dexterity, and creative problem solving with robotic strength, precision, reliability, and efficiency. In contrast, economic considerations revolve around elaborations on technological unemployment and further macroeconomic implications. To unite these streams, this thesis conducts a scoping literature review. Through it, the fundamental design considerations necessary to achieve CoCo are laid out, while pointing towards the currently most promising research direction in each of the design aspects. Both engineering and psychosocial aspects are considered. Then, a scenario analysis with a socio-economic scope is conducted. This serves to widen the understanding of the embedding of HRI as a socio-technical system in socio-economic environments, i.e., companies. Finally, the design aspects of trust, multimodal communication, and the human role in HRI are used to build an understanding of the relation between socio-economic developments in future scenarios and specific design aspects of HRI. It is found that all future scenarios have distinct but also partly similar implications for HRI.
More profoundly though, a number of ethical and open philosophical questions arise from the scenario transfer to HRI. What happens if progress on CoCo is too slow to enable a paradigm shift away from automation through robotics? How much are we willing to subject ourselves to digital technology in order to enable natural interaction with robots? Are we sufficiently knowledgeable about prospective opportunities and risks as we move closer to being able to replicate a considerable number of uniquely human abilities? With these questions, this dissertation aims to contribute to the HRI community by outlining the wider considerations necessary for a human-centric future of HRI. Education is posited as a crucial stepping stone towards enabling such a future.

Abstract – Svenska

Att främja forskning inom ett tvärvetenskapligt område såsom robotik är ett komplext åtagande. Sällan förflyttas forskningen bortom ramen för en enskild vetenskaplig förgrening och utmaningar från andra forskningsområden integreras. Forskning om mänsklig robotinteraktion (HRI) tillskrivs som tvärvetenskaplig och är således ett exempel. Därför syftar denna avhandling till att integrera tekniska, psykosociala men även socioekonomiska forskningsförgreningar. Genom att göra detta är målet att avslöja underliggande frågor som i sin tystnad antas av vartdera forskningsfält, men som uttryckligen kräver kontemplation och utarbetande. Ingenjörssamhället fokuserar för närvarande på samarbete och samverkan (CoCo) eftersom det gör det möjligt för människor och robotar att arbeta tillsammans i heterogena team. Teamarbete mellan människa och robot är i sin tur en lovande möjliggörare för integrering av både människans flexibilitet, skicklighet och kreativa problemlösning med robotens styrka, precision, tillförlitlighet och effektivitet. I kontrast utvecklas ekonomiska överväganden kring utarbetande av teknisk arbetslöshet och vidare makroekonomiska konsekvenser. För att förena dessa förgreningar genomför denna avhandling en litteraturöversikt. Genom den läggs de grundläggande designbesluten som är nödvändiga för att uppnå CoCo, samtidigt som de indikerar den för närvarande mest lovande forskningsriktningen i var och en av designaspekterna. Både ingenjörsmässiga och psykosociala aspekter tas i beaktning. Därefter genomförs en scenarioanalys med en socioekonomisk omfattning. Detta bidrar till ökad förståelse för att omsluta HRI som ett socio-tekniskt system i socioekonomiska miljöer, dvs. företag. Slutligen används designaspekterna tillit, multimodal kommunikation och den mänskliga rollen i HRI för att bygga en förståelse mellan förhållandet av socioekonomisk utveckling och framtida scenarier med specifika designaspekter av HRI. 
Det framgår att alla framtidsscenarier har distinkta men också snarlika konsekvenser för HRI. Mer djupgående uppstår dock ett antal etiska och öppna djupgående filosofiska frågor från scenarioöverföringen till HRI. Vad händer om framstegen på CoCo är för långsam för att möjliggöra ett paradigmskifte bort från automatisering genom robotik? Hur mycket är vi villiga att exponera oss för digital teknik för att möjliggöra naturlig interaktion med robotar? Är vi tillräckligt kunniga om potentiella möjligheter och risker när vi närmar oss att kunna replikera ett stort antal unikt mänskliga förmågor? Med dessa frågor syftar denna avhandling till att bidra till intressegruppen för HRI i bredare överväganden som är nödvändiga för en människocentrerad framtid för HRI. Utbildning framställs som ett viktigt steg för att möjliggöra en sådan framtid.

Table of Contents

List of Figures

Figure 1 Initial Areas of Investigation
Figure 2 Final Areas and Topics of Investigation
Figure 3 The Process of Collaboration [11]
Figure 4 The Process of Cooperation (based on [11])
Figure 5 Ways of communicating Intention. Implicit/Subconscious in grey [11]
Figure 6 Interaction Settings [31]
Figure 7 Different collaborative Safety Modes in HRI [33]
Figure 8 Technology Acceptance Model (TAM) [84]
Figure 9 Unified Theory of Acceptance and Use of Technology [87]
Figure 10 Trust Cycle (based on [49])
Figure 11 Trust Repair Acts (based on [49])
Figure 12 Trust-related Themes (based on [92])
Figure 13 Affinity for Robots plotted on the Robots' Human Likeness [94]
Figure 14 Baxter [107]
Figure 15 Metrics of HRI [114]
Figure 16 Evaluation Metrics for collaborative HRI [115]
Figure 17 Scenario Planning Process (based on [125])
Figure 18 Trend Identification Process
Figure 19 From Social Constructivism to Technological Determinism [195]
Figure 20 Human Roles in HRI
Figure 21 Trust Model [218]
Figure 22 Human-Robot Trust Antecedents (based on [203])
Figure 23 Motivation for Multimodal Communication by Task Context
Figure 24 Motivation for Multimodal Interaction by Human Role

List of Tables

Table 1 Scoping Literature Review along Taxonomy (based on [23])
Table 2 Human-Robot Relationships (based on [13])
Table 3 The "Un-Fitts" List [15]
Table 4 Taxonomy of Levels of Automation [46]
Table 5 Levels of Robot Autonomy (H-Human, R-Robot) [47]
Table 6 Trust Scale for HRC [92]
Table 7 Chosen Scenario Characteristics (based on [126])
Table 8 Entities and the Consequences of Access to advanced Robotics [120]
Table 9 Brainstormed Trends for Producing Industry
Table 10 Driving Forces and associated Trends
Table 11 Trends and Driving Forces Evaluation Result
Table 12 SciFi Works identified as relevant
Table 13 Robot SciFi Works Taxonomy (based on [194])
Table 14 Workforce Emancipation Scenario - Trend Narratives
Table 15 Consumer Emancipation Scenario - Trend Narratives
Table 16 Economics of Replacement Scenario - Trend Narratives
Table 17 Theoretical Literature Review along Taxonomy (based on [23])
Table 18 Literature Processing Steps and related Activities (based on [24])
Table 19 Data Extraction Questions (based on [196])
Table 20 Descriptive Literature Review along Taxonomy (based on [23])
Table 21 Association of Trust-relevant Factors and discussed Trends and Themes
Table 22 KW-, Title- and Abstract-based Screening - Trust
Table 23 KW-, Title- and Abstract-based Screening - Multimodal Communication
Table 24 Motivation for Multimodal Communication in theoretical Literature
Table 25 Publications used in the qualitative Meta-Analysis
Table 26 Chosen Interaction Modalities by Environment
Table 27 Chosen Interaction Modalities by Human Role
Table 28 Association of Trust-relevant Factors and discussed Trends and Themes

List of Abbreviations

AI Artificial Intelligence
CoCo Collaboration and Cooperation
H2R Human-to-Robot
HMI Human Machine Interaction
HRC Human-Robot Collaboration
HRI Human-Robot Interaction
I40 Industry 4.0
IP Intellectual Property
ISO International Organization for Standardization
KW Keyword
OECD Organization for Economic Cooperation and Development
pHRI physical Human-Robot Interaction
R2H Robot-to-Human
RQ Research Question
SciFi Science Fiction
TAM Technology Acceptance Model
UK United Kingdom
US United States
USA United States of America
UTAUT Unified Theory of Acceptance and Use of Technology

List of Appendices

Appendix A Presentation after semi-structured Interviews
Appendix B Cross-wise Comparison Results for the Spread of HRI
Appendix C Cross-wise Comparison Results for the Design of HRI
Appendix D Science Fiction Robots Works Goodreads Analysis
Appendix E Notes of Interview #1
Appendix F Notes of Interview #2
Appendix G Notes of Interview #3
Appendix H Notes of Interview #4
Appendix I Notes of Interview #5

1 Introduction

The industrial world is constantly changing and is currently undergoing its next revolution. Since its first mention in 2011 [1], Industry 4.0 (I40) has garnered much interest in the scientific community. From the very beginning, the concept of I40 regarded the factory holistically and perceived it as a socio-technical system [1], leading to the conceptualization of further components of I40 besides the factory, namely the operator and the product. Those three become "smart", "augmented" or simply "4.0" through the implementation of I40 technologies [2, 3]. Similarly, manufacturing, as a particular aspect of factories, is posited to become smart [4]. Smart manufacturing is described as the autonomous coordination and control of production equipment. Its goal is to achieve high efficiency by combining physical equipment with digital representations of that equipment, so as to gather live information, learn from data, and inform decision-making. Notwithstanding, operational implementation is posited to not yet reach the holistic scope described above. One stated reason for this is the pending impact on the social dimension of such systems. Thus, given its socio-economic nature, I40 has, besides engineering, also inspired social, economic, and managerial research on the respective implications [5-7]. While already embodied in its naming, its paradigm-shifting properties are contested, not least given the limited readiness of related technologies [7]. Nonetheless, expectations are high that once I40 is fully implemented, significant gains in efficiency and flexibility will be achieved [8, 9]. Thus, already now, impacts on industries from technological shifts are starting to be reported and can be projected [7]. One technology eliciting specific interest is robotics. In their Delphi-based scenario analysis of I40, Culot et al. [7] identify robotics as one of four key drivers.
Within robotics, a topic with significant ramifications is the changing relationship between humans and robots [5]. Namely, a shift in research focus and interest in industry towards collaboration is evident [4, 10, 11]. It is also an area within future smart factories where humans are projected to continue to play a crucial role in production processes [4]. The expected benefit of collaboration is the ability to seamlessly leverage robotic skills together with human skills [4, 12-14]. This notion can be traced back to two origins. Firstly, it originates in the design paradigm of Human-Robot Collaboration (HRC), which is centered around allowing robot and human to achieve joint action [11]. Secondly, the idea goes back to Hoffman et al. [15] and Fitts et al. [16], who outline human skills, robot skills, skills for which the human needs the machine, and skills for which the machine needs the human. Thus, in the specific case of

humans interacting with robots, collaboration is thought to be most suited to leveraging the individual skill sets [12, 17]. Due to this design, however, the implementation of HRC, as well as of the more general Human-Robot Interaction (HRI), in the workspace of humans poses an interdisciplinary challenge [10, 11]. The demand this places on several scientific disciplines is outlined by Ogenyi et al. [18]: the robotic system itself, sensors, actuators, task allocation algorithms, and learning methodologies have to be taken into account to allow the development of collaborative HRI systems. According to the authors, further design considerations regard safety aspects, control design, and human factors. Similarly, a comprehensive list of relevant ergonomic aspects of HRC is given by Rücker et al. [19], who group these into physical, cognitive, and organizational ergonomics. Physical ergonomics refers to, among others, design aspects of the robot and safety; cognitive ergonomics includes stress, demographics, and awareness; and organizational ergonomics covers, e.g., communication, workplace layout, and mutual allocation. Thus, in order to appreciate the complexity of HRC, the initial part of this dissertation aims to build an understanding of the necessary design considerations for operating in HRC and, more fundamentally, of how research on HRC is influenced by the more general research on HRI. In a subsequent step, this dissertation applies a scenario analysis methodology. Both HRI and HRC still represent an open field of research with several technical, socio-technical, and psychosocial issues unresolved for a widespread implementation of collaboration [4, 13, 20]. It is acknowledged that the introduction of collaborative robots is accompanied by ethical and social implications [10, 21]. This is also demonstrated by the described interdisciplinary nature of HRC and the interest of the psychosocial human factors community in HRI as well as HRC.
Therefore, it is deemed a promising approach to identify the driving forces that shape our future and their implications for HRI. In this regard, the scenario analysis method represents an excellent tool to systematically draw up various possible future scenarios for a given scope. Based on the initial review of HRC, aspects of specific interest are chosen and analyzed in light of the different possible future scenarios. In this way, this dissertation contributes to the existing body of knowledge by providing a grounded view into the future and its ramifications for the further development of HRI in industrial and manufacturing contexts. This allows for a critical reflection on the technological developments pursued in the HRI and HRC research community and their possible socio-economic implications.

2 Scoping Literature Review

2.1 Motivation and Research Questions

This initial section aims at exhibiting the research efforts undertaken and at structuring the knowledge on the various forms of HRI, including and in particular HRC. Informed by the two heavily cited publications of Goodrich & Schultz [10] and Bauer et al. [11], the following three Research Questions (RQ) guide the scoping literature review:

RQ1: How can robots accommodate the differences of changing human counterparts?
RQ2: How can humans and robots make their intentions and demands understood by their counterpart?
RQ3: How do humans perceive their robotic co-workers?

These questions do not intend to delve into technical details. Rather, they aim to capture the governing concepts related to answering them. Where applicable, this is combined with exhibiting the most promising approach for each presented design aspect by trying to reveal the scientific community's consensus on the promise of different design approaches. Therefore, no extensive section on technical design is to be expected. More specifically, these questions aim at shedding light on the functioning and consequences of collaborative HRI. The notion of their importance is grounded in the importance of human factors for previous research on human-automation interaction and now HRI [10]. Their particular relevance can be further deduced from Fletcher et al.'s [14] reflection on ethical and user-centric considerations for the implementation of HRC. They specifically mention the need for the robot to be able to accommodate the differences of changing operators. Moreover, they plead for the consideration of human factors beyond safety so as to foster the operational success of HRC as a new technology. Due to its design, the authors also anticipate that HRC will affect human psychology in terms of trust and acceptance.

2.2 Methodology

In order to gain insight into the field of HRI, the methodology of a scoping literature review is chosen [22]. This kind of review is suggested to serve well for gaining a broad overview of a topic ahead of more detailed investigation later on, while aiming at comprehensiveness. Due to time constraints, no hard inclusion and exclusion criteria are set and applied to the literature. Nevertheless, this author aims to compensate for this through a rigorous methodological approach, elaborated in more detail below.

The scoping literature review is characterized along the taxonomy suggested by Cooper [23]. The taxonomy consists of six distinct characteristics. The chosen categories for each characteristic in this literature review are shown in Table 1.

Characteristic   Chosen Categories
Focus            Research Outcomes; Theories
Goal             Integration; Identification of Central Issues
Perspective      Neutral
Coverage         Representative; Pivotal
Organization     Conceptual
Audience         General Researchers

Table 1 Scoping Literature Review along Taxonomy (based on [23])

The focus of the present scoping literature review is to include both empirical research outcomes and relevant theories, aiming to integrate both into a coherent analysis of central issues. This author takes a neutral perspective, nonetheless mentioning inconsistencies and settling for one or the other understanding of a matter where it is deemed necessary for the future elaborations. The coverage of the body of knowledge is representative while trying to include the pivotal works. The review is organized along concepts so as to allow for the identification of central themes within those concepts. As this scoping literature review builds the basis for further, more detailed investigation, the audience for now is general researchers. Supported by the works of Goodrich & Schultz [10] and Bauer et al. [11], initial searches in the Google Scholar (https://scholar.google.de) directory lead to the aforementioned conclusion that HRI is an interdisciplinary field. Broad scoping employing forward and backward search based on these two publications leads to the identification of four main areas of investigation, as shown in Figure 1. The identification is targeted at understanding the mechanisms and implications of interaction between robots and humans in industrial operations to extend their respective limited capabilities.

Human Factors
Human-Machine Interaction
Human-Robot Interaction & Collaboration
Industry 4.0-related Concepts

Figure 1 Initial Areas of Investigation

In a second step, the goal is to reveal the pertinent topics within these four areas. Forward and backward search is applied again in order to ensure sufficient coverage of topics [24]. Initial saturation is assumed once new topics deemed relevant for the outlined questions stop emerging.

The subsequent step is geared towards connecting the topics identified under the broad research domains. Furthermore, a comprehensive picture of the current body of knowledge is drawn, aimed at understanding the implications and challenges of operational interaction between humans and robots. This also serves to further validate the completeness of the previous steps. In some instances, the connection between topics is readily derivable, as for example with function allocation and task allocation. For others it is harder, as with, e.g., human-robot teams and human-robot relationships. In the latter case, further forward and backward search in the Google Scholar directory is used to settle inconsistencies. Throughout reading the gathered literature and connecting the dots between it, the general aim made way for the three specific questions outlined ahead of this section. In an intermediate revision step, semi-structured interviews are conducted. Especially given the third question, and in response to the interviews conducted, the four main areas of investigation are revised to replace Industry 4.0-related Concepts with Robotics and its sub-topics of Definition of Robot, the Uncanny Valley, and Anthropomorphism. This is mainly to gain a better understanding of the human's perception of the robot instead of understanding the robot as a technological artefact within the concept of I40. The former is perceived as more relevant for further investigations. As just indicated, after an initial round of topic elaborations, the structure and contents of the scoping review are challenged with the help of four semi-structured interviews with personnel from the strategic technology innovation department of an industrial company. At the beginning of each interview, the interviewees' understanding of HRC as well as its potential benefits and challenges is asked for.
Subsequently, this author's findings thus far are presented in a PowerPoint presentation in order to receive the interviewees' direct response regarding missed areas and potentials for further investigation. The presentation is available in Appendix A. The interviews revealed a need for further concretization in terms of the definition of the concept of robot, the interaction setting, and the evaluation of the interaction. The topics depicted and grouped in Figure 2 are thus the final topics investigated in order to answer the presented RQs.

Human Factors: Trust in Robots, Technology Acceptance, Robot Ethics, Evaluation of Human-Robot Interaction
Human-Machine Interaction: Adaptable/Adaptive Automation, Levels of Automation, Function Allocation
Human-Robot Collaboration: Safety, Human-Robot Relationship, Physical Human-Robot Interaction, Human-Robot Teams, Task Allocation / Planning, Modes of Interaction / Communication
Robotics: Definition of Robot, Anthropomorphism, Uncanny Valley, Levels of Autonomy

Figure 2 Final Areas and Topics of Investigation

The remainder of this scoping literature review is structured to first delineate important concepts and terminology in section 2.3. In section 2.4, salient design considerations for HRC are described. Section 2.5 exhibits current frameworks used to evaluate HRC. Lastly, section 2.6 offers a summary by answering the three guiding questions for this literature review.

2.3 Delineations

2.3.1 Robot

Bekey [25] defines a robot as "a machine that senses, thinks and acts". As he further explicates, this definition includes the ability of the robot to move in its environment, thus including mobility. Bekey recognizes the breadth of his definition. Bekey is also cited by Demir et al. [26], who similarly mention the vagueness of the current definitions as an indication of the need to still find a "satisfactory definition" of robot. By analyzing interviews with roboticists, Cheon & Su [27] find no consistent reference to a definition among the 27 interviewees. Some of them agree with the previously mentioned definition by Bekey. Some have their own, such as: a "Robot doesn't need to [...] [have] a specific shape or function; it's [...] any motion with [a] certain level [of] autonomy, certain level of control [...] and a certain communication form [...] and mobility" (p. 379). In a work on legal implications of the widespread introduction of robots, Richards & Smart [28] define a robot as "a constructed system that displays both physical and mental agency but is not alive in the biological sense". Entirely software-based agents are thus excluded by them. Previous notions of mobility, however, are not addressed by this definition. Mental agency in the authors' definition describes the ability to sense and react upon environmental changes. The authors also note that this agency is to be understood subjectively, meaning a robot only needs to seem to exhibit agency to an outside observer. Contrary to the previous notions of a fixed definition of robot, Wilson [29] argues for a definition that evolves along with the progress of robots' capabilities. In the definition suggested at the time of publication in 2015, a robot is "an artificially created system designed, built, and implemented to perform tasks or services for people". Compared with the other definitions, it also considers mere software to be a robot, since physicality is not a criterion.
In fact, this definition is among the broadest. The International Organization for Standardization (ISO) [30, p. 2] defines a robot as an “actuated mechanism programmable in two or more axes with a degree of autonomy, moving within its environment, to perform intended tasks”. As part of the definition, autonomy is defined by ISO as the “ability to perform intended tasks based on current state and sensing,

without human intervention" (p. 2). ISO adds two notes to the robot definition. For one, a robot is described to include a control system and an according interface. Also, ISO's classification into industrial and service robots does not describe the robot platform any further but rather makes a distinction based on the application. In conclusion of all of the above, the working definition for this dissertation is suggested as follows: A robot is a mechanical device with both physical and mental agency, and the capability to move in its environment to execute tasks. Mental agency herein is to be understood synonymously with the concept of autonomy. Thus, a robot is delineated from machines through the aspect of mental agency and delineated from bots by its physical agency. Despite this broad definition, this dissertation nonetheless limits itself to robots used in the industrial context of producing companies. This does not intend to limit the scope to specific technological platforms such as drones, unmanned ground/aerial vehicles, or industrial cobots. Rather, the limitation focuses this dissertation on the specific socio-economic environment of factories, shop floors, and directly associated office environments. This delineation will apply throughout the entire dissertation.

2.3.2 Human-Robot Relationships

2.3.2.1 Delineation of different Relationships

There are different relationships between humans and robots, which have been conceptualized through successive developments of Yanco & Drury's taxonomy [31]. Whereas most aspects of this taxonomy are relevant from a design perspective, the most influential dimensions for defining the different relationships, as will become evident later on, are time and space, i.e., whether robot and human are working together at the same time and in the same location. In order to comprehensively delineate the different relationships, dimensions such as physical contact between human and robot as well as sharing of resources are added; these cannot be found in the original taxonomy. The following paragraphs will thus introduce all dimensions used to differentiate the relationships. Recent literature with a background in assembly and manufacturing suggests the existence of five distinct relationships [13, 32, 33]. The dimensions along which these authors differentiate the relations are workspace, direct contact, working task, resource, simultaneous process, and sequential process. The least interactive relation is the cell mode, in which there is no direct interaction between robot and human [32, 33]. As there is no direct interaction, Wang et al. [13] do not include it in their differentiation of relationships. Consequently, the most fundamental relationship in which human and robot interact without fencing or barrier is

the co-existence mode [13, 32, 33]. Here, the robot is described as not sharing the workspace with the human directly, as they work on different tasks at the same time, i.e., simultaneously. The next relation is referred to as synchronization [33] or merely interaction [13]. Despite finding it in previous work, Aaltonen et al. [32] decide not to include it in their final suggestion. Both Malik et al. [33] and Wang et al. [13], though, agree on the design of interaction/synchronization: humans and robots share their workspace, direct contact between them is possible, and they work on the same task but in a sequential manner [13, 33]. In cooperation, the workspace is shared as well; humans and robots also share the overall task but do not work on the same object at the same time, resulting in a sequential yet simultaneous process, as both can work in parallel on different objects [13, 32, 33]. The additional feature of cooperation, as opposed to interaction or synchronization, is that human and robot share some of their resources to support each other. Wang et al. [13] list these resources as physical, cognitive, and computational. Lastly, in collaboration, human and robot share the same workspace, there might be contact between them, they simultaneously work on the same task, and they share some of the named resources along the way [13, 32, 33]. Following the majority of previous literature, and given the clear possibility of including an intermediate relationship between co-existence and cooperation according to Wang et al.'s differentiation table [13], the following relationships are suggested, as shown in Table 2: cell, co-existence, synchronization/interaction, cooperation, and collaboration.

Dimension              Cell   Co-existence   Synchronization/Interaction   Cooperation   Collaboration
Open workspace          –          ✓                    ✓                      ✓              ✓
Shared workspace        –          –                    ✓                      ✓              ✓
Direct contact          –          –                    ✓                      –              ✓
Shared working task     –          –                    ✓                      –              ✓
Shared resource         –          –                    –                      ✓              ✓
Simultaneous process    ✓          ✓                    –                      ✓              ✓
Sequential process      –          –                    ✓                      ✓              –

Table 2 Human-Robot Relationships (based on [13])

Despite varying understandings of these terms in the literature [34], this dissertation uses the delineation proposed above due to its adequate granularity and ensuing sharpness. The proclaimed inaccuracy in previous literature can, by contrast, be argued to lie in the insufficient use of dimensions: as mentioned earlier, relationships were initially delineated based on the time and space dimensions alone, hence resulting in vague delineations [34].
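As a minimal illustrative sketch (not part of the thesis), the dimension profiles of Table 2 can be encoded as a lookup table, so that an interaction setting described by its dimensions maps onto one of the five relationships. The dimension names and the exact-match classification rule are this editor's assumptions, introduced purely for illustration.

```python
# Hypothetical encoding of the Table 2 taxonomy: each relationship is
# characterized by the set of dimensions it exhibits.
RELATIONSHIPS = {
    "cell":            {"simultaneous"},
    "co-existence":    {"open_workspace", "simultaneous"},
    "synchronization": {"open_workspace", "shared_workspace", "direct_contact",
                        "shared_task", "sequential"},
    "cooperation":     {"open_workspace", "shared_workspace", "shared_resource",
                        "simultaneous", "sequential"},
    "collaboration":   {"open_workspace", "shared_workspace", "direct_contact",
                        "shared_task", "shared_resource", "simultaneous"},
}

def classify(dimensions):
    """Return the relationship whose dimension profile exactly matches,
    or None if the given setting fits no relationship in the taxonomy."""
    for name, profile in RELATIONSHIPS.items():
        if profile == set(dimensions):
            return name
    return None
```

For instance, a setting with a shared workspace, possible direct contact, a shared task, shared resources, and a simultaneous process would classify as collaboration, whereas a setting exhibiting only a simultaneous process behind a fence would classify as cell mode.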

2.3.2.2 Definition of Human-Robot Collaboration

HRC is generally defined as the relationship between humans and robots aimed at achieving a mutual goal by sharing respective resources and intrinsic skills through joint action

[11, 13, 35]. Thus, in addition to the elaborations made above, which resulted in the description of collaboration, the aspects of goal sharing and joint action are added to specify and define HRC. To that end, in Figure 3, Bauer et al. [11] sketch the process towards reaching a joint action: it originates in a mutual goal and proceeds via having a joint intention, planning the action, and lastly executing the action together.

Figure 3 The Process of Collaboration [11]

2.3.2.3 Definition of Human-Robot Cooperation

As Table 2 above shows, cooperation differs from collaboration in that the robot and the human do not share the task and have no direct contact. Thus, joint action exists only in collaboration. The feature of sharing a common goal, however, is also present in human-robot cooperation [32]. This point is contested by Wang et al. [13], who describe cooperation as featuring two autonomous agents with separate goals. In this dissertation, the description of Aaltonen et al. [32] is followed. Wang et al.'s [13] understanding necessitates the robot and human acting jointly at all times in order to achieve a shared goal. The author of this dissertation argues that this view on sharing a goal is too narrow. Rather, sharing a goal is argued to depend on human and robot sharing their respective resources. The differences can be resolved by differentiating between joint effort and joint action. Whereas cooperation includes sharing a goal and sharing resources between the agents but stops short of executing the action jointly, collaboration features a shared goal, shared resources, and eventually the joint execution of the action, be it with or without contact. Hence, both HRC and human-robot cooperation are to be understood as relationships of collaborative HRI. Thus, human-robot cooperation is defined as the relationship between humans and robots aimed at achieving a mutual goal by sharing respective resources and intrinsic skills through joint effort. Figure 4 depicts the process to reach joint effort, which is the same as the process to reach joint action. However, as will be elaborated in more detail later, this has several design implications for the interaction.
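The distinction drawn here — that cooperation stops at joint effort while collaboration proceeds to joint action — can be sketched as a small decision rule. This is the author's own illustration of the chapter's definitions, with illustrative names:

```python
def collaborative_relationship(shared_goal, shared_resources, joint_action):
    """Classify a collaborative HRI relationship per the definitions above.

    Cooperation: shared goal and shared resources, stopping short of
    executing the action jointly (joint effort only).
    Collaboration: additionally executes the action jointly (joint action).
    """
    if not (shared_goal and shared_resources):
        # Without a shared goal and shared resources, the relationship
        # falls outside collaborative HRI as defined in this chapter.
        return None
    return "collaboration" if joint_action else "cooperation"
```

On this reading, both outcomes presuppose the same cognitive process (mutual goal, joint intention, joint planning); only the final execution stage separates them.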


Figure 4 The Process of Cooperation (based on [11])

2.3.3 Physical Human-Robot Interaction

With robots working alongside humans physically, as seen in all modes other than the cell mode, the research area of physical Human-Robot Interaction (pHRI) mainly focuses on aspects relevant to the physical interaction between humans and robots [36]. Given its overlap with the interaction relationships described above, it also holds relevance for them. Despite the lack of a comprehensive review of pHRI in recent years, the work by De Santis et al. [36] can be understood as a key contribution in the field, as evidenced by its 600+ citations. The main concern, and simultaneously the fundamental requirements for pHRI's feasibility, are safety and dependability: the human must be safe and must be able to depend on the robot [36-38]. Both De Santis et al. [36] and Pervez & Ryu [38] agree on and outline the underlying research needed to achieve both safety and dependability. Safety includes safe design through lightweight construction, passively compliant surfaces, adaptation of the robot's actuation and control mechanisms, and motion planning [36, 38]. On the matter of dependability, Pervez & Ryu [38] limit themselves to availability and reliability as constituents, whereas De Santis et al. [36] take a wider scope and include safety, maintainability, and integrity. This wider scope, though, does not necessarily stand in conflict with the narrower understanding of Pervez & Ryu: whereas Pervez & Ryu [38] treat failure management separately as part of safety, De Santis et al. [36] describe it as part of safety within dependability. And even though integrity and maintainability are not specifically addressed by Pervez & Ryu, integrity's characteristics are nonetheless covered, while maintainability is described as a precondition for achieving availability [36, 38]. Neither group of authors, however, addresses these aspects with close scrutiny.
Due to its characteristics of collaborative HRI as described above, pHRI also fundamentally influences the research on collaborative HRI in the areas of safety and dependability [12, 39]. However, collaborative HRI as specific modes of HRI requires more than those fundamental considerations from pHRI to be achieved.

2.3.4 Reason for Interest in Human-Robot Collaboration

Before coming to the design of HRC, the reason for the scientific interest in this field is reviewed in detail. As mentioned earlier, the fundamental idea of the interaction between human and robot is to allow the optimal leveraging of human and robotic skills [17]. The idea of why a combination of humans and robots is beneficial actually originates in Human-Machine Interaction (HMI) research, in which Fitts et al. [16] reasoned, in general terms, about which tasks humans perform better than machines and vice versa. The five areas where they argue for human superiority are sensory functions, perceptual abilities, flexibility, judgement and selective recall, and reasoning. Machines, on the other hand, are argued to be better than humans in terms of speed and power, routine work, computation, short-term storage, and simultaneous activities. This complementary superiority can thus be used to appropriately divide responsibilities in human-machine systems, as suggested by the authors. It has to be said that Fitts et al. mention the caveat that machine skills would develop further after their publication in 1951 and that their evaluation of superiority might change in the future. Especially in terms of sensory functions and perceptual abilities, advances in computer vision and research and development in sensor technologies provide reasonable doubt of human superiority in these areas [4, 13]. Regardless of the specific content, one main critique of the depiction of the human in Fitts et al.'s argument is its focus on human shortcomings which are to be compensated by machines [15]. In their essay on socio-technical systems, Hoffman et al. [15] thus present a contrasting analysis – an "un-Fitts" list, as shown in Table 3 – and argue that this list represents an argument for a "human-machine system" which focuses on leveraging the abilities of both contributors.

Machines are constrained in that ... and need people to ...:
- Sensitivity to context is low and is ontology-limited → keep them aligned to the context
- Sensitivity to change is low and recognition of anomaly is ontology-limited → keep them stable given the variability and change inherent in the world
- Adaptability to change is low and is ontology-limited → repair their ontologies
- They are not "aware" of the fact that the model of the world is itself in the world → keep the model aligned with the world

People are not limited in that ... yet they create machines to ...:
- Sensitivity to context is high and is knowledge- and attention-driven → help them stay informed of ongoing events
- Sensitivity to change is high and is driven by the recognition of anomaly → help them align and repair their perceptions because they rely on mediated stimuli
- Adaptability to change is high and is goal-driven → affect positive change following situation change
- They are aware of the fact that the model of the world is itself in the world → computationally instantiate their models of the world

Table 3 The "Un-Fitts" List [15]

Building on this understanding of complementary skillsets, Bradshaw et al. [40] analyze seven assumptions often made about the introduction of purportedly autonomous systems. The key "myth", as they refer to these assumptions, is that machines – and robots as a specific case thereof – merely provide support that improves human capabilities; instead, their undeniable help entails an entire change in the human capabilities needed. The authors further argue that autonomous systems can only be successfully implemented in strictly circumscribed instances. Outside these instances a system always interacts with other systems – in many cases a human – leading to interdependence rather than independence [40]. Hence, there is an essential requirement for the development of "natural and effective modes of interaction" (p. 59) in order to leverage both skillsets to their maximum. This is further underscored in [4], where it is referred to as "intuitive design of the interaction and control" (p. 37). This represents a fundamental argument for research on the design of HRI, as the focus shifts from either the robot or the human to how best to design for both agents. In line with this argument is research found under the alias of Human-Robot Teaming/Teams. Three papers notably describe the requirements for robots to transition from tools to team-members [41-43]. This transition is posited to inherently cause the above-mentioned interdependence and to require communication in the team. Bradshaw et al. [42] focus on coordination as essential in achieving human-robot teaming, whereas Ma et al. [41] extend coordination with communication and collaboration as pillars of human-robot teamwork; collaboration, however, is described as dependent on communication and coordination. Hoffman & Breazeal [43] turn the argument around by describing an architecture featuring shared activity, joint intention, common ground and goals, which need to be communicated and coordinated to achieve collaborative human-robot teams. From these elaborations the intertwinement between research on human-robot teams and HRC becomes clear.
Thus, it can be argued that the transition of robots to team-members necessitates the development of natural and effective modes of interaction. This poses the question of which form of relationship teamwork represents, in order to understand which relationship should be regarded as most promising for leveraging both human and robot skills. For this purpose, an analysis of the vocabulary used to describe teamwork is helpful. Bradshaw et al. [42] as well as Hoffman & Breazeal [43] refer to teamwork as a means of achieving joint action. Ma et al. [41] do so as well, though only in the context of collaboration. However, as the authors around Ma describe collaboration as dependent on communication and coordination, i.e., the purportedly equally important pillars of human-robot teamwork, it can be argued that teamwork refers first and foremost to collaboration. As delineated before, based on literature solely focusing on collaboration, joint action is the key differentiator from other modes of interaction. Thus, considering this common use of vocabulary to describe both teamwork and

collaboration, the literature on human-robot teamwork just described also holds specific relevance for HRC and grants special attention to HRC as a form of HRI due to its ability to leverage both human and robot skills in a complementary fashion. However, as a later finding will reveal, this line of argumentation is too narrowly focused on joint action as the enabler for leveraging human and robot skills. While literature on human-robot teams might be closely associated with HRC, this dissertation will follow a different proposition about which interaction relationship is most promising for leveraging both human and robot skills. As opposed to joint action individually, the entire cognitive process of reaching either joint effort, in the case of cooperation, or joint action, in the case of collaboration, is to be understood as enabling the seamless leveraging of both human and robotic skills. The rationale behind this will become clear towards the end of Section 2.4.1.1.

2.3.5 Human-Robot Relationship and Level of Autonomy

This section will initially differentiate the concepts of human-robot relationships and levels of autonomy. However, the level of autonomy is also considered a design aspect of HRI; consequently, later sections will build on the delineations made here. While describing their continuous classification of collaboration levels, in which the relationships described in Section 2.3.2.1 can be understood as discrete steps, Kolbeinsson et al. [44] posit the necessity of differentiating between levels of automation and levels of collaboration, i.e., human-robot relationships. The idea of levels of automation originates in [45], which describes ten distinct levels ranging from entirely manual control over a machine or process to complete control of the machine or process over itself. Over the years many more taxonomies for levels of automation have been introduced, of which Vagia et al. [46] give a comprehensive overview. After mapping the different taxonomies, with their differing numbers of levels, against each other, the authors suggest a taxonomy of their own with eight levels, as depicted in Table 4.

Level 1  Manual control stage: The computer offers no assistance.
Level 2  Decision proposal stage: The computer offers some decisions to the operator; the operator is responsible for deciding and executing.
Level 3  Human decision select stage: The human selects one decision and the computer executes.
Level 4  Computer decision select stage: The computer selects one decision and executes it with human approval.
Level 5  Computer execution and human information stage: The computer executes the selected decision and informs the human.
Level 6  Computer execution and on-call human information stage: The computer executes the selected decision and informs the human only if asked.
Level 7  Computer execution and voluntary information stage: The computer executes the selected decision and informs the human only if it decides to.
Level 8  Autonomous control stage: The computer does everything without notifying the human, except if an error outside the specifications occurs; in that case the computer needs to inform the operator.

Table 4 Taxonomy of Levels of Automation [46]
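The eight levels of Table 4 can be condensed to three questions — who decides, who executes, and when the human is informed. The following sketch is the author's paraphrase of the taxonomy as a lookup table, not code from Vagia et al.; all identifiers are illustrative:

```python
# (decider, executor, when_informed) for each level of Table 4 [46].
# "h" = human, "c" = computer; wording paraphrases the taxonomy above.
LOA = {
    1: ("h", "h", "always"),                     # manual control
    2: ("h", "h", "always"),                     # decision proposal
    3: ("h", "c", "always"),                     # human decision select
    4: ("c", "c", "for approval"),               # computer decision select
    5: ("c", "c", "after execution"),
    6: ("c", "c", "on request"),
    7: ("c", "c", "at computer's discretion"),
    8: ("c", "c", "only on out-of-spec errors"), # autonomous control
}

def human_in_the_loop(level):
    """True if the human still decides, or must approve before execution."""
    decider, _executor, informed = LOA[level]
    return decider == "h" or informed == "for approval"
```

On this encoding, the human remains in the decision loop up to and including Level 4; from Level 5 onward the computer decides and executes, and the levels differ only in when the human is told.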

Building on the existing automation taxonomies for human-machine/computer interaction, Beer et al. [47] suggest an analogous framework for HRI while referring to levels of autonomy. This transition from automation to autonomy is not uncontroversial, as it implies certain changes in design possibilities and requirements [48]; this point will be elaborated upon towards the end of this chapter. In their work, Beer et al. [47] use ten levels to describe the transition from human to robot involvement. The capability dimensions along which these levels are differentiated are sense, plan, and act. Table 5 explains the differences between the levels. These three dimensions are based on literature on levels of automation which describes four basic functions: information acquisition, information analysis, decision and action selection, and action implementation [47].

Manual (Sense H, Plan H, Act H): The human performs all aspects of the task, including sensing the environment, generating plans/options/goals, and implementing processes.
Tele-operation (Sense H/R, Plan H, Act H/R): The robot assists the human with action implementation. However, sensing and planning are allocated to the human. For example, a human may teleoperate a robot but choose to prompt the robot to assist with some aspects of a task (e.g., gripping objects).
Assisted tele-operation (Sense H/R, Plan H, Act H/R): The human assists with all aspects of the task. However, the robot senses the environment and chooses to intervene with the task. For example, if the user navigates the robot too close to an obstacle, the robot will automatically steer to avoid collision.
Batch processing (Sense H/R, Plan H, Act R): Both the human and robot monitor and sense the environment. The human, however, determines the goals and plans of the task. The robot then implements the task.
Decision support (Sense H/R, Plan H/R, Act R): Both the human and robot sense the environment and generate a task plan. However, the human chooses the task plan and commands the robot to implement actions.
Shared control with human initiative (Sense H/R, Plan H/R, Act R): The robot autonomously senses the environment, develops plans and goals, and implements actions. However, the human monitors the robot's progress and may intervene and influence the robot with new goals and plans if the robot is having difficulty.
Shared control with robot initiative (Sense H/R, Plan H/R, Act R): The robot performs all aspects of the task (sense, plan, act). If the robot encounters difficulty, it can prompt the human for assistance in setting new goals and plans.
Executive control (Sense R, Plan H/R, Act R): The human may give an abstract high-level goal (e.g., navigate to a specified location in the environment). The robot autonomously senses the environment, sets the plan, and implements action.
Supervisory control (Sense H/R, Plan R, Act R): The robot performs all aspects of the task, but the human continuously monitors the robot, environment, and task. The human has override capability and may set a new goal and plan; in that case, the autonomy would shift to executive control, shared control, or decision support.
Full autonomy (Sense R, Plan R, Act R): The robot performs all aspects of a task autonomously, without human intervention in sensing, planning, or implementing action.

Table 5 Levels of Robot Autonomy (H - Human, R - Robot) [47]

The dimensions of sense, plan, and act are separately attributed to the human and/or the robot and are explicitly addressed in the framework. However, another dimension, not explicitly delineated in the original work but relevant for further elaborations, is evident in the descriptions: control. Up to the level of shared control with human initiative, the human has the last say on which action to pursue and can stop the robot while continuously monitoring it. In higher levels of robot autonomy, the robot is fully in control of the entire process while the human has limited possibilities to intervene and direct.
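Beer et al.'s allocation of sense, plan, and act can likewise be encoded as data, which makes it easy to query where human and robot overlap. The following is the author's sketch of Table 5; the abbreviated level names and the helper function are illustrative, the allocations are copied from the table:

```python
# (sense, plan, act) allocation per level of robot autonomy, from Table 5 [47].
# "H" = human, "R" = robot, "HR" = shared between both.
LORA = {
    "manual":                               ("H",  "H",  "H"),
    "tele-operation":                       ("HR", "H",  "HR"),
    "assisted tele-operation":              ("HR", "H",  "HR"),
    "batch processing":                     ("HR", "H",  "R"),
    "decision support":                     ("HR", "HR", "R"),
    "shared control (human initiative)":    ("HR", "HR", "R"),
    "shared control (robot initiative)":    ("HR", "HR", "R"),
    "executive control":                    ("R",  "HR", "R"),
    "supervisory control":                  ("HR", "R",  "R"),
    "full autonomy":                        ("R",  "R",  "R"),
}

def shared_primitives(level):
    """Return which of sense/plan/act are shared between human and robot."""
    return {name
            for name, alloc in zip(("sense", "plan", "act"), LORA[level])
            if alloc == "HR"}
```

Queried this way, the table shows that only the extreme levels (manual, full autonomy) share nothing at all, which anticipates the argument below that both extremes are excluded for CoCo.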

The underlying definition of autonomy is proposed as: "The extent to which a robot can sense its environment, plan based on that environment, and act upon that environment with the intent of reaching some task-specific goal (either given to or created by the robot) without external control" [47, p. 77]. One critique of the use of autonomy is that it is often understood as one-dimensional [40]. Bradshaw et al. [40] explain two distinct dimensions of autonomy: self-sufficiency, which refers to the autonomous entity being able to "take care of itself" (p. 54), and self-directedness, which refers to the absence of external control over the autonomous entity. Applying this critique to Beer et al.'s framework, both dimensions can be understood to be addressed: self-sufficiency through the three distinct features of sense, plan, and act; self-directedness through external control. Consequently, and in line with the initial arguments in this section, collaboration and cooperation (CoCo) are fundamentally different from autonomy due to the goal-related aspect of either concept. Whereas CoCo is defined through the robot and human sharing and simultaneously working towards a common goal, robot autonomy is concerned with the independence of the robot allowing it to reach a given or self-created goal. Therefore, autonomy can be understood as a part of HRI design rather than an equivalent. This notion is further underscored for HRI in general by Yanco & Drury [31], who mention level of autonomy as part of their taxonomy. As briefly indicated above, the transition from automation to autonomy is crucial from a human factors and ergonomics perspective. Hancock [48] defines automation as "designed to accomplish a specific set of largely deterministic steps, most often in a repeating pattern in order to achieve one of an envisaged and limited set of pre-defined outcomes" (p. 284).
Autonomous systems, though, are "generative and learn, evolve and permanently change their functional capacities as a result of the input of operational and contextual information. Their actions necessarily become more indeterminate across time" (p. 284). Critically, Beer et al.'s definition of autonomy lacks Hancock's aspect of learning, evolving, and changing functional capacities. As Hancock further explicates, this aspect of autonomy is crucial for his call for reflective contemplation of how humans want to design and constrain such learning, autonomous systems. Despite this disparity, Hancock's admonition holds relevance for HRC and does not require discarding Beer et al.'s framework. Extending Hancock's work, de Visser et al. [49] clearly argue for the consideration of the social and psychological factors involved in humans' acceptance of autonomous systems. Referring to human-autonomy teams, the

necessity of communication transparency within the team is pointed out by them: the amount of transparency and the quality of information cues given by the autonomous system are crucial design aspects in such teams. This is posited to hold relevance for non-learning as well as learning autonomous systems. Consequently, based on the evident analogy between the team development described in de Visser et al.'s work and the human-robot teaming literature, the salience of the following design considerations for HRI is further supported by human factors literature on human interaction with autonomous systems.

2.4 Design of Human-Robot Collaboration

2.4.1 Interaction Design

2.4.1.1 Autonomy

Considering all the descriptions above, an attempt is made to state a design recommendation for a specific level of autonomy which allows the optimal usage of human and robot skills. Goodrich & Schultz [10] posit that what they call peer-to-peer collaboration requires flexible autonomy of the robot. However, given that they provide neither a definition of this form of collaboration nor a delineation from other interaction relationships, no association with HRC as defined above can be assumed. Rather, as the dimensions sense and plan of Beer et al.'s [47] framework clearly allude to the crucial aspects of perception and joint planning, which are both integral parts of CoCo, dynamic autonomy is posited to be required for both collaboration and cooperation. Thus, no specific autonomy requirement for the optimal leveraging of human and robot skills can be stated. Nevertheless, with the help of Bradshaw et al.'s [42] explication of coordination, certain levels of autonomy can be excluded. The authors describe directability as a prerequisite of coordination, referring to the ability of either agent (i.e., human or robot) to respond to directions by the other agent, thus excluding both the fully manual and the fully autonomous level as feasible levels of autonomy for collaboration. However, this stands in contrast with Goodrich & Schultz, who posit that the robot "must [...] be able to flexibly exhibit 'full autonomy'" [10, p. 219]. To resolve this dissonance, the suggested explanation is based on Bradshaw et al. [40]: autonomy needs to be specified in relation to the specific task and its embedding in the interaction scenario. Thus, when speaking of "full autonomy" without the necessary context, as seen in Goodrich & Schultz [10], a universal understanding cannot be assumed. The contradiction in the exhibited literature is therefore resolved by pleading for the "full autonomy" found in Goodrich & Schultz's

work to be meant to apply only in a particularly narrow way, hence leading to a level of autonomy lower than "full autonomy" of the entire system. In order to proceed with elaborations on deliberately varying the levels of autonomy, it is necessary to survey the terminology used in the literature to refer to this process. When excluding the two extreme forms of autonomy, as done here for HRC, the remaining spectrum of autonomy levels is commonly referred to as shared control [31]. As is further explicated there, shared control was initially seen as a static level of autonomy. However, the idea of "adjustable autonomy", introduced by Kortenkamp et al. [50], overcomes the static paradigm and shifts it towards being able to switch between autonomy levels. Yanco & Drury [31] collect further terms referring to the same paradigm shift: sliding scale autonomy and mixed initiative. As becomes evident in the review by Musić & Hirche [51], the literature also often elaborates on adjustable autonomy while referring to control sharing. As described above, however, merely sharing control does not necessitate adaptivity of the levels of autonomy. Musić & Hirche further use the term adjustable control, which is less ambiguous, but under which they seemingly gather literature that again considers control sharing. As further examples [52-54] underscore, a terminological delineation of dynamic autonomy is still to be achieved: these pieces of research refer to dynamic task allocation, mutual adaptation, and adaptive coordination respectively, addressing the dynamic autonomy paradigm without mentioning autonomy. Nonetheless, Chen & Barnes [55] deliver some clarity by differentiating the meanings of the terms adaptive and adjustable autonomy as well as mixed-initiative.
Similar to HMI research, adaptive refers to the robot changing its autonomy, adjustable to the human changing the robot's autonomy, and mixed-initiative describes a system in which both agents have the ability to change the autonomy level [55]. Sliding-scale and dynamic autonomy are, however, not addressed by the authors, leaving a comprehensive terminology consolidation open. Herein, dynamic autonomy and mixed-initiative systems are used as interchangeable terms. As a last note on the choice between different initiation mechanisms of adaptation, the literature indicates the benefits of mixed-initiative systems for human-robot teaming [56]. Turning to dynamic autonomy's design implications, especially the notion originating in the delineation of autonomy and interaction is interesting from a design perspective. Goodrich & Schultz [10] require a robot to be able to switch between different, appropriate autonomy levels for natural and efficient interaction, implying that the robot requires some sort of adaptivity in its independence in order to allow true collaboration to take place. This is supported by empirical findings in the search & rescue field [57], in a proof

of concept in social HRC [43], in general terms for assembly in reference to symbiotic collaboration [13], and even more broadly for collaborative human-robot teams [41]. This implied flexibility of autonomy levels originates in industrial automation research on adaptive/adaptable automation, which is used to accommodate varying operator preferences and conditions in their interaction with automated machines [58, 59]. In the literature on adaptive/adaptable automation, the ways in which a change in automation can be initiated are differentiated, i.e., either by the automated process or by the operator [60]: the automation is referred to as adaptable if the operator initiates the change and as adaptive if the machine does so. Intuitively, both forms of flexibility of automation require some sort of information exchange – be it explicit or implicit – between the agents in order to speak of deliberate instead of random changes by either the machine or the operator. Crandall & Goodrich [61] go even further by stating that for implementing adjustable autonomy it is often desirable to enable interaction between human and robot as naturally as possible. Thus, considerable design implications ensue from collaborative HRI's need for dynamic autonomy. In general terms, the implications of the level of robot autonomy for interaction are two-sided, i.e., two diverging views have been established over the years of research. On one hand, the argument is made that higher levels of autonomy will lead to less need for interaction [47]. On the other hand, higher levels of autonomy are argued to require more sophisticated and generally more interaction between humans and robots. While developing their previously discussed framework, Beer et al. work out the differences between these two distinct notions.
Whereas intervention, as one sub-form of interaction, alludes to the need of the human to control and steer the robot, interaction takes a wider scope by including all forms of communication [31, 47]. Consequently, while the level of autonomy determines the necessity of more or less intervention, the setting in which the robot is employed, rather than the level of autonomy, determines the need for more or less interaction in the form of communication [47]. Thus, higher levels of autonomy require less intervention – given, e.g., that the robot is equipped with adequate capabilities and allows the building of an appropriate trust level – while allowing, but not mandating, higher levels of interaction [47]. Besides these considerations regarding the level of autonomy, the dimensions of the levels of autonomy are also a relevant design aspect. For both sets – sense, plan, act, and control as well as information acquisition, information analysis, decision and action selection, and action implementation – it seems evident that these functions are to be executed consecutively for each individually identifiable task. Therefore, a human and a robot can only work simultaneously if more than one task exists for which those information stages are

executed. Alternatively, if only one individual task is to be fulfilled, the robot and the human work simultaneously on at least one of the information stages, such as both working jointly on information acquisition. However, as the goal is to use robotic and human skills for the execution of these stages, the compatibility of the skill sets with these stages has to be given. In the case of HRC, simultaneous acting is a prerequisite. It is argued here that this contradicts the original promise of introducing HRC. As reported by Michalos et al. [62], robots in industrial settings provide strength, velocity, predictability, repeatability, and precision. When simultaneous acting is designed as in [63] and [64], though, the human is involved in the execution of the action and can, due to the previously elaborated human skills, be expected to dilute the robot's advantages, especially its repeatability, precision, and velocity. If this inherent challenge to mutual object manipulation cannot be overcome, the functions that can be executed mutually are limited to sensing, planning, and controlling. However, joint planning is a prerequisite for cooperation as well as collaboration, and in contrast to joint action, no incompatibility can be found in the literature for other joint activities such as joint planning. Thus, this dissertation makes the argument that both collaboration and cooperation are to be seen as equally enabling the leveraging of human and robotic skills. In conclusion, CoCo is posited to require dynamic autonomy while avoiding the two extreme forms of autonomy. Especially towards the higher levels of autonomy, the human will need to intervene less, while more communication might be needed and the human is enabled to collaborate or cooperate. Moreover, function allocation and design are crucial for achieving CoCo, and attention has to be given to which functions to allocate to whom.
Once these foundations for CoCo are laid, however, the seamlessness of interaction is not only determined by the robot’s autonomy. It is also dependent on various ergonomic factors explicated in the following chapters.
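The initiation mechanisms distinguished above — adjustable (human-initiated), adaptive (robot-initiated), and mixed-initiative (both) — together with the exclusion of the two extreme autonomy levels can be sketched as a guard on autonomy-level changes. This is the author's own minimal illustration, not an implementation from the cited literature; all names and the numeric level bounds are illustrative:

```python
class AutonomyManager:
    """Dynamic autonomy sketch: agents may request level changes, but the
    extreme levels (fully manual, fully autonomous) remain excluded, as
    argued for CoCo above."""

    # Illustrative bounds: exclude level 1 (manual) and level 8 (autonomous).
    MIN_LEVEL, MAX_LEVEL = 2, 7

    def __init__(self, level=4, mode="mixed-initiative"):
        self.level = level
        self.mode = mode  # "adjustable", "adaptive", or "mixed-initiative"

    def request_change(self, initiator, new_level):
        """initiator is 'human' or 'robot'; returns True if change applied."""
        allowed = {"adjustable": {"human"},          # only the human adjusts
                   "adaptive": {"robot"},            # only the robot adapts
                   "mixed-initiative": {"human", "robot"}}[self.mode]
        if initiator in allowed and self.MIN_LEVEL <= new_level <= self.MAX_LEVEL:
            self.level = new_level
            return True
        return False
```

In a mixed-initiative configuration both agents can move the system within the permitted band, whereas an adjustable configuration silently rejects robot-initiated changes; either way, requests for the excluded extremes are refused.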

2.4.1.2 Communication

Taking into account the design implications of dynamic autonomy and the relevance of the human-robot teamwork literature, achieving CoCo hinges on communication and coordination. Hoffman & Breazeal [43] specify communication in the context of HRC as the exchange of intentions, beliefs, desires, and goals in order to ensure shared beliefs between the agents and allow the execution of a shared plan. Underlying the communication are shared goals which serve as a frame for the interpretation and understanding of exchanged information [43]. Ma et al. [41] define communication more broadly as "the expression or exchange of information between two (or more) parties" (p. 651). Lastly, Bradshaw et al. [42] provide a rather technical

understanding, as they refer to direct communication as “complex messages”, e.g., language and expression, and to indirect communication as mediated signals. Similar to Hoffman & Breazeal, Bauer et al. [11] also describe communication as a goal-driven means to mediate intention and achieve a joint action in HRC. Further, Bauer et al. outline the possible ways of communicating while fundamentally differentiating between explicit and implicit/subconscious communication (see Figure 5).

Figure 5 Ways of communicating Intention. Implicit/Subconscious in grey [11]

The authors further explain that explicit communication is used to transfer specific information, whereas implicit communication is more abstract in its content and thus requires cognition to retrieve the information from it [11]. As for the interpretation of communication, Hoffman & Breazeal [43] as well as Bauer et al. [11] point out that humans interpret intentions based on the underlying goal rather than the actual communicative action. Thus, in order to facilitate communication, the robot needs to be equipped with a similar model of the human in order to correctly interpret the human's communicative actions [11]. Another approach to classifying communication is by its direction within HRI. Feed-forward communication enables the robot to recognize human intentions and the environment, whereas feed-backward communication serves the human by increasing their perception [65]. The difference in this classification is that certain ways of communication in Figure 5 can be suitable for both feed-forward and feed-backward communication, whereas each can only be either an explicit or an implicit way of communicating. In terms of terminology, Ajoudani et al. [65] choose to refer to the means of communication as “interaction modalities” and “interfaces”, as they want to differentiate these from the ambiguously used term communication. In their publication, communication refers to the communication infrastructure rather than the act of communicating, leading them to describe the devices which recognize the agents' intentions differently. Thus, in terms of terminology, communication here covers the act of communication, perception, intention, interaction modality, and interaction interface. As for the selection of technologies to achieve a certain kind of communication, Gustavsson et al. [66] introduce parameters along which various communication technologies for industrial

HRC can be evaluated. The key dimensions of the evaluation are extent of usage, flexibility, duration, and an additional classification. Extent of usage denotes the technology's ability to convey different kinds of messages. These types are command, data, highlighting, demonstration, guidance, or option messages. Command messages include specific action requirements. Data messages convey data without real-world information. Highlighting messages indicate real-world position data for the next action to take. Whereas demonstration messages merely convey the workflow of how to execute a task, a guidance message communicates how to move through physical manipulation, exhibiting how to execute a task. Lastly, option messages present alternatives to choose from. Flexibility analyzes the technology's ability to be extended with further features. Evaluation happens on an ordinal scale: not applicable, special use-case, poor, or good flexibility. Duration plainly denotes the time the communication takes given the use of a certain technology. Again, the evaluation scale is ordinal: not applicable, poor, or good duration. Lastly, the additional classification categorizes the technologies into wearable, hand-usage, and limited-coverage technologies. While the vagueness of the evaluation criteria can be criticized, the metrics provide a novel way of comparing the different available communication technologies. This allows the design of flexible and robust communication in HRC: flexibility is achieved by employing complementary technologies and robustness by combining redundant technologies [66]. Taking up the previously mentioned requirement of CoCo to establish natural and effective communication, multimodal interfaces are designed specifically to meet this requirement [10]. By combining several modes of interaction, naturalness can be achieved [67]. Perzanowski et al.
are found to be the first, in 2001, to attempt such multimodal communication by combining human speech and gestures to control a mobile robot. Their work, with over 250 citations, can be regarded as seminal in this field. Given the advancements in recognition algorithms and sensor technology, state-of-the-art multimodal interfaces will come much closer to fulfilling the requirement of natural communication.
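As an illustration of what such a multimodal interface does at its core, the following sketch fuses a recognized utterance with an optional pointing gesture. This is a deliberately minimal toy in the spirit of speech-plus-gesture control, not the design of the cited system; the command names and gesture structure are invented for illustration.

```python
# Minimal sketch of multimodal command fusion: a deictic utterance
# ("go there") is resolved by fusing it with a pointing gesture.
from typing import Optional

def fuse(speech: str, gesture: Optional[dict]) -> Optional[dict]:
    """Fuse a recognized utterance with an optional pointing gesture."""
    if speech == "stop":
        return {"action": "stop"}              # unimodal command, no gesture needed
    if speech == "go there":
        if gesture and gesture.get("type") == "point":
            return {"action": "goto", "target": gesture["target"]}
        return None                            # deictic reference left unresolved
    return None                                # unknown command

assert fuse("stop", None) == {"action": "stop"}
assert fuse("go there", {"type": "point", "target": (2.0, 3.5)}) == {
    "action": "goto", "target": (2.0, 3.5)}
assert fuse("go there", None) is None          # speech alone is ambiguous
```

The sketch shows why multimodality increases naturalness: the spoken channel alone cannot resolve “there”, while the gesture alone carries no action, so only their fusion yields an executable command.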

2.4.1.3 Coordination

As mentioned above, coordination complements communication in achieving CoCo and is of considerable importance. Ma et al. [41] describe coordination as ensuring that several agents function smoothly and work together towards a goal, which requires the efficient planning, organization, and control of the available resources, activities, and responsibilities.

Bradshaw et al. [42] describe the three fundamental requirements for successful coordination as interpredictability, common ground, and directability. The first refers to the ability of each agent to predict the other agents' next action in order to plan its own. Common ground incorporates the need for shared knowledge about the shared history and the individual current state. The last refers to the ability of each agent to execute a command of another agent at any time in the interaction. Once more building on research from HMI, coordination is achieved through function allocation. As Jordan [68] describes function allocation in 1963, in the context of perceiving humans and machines as complementary resources, it addresses coordination's main aspects: planning activities, organizing resources, and controlling responsibilities. In the sense of collaborative function allocation, Jordan points out that the underlying understanding of the human's and the machine's capabilities has to distance itself from the long-established interpretation of Fitts et al.'s so-called Men-are-better-at/Machines-are-better-at list. Rather, function allocation should proceed under the premise of how the resources both machine and human provide can best be combined to accomplish a task. Hancock & Scallen [69] are more specific in their critique of Fitts et al.'s dichotomous list, arguing that function allocation in reality is dynamic, as capabilities change constantly, whereas the list is static, resulting in its inadequacy for function allocation. Subsequently, Sheridan [70] takes a critical perspective on the evolution of function allocation. In his publication he argues that humans have embodied in technology whatever was feasible, leading to ever more capable technology. Due to this intrinsic drive, it is stated that only the concepts unattainable by technology remain to be allocated to a human, namely setting an objective function and the appropriate use of creativity.
At the same time, critique aside, the actual list in Fitts et al.'s publication remains relevant in current function allocation research [71]. Fitts et al.'s list is valued as a scientific theory, and it mentions aspects of function allocation that had, at the time, not yet manifested. Concluding on function allocation in the area of HMI, Cummings [72] differentiates three broad categories of tasks: skill-based, rule-based, and knowledge-based or expertise-based. She argues that skill-based tasks are the area of automation superiority, as these tasks can be characterized by precise feedback loops and known intended outcomes. In rule-based tasks, the amount of uncertainty determines the potential of automation. In order to manage uncertainty, the collaboration between rule-based machines and uncertainty-adaptable humans is well-suited and potentially beneficial. Similarly, knowledge-based tasks lend themselves to human-computer collaboration due to their high uncertainty, yet the computational support machines can provide is limited. The author concludes her analysis with questions to consider in the design of function allocation.

These questions are, for one, concerned with how accurately machines are able to sense their environment, thus leading to accurate or erroneous computations. For another, they concern the degree of uncertainty in the environment, asking whether humans are apt to absorb sensor or reasoning shortcomings, and whether automation can help reduce uncertainty. Lastly, it is asked whether the human can improve automation's reasoning. From this short outline of the function allocation literature of the past decades, the transition from Fitts et al.'s fundamental question of superiority towards questions concerning the most effective CoCo between humans and machines becomes visible. Moreover, the main questions concerning CoCo address cognitive processes rather than physical ones. In the absence of a comprehensive literature review, the application of function allocation in HRI is exemplified here with a selection of salient publications concerned with function allocation in HRI. With regard to function allocation on a meta-level, Beer et al. [47] describe their suggested framework of levels of robot autonomy as a way to assign functions to either the human or the robot. These functions are sensing, planning, acting, and control. While their framework can be seen to give an understanding of the functions to allocate, it falls short of providing, as suggested by the literature on function allocation in HMI, design principles and guidelines for when to allocate certain functions to either the human or the robot. Thus, a major stream of publications is concerned with the operational allocation of tasks in HRI scenarios (e.g., [73], [74], [75] or [76]). These examples show the effort being undertaken to incorporate human factors such as capabilities [73] and trust [76] into task allocation, as well as to make task allocation dynamic [74].
However, similar to function allocation in HMI, task allocation and the decision on the level of robot autonomy resemble various heuristics rather than a joint effort towards an objectively desirable function allocation. Drawing a conclusion for HRI from the function allocation literature: the more uncertainty a process is exposed to, the more promising a collaborative approach to HRI, be it as actual collaboration or as cooperation. The functions are hence to be allocated so that the robot handles those aspects of the goal exposed to low or no uncertainty, whereas the human handles those exposed to high uncertainty. As seen in the three previous chapters on interaction design, research on HRI is heavily influenced by previous research on HMI and, more generally, on the interaction between humans and automated systems. Parasuraman et al. [77] point towards three meta-components of human-centered automation design, namely functionality, interface, and adaptivity. From the authors' explication of these components it becomes clear that adaptive automation translates to dynamic autonomy, interface to communication, and functionality to coordination for the

context of HRI. Thus, the previous chapters can be regarded as fully addressing the body of research from automation and HMI relevant for HRI. However, due to robots' embodiment and physicality, one more interaction design consideration and several further technical and acceptance design considerations are relevant.
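The allocation heuristic distilled in this chapter, routing low-uncertainty aspects to the robot and high-uncertainty aspects to the human, can be condensed into a toy rule over Beer et al.'s four functions. The threshold and labels below are illustrative assumptions, not values from the cited works.

```python
# Toy uncertainty-based function allocation for the four functions of
# Beer et al. (sensing, planning, acting, control). The threshold is illustrative.

def allocate(uncertainty: dict[str, float], threshold: float = 0.5) -> dict[str, str]:
    """Assign each function to the robot (low uncertainty) or the human."""
    return {fn: ("human" if u >= threshold else "robot")
            for fn, u in uncertainty.items()}

# Example: a structured pick task with uncertain planning and control
plan = allocate({"sensing": 0.1, "planning": 0.8, "acting": 0.2, "control": 0.6})
assert plan == {"sensing": "robot", "planning": "human",
                "acting": "robot", "control": "human"}
```

In practice the uncertainty estimates would themselves change at runtime, which is precisely why such allocation has to be dynamic rather than a one-off design decision.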

2.4.1.4 Interaction Setting

Initially introduced by Yanco & Drury [31] and adapted in [78] and [33], the level of shared interaction between teams refers to four basic interaction settings which can be further subdivided. One robot can interact with one human, one robot can interact with several humans, one human can interact with several robots, and several robots can interact with several humans [31]. The ratio of robots to humans can thus be expressed in four forms: 1:1, 1:n, m:1, and m:n. In the three cases involving more than one agent on either side, Yanco & Drury [31] differentiate further: either the agents receive and act upon commands from the other side independently, or they receive them as a team and coordinate the execution among themselves. This results in eight interaction settings in total, as can be seen in Figure 6.

Figure 6 Interaction Settings [31]

Together with dynamic autonomy, communication, and coordination, the interaction setting describes the interaction design for a given HRI application. Whereas the examination of literature yields specific design recommendations for all other interaction design aspects, any interaction setting can accommodate the aforementioned recommendations with mere operational adaptations.

2.4.2 Technical Design - Safety

For the introduction of robots into the workspace of humans, safety is of crucial importance, as was outlined in the summary on pHRI. Thus, even though safety in itself does not provide an understanding of the three main questions, ensuring safety is essential for CoCo to be attainable. The corresponding safety standards provide the framework within which humans and robots can work together. Villani et al. [12] summarize the general safety regulations into three types: general safety standards for machinery (ISO 12100, IEC 61508), specific safety aspects

for certain components and safeguarding aspects (e.g., ISO 13849-1 and IEC 62061 for PLCs, ISO 13850 for safeguarding), as well as safety measures for specific machinery like robots (e.g., ISO/TS 15066 for collaborative robot operation). Besides these safety standards, another two standards differentiate four collaborative modes to attain safety specifically in open workspace settings [79, 80]. The four collaborative modes can be seen in Figure 7.
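One of these modes, speed and separation monitoring, reduces the robot's permitted speed as the human approaches. The following toy governor illustrates the principle only; the distances and speeds are invented, and this is not the normative protective-distance computation of ISO/TS 15066.

```python
# Simplified illustration of speed-and-separation monitoring: the robot's
# permitted speed shrinks as the human approaches and drops to zero inside
# a protective distance. All numbers are illustrative defaults.

def permitted_speed(separation_m: float,
                    v_max: float = 1.0,
                    protective_m: float = 0.5,
                    full_speed_m: float = 2.0) -> float:
    """Return the allowed tool speed (m/s) for a given human-robot distance."""
    if separation_m <= protective_m:
        return 0.0                      # protective stop: human too close
    if separation_m >= full_speed_m:
        return v_max                    # human far away: full speed allowed
    # linear ramp between the protective and the full-speed distance
    frac = (separation_m - protective_m) / (full_speed_m - protective_m)
    return v_max * frac

assert permitted_speed(0.3) == 0.0
assert permitted_speed(2.5) == 1.0
assert abs(permitted_speed(1.25) - 0.5) < 1e-9
```

The normative computation additionally accounts for human walking speed, sensor latency, and braking distance; the ramp above only conveys the monotone speed-distance relationship described in the text.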

Figure 7 Different collaborative Safety Modes in HRI [33]

Safety-monitored stop is employed if the human and the robot do not permanently have to work in a shared workspace [33]. In case the human has to work in the shared workspace, the robot is stopped through a safety stop, thus always allowing only one party to move and operate in the shared workspace [33]. Through hand guiding, the human can navigate the robot to the desired coordinates [33]. This mode has to be activated through a special button on the controller [33]. However, as it allows direct contact between the human and the robot, the robot has to be equipped with stop-monitoring and speed-monitoring functions in order to stop when hand guiding is activated and to control its speed while the human drags the robotic arm [33]. The speed and separation monitoring mode allows the human and the robot to share their workspaces and be active in them simultaneously [33]. The speed of the robot's movements is reduced as the distance between human and robot decreases [33]. Lastly, in power and force limiting mode, the power and torque in the robot's joints are controlled so that both can work in the same workspace simultaneously [33]. Power and torque are not adapted according to the distance of the human, but so that the impact force during a collision between human and robot is minimized [33]. Thus, for robots to be able to share the workspace with humans, they require certain embedded or retrofitted sensors [12]. The current generation of cobots (short for collaborative robots) is stated to be equipped with the required sensors and is thus eligible for open-workspace interaction with humans ex factory. This makes CoCo highly accessible and consequently further adds to the scientific interest in it. As opposed to any ascription of the above-described collaborative safety modes to the specific relationships introduced earlier,

Vicentini [34] suggests allocating safety modes after a dedicated risk assessment along two dimensions: the frequency of human access to the shared workspace and the kinetic energy of the robot. This notion is in line with Malik & Bilberg's [33] suggested framework, where safety modes are likewise not explicitly ascribed to specific relationships. As a concept, cobots were first published by Colgate et al. in 1996 [81]. The initial concept of cobots is to provide passive support to the operator for collaboration on tasks: the operator provides the movement of the part, whereas the cobot guides the motion of the part. Thus, it becomes clear that from the very beginning of research on cobots up until most recently, this technological platform has promised to allow the leveraging of robotic and human capabilities [82, 83].

2.4.3 Acceptance Design

2.4.3.1 Technology Acceptance

Implementing new technology in human spaces not only causes technological challenges, as indicated above, but also concerns social and human factors, which have been conceptualized in the Technology Acceptance Model (TAM) [84], as depicted in Figure 8.

Figure 8 Technology Acceptance Model (TAM) [84]

Several extensions have since tried to describe the external variables which stand at the beginning of acceptance. Previous experience, output quality of the technology, subjective norms, enjoyment, and trust are identified as some of the most influential external variables [85, 86]. In an attempt to consolidate the research modeling human behavior and attitudes towards technology, among it the TAM, Venkatesh et al. [87] construct and validate their Unified Theory of Acceptance and Use of Technology (UTAUT). UTAUT is stated to explain up to 70 % of the variance in intention, thus posing the question of whether the practical limits of theoretically explaining individual intentions have been reached. Figure 9 depicts the authors' validated model.


Figure 9 Unified Theory of Acceptance and Use of Technology [87]

To determine the applicability of the various acceptance models to HRI, Bröhl et al. [88] conducted a cross-cultural study covering, among other countries, the People's Republic of China and the United States of America (USA). Overall, the original TAM as well as UTAUT are found to be applicable, however with considerable differences in the influence of certain factors across cultures. The starkest differences are determined in job relevance, technology affinity, social implications, data protection, and ethical implications. Previous studies suggest cultural dimensions, such as being a high- or low-context culture, as the cause of such differences. The pattern of differences found here, however, rather suggests that factors such as the spread of automation in factories, the pervasion of technology outside work, habituation to robots, and the strength of data protection laws influence HRI acceptance across cultures. These factors, though, are posited to be hardly influenceable in the short term by companies aiming to introduce robotics. For short-term considerations the authors therefore suggest examining adjustment variables related to ergonomics, such as perceived safety, perceived enjoyment, and occupational safety, to positively influence HRI acceptance. It can thus be deduced that, in order to investigate the acceptance of robots in shared workplaces, a closer investigation of the constituents of TAM and its extensions is promising. In this vein, Charalambous et al. [89] design a “human factors roadmap for successful implementation of industrial HRC” (p. 195). In it, trust as an acceptance component plays a crucial role. As an initial step, the authors suggest training operators to calibrate their initial trust level and subsequently continuously allowing the operators to adjust their trust level through engagement and empowerment.

2.4.3.2 Trust

As indicated, trust is a central theme in the study of technology acceptance. This is further supported by Freedy et al. [90], who argue for trust, besides communication, team leadership, and supporting behavior, to be critical for successful mixed-initiative systems. Thus,

the ability to develop trust also holds particular relevance for collaborative HRI scenarios which, as elaborated above, are mixed-initiative. Trust, however, also has an operational importance for the interaction between humans and robots. Trusting the robot excessively causes a loss in situational awareness, decreasing the human's ability to intervene in case of failure [91]. Conversely, with little trust, the human maintains higher situational awareness by monitoring the robot while neglecting their own tasks. Thus, correctly calibrating trust plays a crucial role in the design of HRI. Building trust is a process described by de Visser et al. [49], as can be seen in Figure 10. Trust can grow and deteriorate depending on the actions taken by the agents in a team. In this case the research focuses on the robot's actions affecting the human's trust in the robot agent in general settings.

Figure 10 Trust Cycle (based on [49])

In the beginning, the relationship is affected by some kind of act [49]. These acts can be costly or beneficial. Costly acts include failures, unintended contact, inefficiency, and miscommunications. Beneficial acts, to the contrary, include good performance, politeness, and enjoyable chit-chat. Subsequently, regulation acts are suggested to repair costly or dampen beneficial acts. Lastly, the effect of these regulation acts is mediated by the specific operator's trust perception and their inclination towards trust reconciliation. To elaborate further on the repair of trust, the authors list the various repair acts suggested by the literature. These 13 measures, as shown in Figure 11, range from the robot blaming the human, through ignoring the deterioration, downplaying its significance, and acknowledging its costly conduct, to the robot promising to improve its conduct.
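This cycle of relationship acts and regulation acts can be caricatured as a scalar trust update. The following is a purely illustrative model with invented magnitudes; the cited work does not quantify trust this way.

```python
# Toy model of the trust cycle: relationship acts move a scalar trust level
# in [0, 1], and regulation acts repair costly or dampen beneficial effects.

def update_trust(trust: float, effect: float, regulated: bool) -> float:
    """Apply a relationship act's effect, optionally moderated by a regulation act."""
    if regulated:
        effect *= 0.5          # repair halves a loss, dampening halves a gain
    return min(1.0, max(0.0, trust + effect))

t = 0.6
t = update_trust(t, -0.4, regulated=True)    # failure followed by an apology
assert abs(t - 0.4) < 1e-9
t = update_trust(t, +0.2, regulated=False)   # good performance, unregulated
assert abs(t - 0.6) < 1e-9
```

Even this caricature exposes the calibration problem from the text: the operator-specific mediation (here collapsed into a single regulation factor) determines whether repair acts actually restore trust.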

Figure 11 Trust Repair Acts (based on [49]): Blame, Gaslight, Deny, Trump, Ignore, Recognize, Downgrade, Anthropomorphize, Emotionally Regulate, Empathise, Apologize, Explain, Promise

To consolidate research into the factors influencing trust, Hancock et al. [91] gather empirical studies and extract three main themes affecting human trust in robots: human-related, robot-related, and environmental factors. Human-related factors differentiate between personal characteristics, like age, personality traits, or attitude towards robots, and ability-based factors, such as attentional capacity, operator workload, and situational awareness. Robot-related factors are split into performance-related and attribute-related factors. The former include, e.g., behavior, predictability, and dependability, whereas the latter include, e.g., proximity, adaptability, and robot type. Lastly, environmental factors are grouped under team collaboration, e.g., in-group membership, culture, and communication, as well as tasking, e.g., task type, task complexity, and physical environment. The analysis of the included studies subsequently leads the authors to the conclusion that, firstly, robot-related factors influence human trust the most and, secondly, among those, the ones attributed to performance have the highest influence, i.e., behavior, dependability, reliability of the robot, predictability, level of automation, failure rates, false alarms, and transparency. In order to operationalize the measurement of trust and advance insights into trust in industrial HRI settings, Charalambous et al. [92] conduct a two-staged study with shop-floor operators. Initially, an exploratory study is used to find trust-related themes in industrial contexts. The themes found in the study, grouped accordingly, can be seen in Figure 12. It is important to note that both the themes' presence and their respective importance appear to be in line with Hancock et al.'s findings.

The trust-related themes comprise Robot Themes (e.g., the robot's motion, safety features, reliability, size, and appearance), Human Themes (e.g., prior experiences with robots and mental models of robots), and External Themes (e.g., task complexity).

Figure 12 Trust-related Themes (based on [92])

Subsequently, the scale developed by the authors reflects these trust-related themes according to their relevance [92]. Their relevance is determined with the help of a questionnaire conducted after a second round of experiments. In conclusion, and in line with Charalambous

et al.'s roadmap to HRC implementation, the authors point to the practical implications of their scale. It can be used as a tool during HRC implementation as well as a tool to determine the need for further training. Table 6 lists the scale's elements.

Robot's motion and pick-up speed: “The way the robot moved made me uncomfortable”; “The speed at which the gripper picked up and released the components made me uneasy”.
Safe cooperation: “I trusted that the robot was safe to cooperate with”; “I was comfortable the robot would not hurt me”; “The size of the robot did not intimidate me”; “I felt safe interacting with the robot”.
Robot and gripper reliability: “I knew the gripper would not drop the components”; “The robot gripper did not look reliable”; “The gripper seemed like it could be trusted”; “I felt I could rely on the robot to do what it was supposed to do”.

Table 6 Trust Scale for HRC [92]

2.4.3.3 The Uncanny Valley

In terms of appearance, three categories are to be differentiated: robots, humanoids, and androids. Androids are robots deliberately designed to resemble humans in appearance and interaction, whereas humanoids feature human-like extremities without the human look in appearance and interaction [93]. Mere robots, then, are to be understood as every other form of embodiment. As part of their taxonomy, Yanco & Drury [31] introduce a similar aspect named robot morphology. It also refers to the robot's appearance, but instead distinguishes anthropomorphic, zoomorphic, and functional appearance. Whereas anthropomorphic appearance can refer to either android or humanoid, and functional simply to robot, zoomorphic has no pendant in the above-mentioned categories. Functional robots thus lack any human feature in terms of appearance and interaction. Depending on the robot's use-case, zoomorphic might be a relevant form of appearance. However, in the following elaborations, the three categories android, humanoid, and robot, i.e., functional, are relied upon. In one way of approaching acceptance, Mori [94] posits that humans' affinity towards robots is related to the appearance of the robot. Despite the lack of connection to technology acceptance studies, Mori's publication has gained and continues to gain interest among researchers aiming to understand humans' responses to different robot designs, as is further elaborated later in this section. In order to understand the relationship between the two variables, Mori ponders a phenomenon first described by him as the “uncanny valley”. Figure 13 shows the shape of the curve describing the relationship between appearance and affinity. The curve exhibits such an “uncanny valley” where the appearance becomes too human-like and in consequence cannot fulfill the expectations in terms of human skills which such an appearance elicits. As Mori

further explains, the amplitude of the curve increases once the robot is set in motion. The human-like appearance is put to the test of human-like movement, which reveals and possibly betrays the human-like appearance, eliciting human discomfort.

Figure 13 Affinity for Robots plotted on the Robots' Human Likeness [94]

However, the existence of the described and depicted curve is questionable: despite its plausibility, the evidence remains anecdotal, as thus far no clear empirical support has been collected [95]. That review of the uncanny valley reveals that, on the one hand, the methodologies employed in empirical corroboration experiments are not consistent and, on the other hand, the definitions of terms imbued in the concept are not consistent, leading to a patchy array of experiments ineligible for consolidation. Consequently, a new explanation for the presence of an uncanny valley and a possible way of experimental validation are presented. With this approach, Wang & Rochat [96] are then able to empirically support Mori's original claim of the existence of an uncanny valley. Nonetheless, according to the two authors, uncertainties in the methodology persist, besides the remaining lack of a description of the precise shape of the valley's curve. Moreover, the hypothesis of increased amplitude due to moving robot representations is not investigated, owing to the exclusive focus on static pictures in the experiments. With regard to industrial robotics, the study by Wang & Rochat [96] nonetheless does not hold much relevance, as they focus on the part of the curve close to the uncanny valley instead of attempting a holistic plotting of Mori's curve. In this scoping literature review, a lack of comparative investigations into the left part of Mori's curve is observed. What seems especially interesting is whether functional robots, cobots, or humanoid cobots elicit more affinity.

2.4.3.4 Anthropomorphism

In order to gain further understanding of aspects of acceptance in industrial HRI settings, an analysis of a concept closely related to human-likeness, namely anthropomorphism, is suggested. Anthropomorphism describes the attribution of human or personal characteristics to non-human entities [95]. Despite several studies demonstrating anthropomorphism's positive effects on, e.g., trust [91] and perceived learning capability [97], its relevance for HRI remains controversial. The study by Stadler et al. [97] lacks the scientific rigor to permit reliable extrapolations. Moreover, Goetz et al. [98], although limited to service robots and also limited in statistical power, provide evidence for the salience of anthropomorphism appropriate to the task at hand, rather than for benefits of anthropomorphism in every scenario in general. In support of this, the previously mentioned meta-analysis by Hancock et al. [91] finds that performance is more relevant than anthropomorphism for a human building trust in a robot. For a more detailed understanding, the analysis can be split according to what elicits anthropomorphism. Factors which lead to anthropomorphizing robots include autonomy, moral accountability, reciprocity, communication, predictability, and perceived emotions [99]. Generally, two main areas are considered to elicit anthropomorphism: the robot's physical shape and the robot's way of interacting and conducting itself with the human [100]. As for the robot's bodily design, the above-described skepticism is supported by a study by Busch et al. [101] which indicates that human physical features (in their case a head) cause operators to expect social cues from the robot. When those cues remain absent, this might cause lower usability scores. Furthermore, in a study conducted by Weiss et al. [102], participants prefer functionally designed robots over humanoid robots.
Participants want the robot to be as efficient as possible, implying that they view robots as a tool rather than a co-worker. Similarly, Elprama et al. [103] report operators' doubts regarding the functionality of the humanoid cobot Baxter (see Figure 14) used in their experiment. This occurs despite the operators' knowledge of robots' capabilities in general. Contrarily, Kolbeinsson et al. [104] report that the role of the robot is changing from tool towards assistant, partner, or teammate. However, their statement is not based on empirical evidence but rather deduced from aspirations to achieve collaboration voiced in theoretical publications [105, 106]. Nonetheless, in a study by Sauppé & Mutlu [107], operators judge the human features of the humanoid robot used as eliciting a feeling of safety and predictability. Moreover, the existence of eyes on the robot prompts one operator to ascribe intelligence to the robot.


Figure 14 Humanoid Robot Baxter [107]

As for social cues in the form of interaction, rather than mere physical appearance, studies are less ambiguous regarding the ramifications for operators' responses. Again, in the study by Sauppé & Mutlu [107], the social cues provided by the cobot Baxter are positively perceived by operators. Two operators mention that they feel they reach an understanding with the robot; however, the same operators also state they would like the robot to be able to articulate itself in order to enable richer interaction. Generally, operators ascribe human characteristics, e.g., personality and intent, to the robot. In another study, comparing interaction with Baxter once providing social cues and once providing none, Elprama et al. [108] find that factory workers appreciate social cues. However, their initial hypothesis that more social cues would ultimately lead to a higher intention of use can only be partially supported. The authors posit that the limited corroboration is caused by the small sample size and the low number of employed social cues, namely nodding of Baxter's head, static eye gaze, and movement of the head towards the manipulated object. Zanchettin et al. [109] investigate physiological responses of students and university staff to three kinds of trajectories (one of which imitates human arm trajectories) executed by a two-armed cobot. It is found that the human-like trajectory reduces participants' stress levels with statistical significance. However, the publication stops at determining the stress level and does not discuss any ramifications for the participants' acceptance of the cobot. Lastly, Si & McDaniel [110] investigate participants' perceived comfort in being close to the robot, comfort in touching the robot, and perceived friendliness. To investigate those variables, they use a Baxter cobot in conjunction with facial expression, speech, and gestures, tested with 43 undergraduate students.
All variables are rated more positively in the "meaningful gestures" condition, compared to the "no gestures", "arbitrary gestures", and "meaningful gestures with face" conditions. Facial expression is not found to significantly enhance the perception of the robot's gestures. To the contrary, in some cases the human-like face seems to negatively affect the participants' trust and the robot's perceived friendliness. As opposed to the previously mentioned studies, where either only anthropomorphic appearance or only anthropomorphic behavior is analyzed, the study by Weistroffer et al. [111] investigates both. Three different industrial robots with varying degrees of anthropomorphic appearance, up to a two-armed cobot without a head, are used, and the robots are moved in either a machine-like or a human-like manner. The authors use the Godspeed questionnaire as well as physiological measures to gather participants' data. The two anthropomorphic robots are perceived more positively, without a significant difference between them. The machine-like movement is seen as safer and leads to ascribing the robot higher competence. However, if human-like movements are executed by the most anthropomorphic robot, the combination of the two is perceived negatively. It has to be said that all but one study use small sample sizes featuring no more than 20 participants. Also, the samples are not always representative of the operators of industrial robots. Furthermore, all experiments use robots with some human features (a head, two arms, or both), and in four of nine studies the cobot Baxter is used. Doubts about the benefits of anthropomorphic design aspects can thus not be based on a direct comparison of a humanoid and a functional robot, but rather rest on operators' verbal responses after experiments and on indications found in trust-related literature. Moreover, consolidation is hardly possible due to the studies' lack of methodological consistency, whether with the TAM (with the exception of [108], and with modifications in [111]) or amongst each other. Neither a uniform collection of operators' responses, such as the Godspeed questionnaire, nor concepts such as the TAM are used across the reviewed studies to evaluate operators' responses.
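To make concrete what a TAM-based evaluation involves, the model is typically operationalized by averaging Likert-scale questionnaire items per construct. The following minimal sketch illustrates only this aggregation step; the item keys, the 7-point scale, and the sample answers are illustrative assumptions, not data from any of the reviewed studies:

```python
from statistics import mean

# Map each TAM construct to its questionnaire item keys. The construct
# names follow the classic TAM (perceived usefulness, perceived ease of
# use); the item keys and 1-7 Likert scale are illustrative assumptions.
CONSTRUCTS = {
    "perceived_usefulness": ["pu1", "pu2", "pu3"],
    "perceived_ease_of_use": ["peou1", "peou2", "peou3"],
}

def tam_scores(responses):
    """Average the Likert items belonging to each TAM construct."""
    return {
        construct: mean(responses[item] for item in items)
        for construct, items in CONSTRUCTS.items()
    }

# One participant's (fabricated) answers after interacting with a cobot.
participant = {"pu1": 6, "pu2": 5, "pu3": 6, "peou1": 4, "peou2": 5, "peou3": 4}
scores = tam_scores(participant)
```

In practice, validated item wordings and reliability checks (e.g., Cronbach's alpha) would accompany such an aggregation before construct scores are compared across conditions.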
However, in their systematic literature review on the topic, Savela et al. [112] suggest the TAM as a helpful tool to evaluate human responses and judge the acceptance of new technology. Thus, in conclusion, anthropomorphism of robots employed in industry and on shop floors remains an interesting research area. For now, no design recommendations can be made due to the contradictory and, at the same time, sparse empirical evidence. In an attempt to consolidate previous literature, Fink [100], while being faced with the same inconclusiveness, suggests authenticity for robot design. It is vaguely described as the robot being itself. However, in conjunction with the cited literature by DiSalvo et al. [113], Fink's statement becomes clearer. The robot should retain robot-ness to prevent disappointing expectations towards it, project human-ness to invite the human to interact with it, and lastly, retain product-ness to ease the user's engagement with it. Despite the inconclusiveness of anthropomorphic design's implications for robots' acceptance by operators, the examined literature underscores anthropomorphism's relevance in the design of HRI. However, closer examination of other antecedents of acceptance, such as trust as by Hancock et al. [91], remains highly relevant. Considering the highlighted design aspects of HRI, there exist manifold vantage points from which to investigate HRI. To further elucidate relevant aspects of HRI which are not covered so far, a brief investigation of ways to evaluate HRI is promising.

2.5 Evaluation of Interaction

In their seminal work, Steinfeld et al. [114] list and categorize different evaluation metrics for HRI. Though not claiming completeness and comprehensiveness for all applications, the suggested framework provides a detailed classification and varied metrics which serve as a basis for a multitude of research publications. Figure 15 depicts the metrics introduced by Steinfeld et al. For a detailed description of all metrics and effects the reader is referred to the original publication.

Task Related Metrics
- Navigation: global navigation, local navigation, obstacle encounter
- Perception: passive perception, active perception
- Management: fan out, intervention response time, level of autonomy discrepancies
- Manipulation: degree of mental computation, contact errors
- Social: interaction characteristics, persuasiveness, trust, engagement, compliance

Biasing Effects
- Communication: delay, jitter, bandwidth
- Robot Response: system lag, update rate
- User: performance shaping factors

Performance Metrics
- System Performance: quantitative performance, subjective ratings, appropriate utilization of mixed-initiative
- Operator Performance: situation awareness, workload, accuracy of mental models of device operation
- Robot Performance: self-awareness, human awareness, autonomy

Figure 15 Metrics of HRI [114]

One approach to measure HRI is to gather task related data [114]. While the suggested task related metrics are aimed at evaluating interaction with mobile robots, the underlying ideas can be generalized to other robots as well; e.g., navigation applied to a stationary robot can refer to motion planning. Which task related metrics to use for the evaluation depends on the task of the robot, though a combination across tasks is possible. These task related metrics will subsequently be influenced by biasing effects. In order to evaluate the interaction between humans and robots, the performance of the system as a whole, as well as of the single entities, i.e., the human and the robot, can be measured. They represent the response to the task design, measurable through task related metrics, and to biasing effects. Marvel et al. [115] review relevant literature on evaluation metrics in manufacturing and then suggest a framework to systematically evaluate interfaces and interaction between humans and robots. Among others, the authors refer to the work by Steinfeld et al. Marvel et al., hence, allow for a comprehensive evaluation of collaboration specifically in industrial settings. Whereas Steinfeld et al. focus on separating task related from biasing and performance metrics, Marvel et al. posit that robot, operator, team, and process are the entities to be metricized with equal relevance. In order to get a comprehensive understanding of the entities, Marvel et al. suggest combining quantitative and qualitative metrics to measure HRI both objectively and subjectively, as seen in Figure 16. A particular aspect added by the authors is the economic evaluation of the interaction. Besides listing the various metrics, the authors also review the corresponding measurement techniques and design recommendations.
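The advice to select only the metrics suited to a use case can be pictured as filtering a metric catalogue by entity and by objective versus subjective measurement. The sketch below is purely illustrative; the catalogue entries and their grouping are assumptions loosely based on the category names above, not an encoding of the actual framework:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str
    entity: str         # "robot", "operator", "team", or "process"
    quantitative: bool  # True: objective measure; False: subjective

# A few hypothetical entries; a real catalogue would cover the full
# set of metrics reviewed by the framework's authors.
CATALOGUE = [
    Metric("reaction time", "operator", True),
    Metric("NASA-TLX workload", "operator", False),
    Metric("fault detection and recovery", "robot", True),
    Metric("team effectiveness", "team", False),
    Metric("return on investment", "process", True),
]

def select(entity, quantitative=None):
    """Pick the metrics relevant to one entity, optionally filtered by
    objective (quantitative) vs. subjective (qualitative) measurement."""
    return [m.name for m in CATALOGUE
            if m.entity == entity
            and (quantitative is None or m.quantitative == quantitative)]
```

Such a lookup makes the designer's selection explicit and reviewable, which matters because, as argued below, the chosen metrics shape the resulting HRI system.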

Figure 16 Evaluation Metrics for collaborative HRI [115]. The figure groups the metrics into robot metrics, operator metrics, team metrics, and process metrics, distinguishing quantitative metrics for objective measuring from qualitative metrics for subjective measuring.

Lastly, three case studies are presented in which a selection of metrics is employed [115]. Through those, the authors demonstrate the necessity of choosing the metrics best suited to the use case instead of blindly applying all metrics in every scenario. The independent variable in the three presented case studies is the degree of interaction, ranging from cell to collaboration. In conclusion, it can be argued that whereas Steinfeld et al. provide a better account of the processes leading to certain performance measurements, Marvel et al. provide better guidance for practitioners. Marvel et al. discuss the operationalization of the metrics as well as summarize implementation guidelines. Thus, depending on the scientific approach, either of the two frameworks may be employed. Both presented sets of evaluation metrics once more exhibit the interdisciplinarity of HRI. The impossibility of taking all metrics into account implies the relevance of the designers' perception of certain metrics' importance. The design team's preference for certain metrics will shape the resulting HRI system. However, the selection of metrics has consequences for the robot's and the human's work with each other. In conclusion, together with the previous elaborations, a user-centric design seems inevitable if a successful HRI system is to be designed.

2.6 Summary of Scoping Literature Review

The three questions set out to be answered according to the current body of knowledge are the following: RQ1: How can robots accommodate the differences of their human counterparts?, RQ2: How can humans and robots make their intentions and demands understood by their counterpart?, and RQ3: How do humans perceive their robotic co-workers? This summary now aims to concisely address these three questions. The scoping literature review shows that through dynamic autonomy the robot can adapt its autonomy level, instantiated either by the operator, by the robot, or by both. This adaptation in effect can respond to any number of changes in the robot's environment, be it the working environment or the operator itself. The adaptation can be supported by gathering information on one or some of the suggested metrics. The selection of the evaluation metric will depend on the focus of the interaction designers and which aspect is valued most. The literature on human factors repeatedly mentions the importance of the operator's situational awareness and trust as defining metrics for the interaction with robots. Thus, it is conceivable that those two could be promising metrics to consider when adapting the robot's autonomy to individual human counterparts. A great amount of research goes into answering the second question. Along the introduced typology of different means of communication, different technical interfaces are being developed to replicate many of these means of communication. However, it is crucial to understand how these means of communication are perceived by the humans using them in order to develop interaction systems through which "natural and effective" communication can be achieved. Current literature, nonetheless, points at multimodal communication as best satisfying this requirement. However, so far, no definite recommendation on which modes of interaction to combine can be given.
With respect to the first question, it is easily conceivable that in such multimodal interfaces a variation of interfaces is selected depending on the human counterpart. Thus, multimodal communication can be understood to address both questions one and two. If both the autonomy and the communication are designed to allow RQ1 and RQ2 to be answered satisfactorily, coordination, as the third component of interaction design, needs to be designed accordingly in order to allow for flexibility and robustness of the system: flexibility in that it allows for the adaptation of responsibilities, and robustness in that it ensures that all necessary tasks are accomplished by one of the agents. Lastly, on RQ3, anthropomorphism emerges as an interesting field for further research. It combines research into the bodily appearance of a robot as well as its behavior elicited by its way of motion, gesticulation, or, put most broadly, its way of communication. All play a crucial role in humans' perception of the robot. However, it seems difficult to directly relate it to acceptance or trust, which are widely operationalized concepts to measure humans' responses to technology. Acceptance is also mentioned by the interviewees as the crucial factor for the introduction of robots in the workplace. As is easily conceivable, not only the robot plays a role in ensuring its acceptance; the organization as a whole can also accomplish a lot to facilitate the acceptance of robots. With the further development of robots as technological platforms, the acceptance of more and more capable agents will remain a highly relevant field of research.

3 Scenario Analysis

3.1 Motivation

According to the definition proposed at the beginning of the dissertation, robots as a technology are unique in the way that they combine a bodily appearance, with which they can manipulate and move in their environment, with mental agency – a combination which no other technology features. However, it is this combination that poses a number of questions when it comes to humans interacting with robots. Three of them are preliminarily addressed in the previous scoping literature review. With the initial notion of HRI as an interdisciplinary challenge in mind, Dias' [116] book chapter makes a case in point. It is argued that the engineer developing a technology should be aware of the consequences of their actions as homo faber, man the tool maker, by taking into account their ethical understanding as homo sapiens. Engineers "should also see themselves as agents of humanization as well as transformation" (p. 149). On the transformational side, it is argued that the homo faber uses knowledge about the capabilities of the human as a template to develop technology [70]. The natural inclination, therefore, exists to "embody in technology whatever can be understood" (p. 214). On the humanization side, the homo sapiens is asked to gain an awareness of technology's implications [116]. There exists a considerable number of publications, for example, on the possibility of technological unemployment caused by, among other technologies, robots (e.g., [117], [118], or [119]). A further stream of research on the implications of robots looks at possible social developments unfolding with the spread of advanced robots (e.g., [120]). Attempts to predict the timeframe of the endowment of robots with certain capabilities have also been made (e.g., [121]). While these works offer insightful analyses and are certainly helpful, the author of this dissertation views them as describing trends we currently observe that are expected to shape the future of robotics. They constitute trends that can influence the engineers developing HRI as both homo faber and homo sapiens. Thus, this dissertation aims to shed light on the currently observable trends shaping the future of HRI from a mid- to long-term perspective of approximately 20 years by conducting a scenario analysis. In order to gain a more detailed understanding of how different future scenarios might impact HRI specifically, relevant design aspects of HRI described in the previous scoping literature review will be analyzed from the perspective of the respective future scenarios.
The investigated aspects of HRI are the human role in the interaction with robots, humans' trust in robots, and multimodal communication between humans and robots. Given the uncertainty of the trends, no single future trajectory can be identified at this point. This motivates this author to employ a scenario analysis in order to provide insights into various possible futures and manage the uncertainty adequately [122]. As a last note, the time frame is chosen so as to acknowledge the time it takes for significant socio-economic shifts to unfold. This is underscored by investigations with a similar scope in the fields of manufacturing [123] and robots in particular [124].

3.2 Methodology

The fundamental process used in this dissertation is depicted in Figure 17. The scenario field is identified by clarifying the motivation in the section above as well as by the delineation from Section 2.3.1 – the definition of robots. Thus, the scenario field is identified as HRI, without technological platform limitations, within industrial companies in processes on or closely related to shop floors and production facilities.

Figure 17 Scenario Planning Process (based on [125])

Before the remaining phases can be outlined and the applied methods specified, further particulars have to be defined. In their review, Amer et al. [126] outline three fundamental approaches to scenario planning – the intuitive logics methodology, the prospective methodology, and the probabilistic modified trends methodology. This dissertation follows the intuitive logics methodology, which is summarized in Table 7.

Scenario Characteristic – Intuitive Logics Methodology adapted for this Dissertation
- Purpose: One-time activity to make sense of situations and develop a transfer
- Scenario type/perspective: Descriptive
- Scope: Narrow: producing industry and HRI
- Time frame: 20 years
- Methodology type: Process oriented approach, subjective and qualitative
- Nature of scenario team: Author of dissertation & selected experts from academia and industry
- Role of external experts: Validating the driving forces analysis and guiding the scenario generation
- Starting point: General concern of future development of HRI
- Identifying key driving forces: Intuition, research, brainstorming
- Developing scenario set: Along themes
- Output of scenario exercise: Qualitative set of equally plausible scenarios in narrative form
- Use of probabilities: No, all scenarios are equally probable
- Number of scenarios: 2 – 4
- Evaluation criteria: Consistency, plausibility, creativity and relevancy

Table 7 Chosen Scenario Characteristics (based on [126])

Some methods used are already mentioned in Table 7. The step of driving force identification, phase two, is inspired by Burt et al. [127]. Initially, trends are identified from the existing body of literature, guided by this author's intuition and complemented by brainstorming based on the findings. In a second step, the identified trends are grouped into driving forces so as to get a more concise picture of the interrelations between individual trends.

Phase three, the driving force analysis, is conducted by further, closer scrutiny of the existing body of knowledge, which culminates in a detailed description of the identified trends. In a subsequent step, pairwise comparison is conducted between trends in each driving force and also between driving forces with regard to their likely impact on both the spread and the design of HRI. This is further supported by experts from academia with an understanding of the industry and digital technologies. Gaining insight into the likely impact of the identified trends and driving forces is deemed relevant for steering the scenario generation towards relevant scenarios. The phase of scenario generation consists of three distinct steps. The first is informed by the notion of adding narrative creativity to scenario planning with the help of relevant Science Fiction (SciFi) literature [128]. Technological scenarios are often judged to lack an adequate analysis of dependencies and of the actual motivation for a certain innovation. SciFi literature, in turn, can provide insight into the psychosocial driving forces of a technology. Hence, the first step of scenario generation is to exploratively investigate relevant themes in the SciFi literature. The detailed methodology is explicated in Section 3.4.1. The second step is to generate scenarios on the basis of the identified driving forces and the insights gained from the SciFi literature. As indicated above, this process is guided by central themes for the different scenarios. Thus, each scenario falls under one theme. The development and specification of themes emerges from previous steps. Lastly, scenarios have to be analyzed with an eye to consistency, plausibility, and creativity. First and foremost, consistency is considered the most important quality indicator of scenarios [126].
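The pairwise comparison step described above can be operationalized as a simple win-count ranking. The sketch below assumes each comparison yields exactly one winner; the trend names and the fixed priority order standing in for expert judgement are illustrative assumptions, not results of the dissertation:

```python
from itertools import combinations

def rank_by_pairwise(items, more_impactful):
    """Sort items by how many pairwise comparisons they win; the
    function more_impactful(a, b) returns the winner of one pair."""
    wins = {item: 0 for item in items}
    for a, b in combinations(items, 2):
        wins[more_impactful(a, b)] += 1
    return sorted(items, key=lambda item: wins[item], reverse=True)

# Hypothetical expert judgement encoded as a fixed priority order;
# in the dissertation this judgement comes from literature and experts.
priority = ["reshoring", "mass-customization", "FDI resistance"]
def winner(a, b):
    return a if priority.index(a) < priority.index(b) else b

ranking = rank_by_pairwise(["FDI resistance", "reshoring", "mass-customization"], winner)
```

More elaborate schemes (e.g., an analytic hierarchy process with ratio-scaled judgements) exist, but a win count already yields the ordinal impact ranking needed to steer scenario generation.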
Instead of creating final scenario narratives and having those evaluated by external experts, this dissertation first creates scenario themes based on the driving force analysis, which are then challenged in a round of semi-structured interviews with experts from academia and industry. Thus, instead of employing the common quantitative evaluation of consistency, a qualitative revision round is conducted. This allows this author to adapt both the themes and the scenario narratives before the scenarios are finalized, so as to ensure consistency, plausibility, and creativity. Therefore, this is seen as the more viable option in this dissertation as a means to balance efficiency and effectiveness. Lastly, phase five of the scenario analysis constitutes the transfer of the scenarios' implications to specific aspects of HRI. It is inspired by the blending of scenario planning and technology road-mapping, with several adaptations [129]. As for the aspects that are analyzed, the initial thought from the motivation for the scenario analysis serves as a guide and is merged with the current knowledge described in the scoping literature review. It emerges that a designer of HRI should be both homo faber and homo sapiens and thus not only consider fulfilling the technical task at hand but also regard the aspects of the role of the human, trust, and communication in the design. Together those aspects express the engineer's view on the relation between the human and the robot. In consequence, projecting future scenarios onto those three aspects provides an insightful understanding of the human's relation to the robot in the future. Consequently, Section 3.3 describes the trend and driving force identification as well as the driving force analysis; it thus combines both phase 2 and phase 3 of the above outlined scenario analysis process. This is followed by the scenario generation in Section 3.4. Herein, themes from SciFi literature are examined and the scenario themes are created. Furthermore, results from the conducted expert interviews are introduced and, finally, the scenario narratives are established. Lastly, in Section 3.5 the literature reviews on the relevant aspects of HRI, i.e., the role of the human, trust, and multimodal communication, are conducted in order to accomplish the scenario transfer as the fifth phase of the scenario analysis process.

3.3 Driving Forces Identification and Analysis using PEST

3.3.1 Methodology

In order to make the relevant uncertainties accessible to this author, the implementation of the PEST framework into the scenario planning is chosen. For its implementation, the process suggested by Burt et al. [127] serves as the blueprint. Instead of rigidly adhering to PEST as a framework to categorize uncertainties and driving forces, PEST is used to sort, order, and subsequently associate ideas with each other to provide a foundation for the scenario analysis. To inspire this author's identification of ideas and trends, exploratory searches through the Google Scholar directory and targeted searches in specific journals (e.g., Technological Forecasting & Social Change) via the Scopus database (https://www.scopus.com/home.uri) are undertaken. Keywords (KW) used to explore the existing body of knowledge regarding manufacturing and industry in general are, among others, future manufacturing, future robot(ics), PEST/STEEP/PESTLED manufacturing, and future of work. A particularly plentiful stream of research is the one on the future of work. A second component in the exploration is forward search, which is applied to the widely cited books by Rifkin ("The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era" and "The Third Industrial Revolution: How Lateral Power is Transforming Energy, the Economy, and the World"). Neither book could be examined in detail; therefore, synopses and commentaries are included instead ([130] and [131]). Forward search is also applied to influential publications on scenario planning ([125] and [126]). KWs used in all these forward searches are, e.g., technology, robot(ics), driver, and driving forces. These two components are revisited iteratively until this author perceives saturation and no new inspiration can be derived from newly examined works. Saturation is further supported by a fourth and last step: eventually, the body of knowledge is searched specifically for works examining global trends. Two particular papers are found in the realm of manufacturing ([123] and [132]). It is discovered that the trends perceived as relevant are already included. At times the classification or form varies, but as those differences are resolvable, saturation can eventually be assumed. Figure 18 sketches the just described process.

Figure 18 Trend Identification Process. The figure sketches four steps: (1) literature search (e.g., "future manufacturing", "future robot(ics)", "PEST/STEEP/PESTLED manufacturing", "future of work"); (2) forward search on the books by Rifkin ("The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era" and "The Third Industrial Revolution: How Lateral Power is Transforming Energy, the Economy, and the World"); (3) forward search on influential publications on scenario analysis (H. Kosow and R. Gaßner, Methods of future and scenario analysis: overview, assessment, and selection criteria; M. Amer et al., "A review of scenario planning"); (4) revision of trends against papers on global trends in manufacturing (R. Alizadeh and L. Soltanisehat, "Stay competitive in 2035: A scenario-based method to foresight in the design and manufacturing industry"; E. More et al., "UK Manufacturing Foresight: Future Drivers of Change").

3.3.2 Trend Identification and Analysis

3.3.2.1 Foreign Direct Investment Resistance

As the case of KUKA in Germany shows, foreign direct investment is anticipated to come under closer scrutiny by the national government in Germany [133], and discussions around this topic are ongoing in other countries as well [134, 135]. In the latter two articles, advanced technologies, robotics among them, are particularly mentioned as requiring protection from foreign investment. Thus, investment regulations might play a crucial role in the future development of the robotics industry.

3.3.2.2 Reshoring Production

Along with other protectionist political decisions, such as tariffs, which are irrelevant in this context, the global production allocation of companies can shift [136, 137]. Further evidence is provided by the finding that, for example, almost a quarter of German companies operating in China are considering removing some or all of their operations from there [138]. As the articles on protectionism-induced reshoring exhibit, a decision to reshore ultimately also includes economic considerations [136, 137]. Several news reports have covered companies' decisions to relocate production to their home country [139-141]. In all reports, robots are mentioned as the enabling technology to produce in higher-wage home countries. In terms of implications for HRI, this trend is relevant as a change in production location implies a change in the socio-economic environment in which HRI as a socio-technical system resides. The corresponding contrasting trend is the disintegration of production [142], the implication of which is the spread of production across the globe into various socio-economic structures, as the example of a Barbie doll shows.

3.3.2.3 Standards and Regulation

Standards and regulations have a considerable impact on the relations between a technology, associated business models, and the formation of a market for those business models [143]. Standards and regulations, and standards in particular, are influential in market sizing as well as in scoping and shaping business model opportunities. Factual evidence is provided by articles on the standards of the internet and wireless communication [138]. Influence on standards is described as able to determine the influence of companies and nations on the course of technologies. In the case of specialty robotics, the German industry is considered to be currently leading efforts in standards and regulation [143]. An important aspect to consider in the future standardization and regulation of robot technology is the difference in scope between standardization and regulation. Whereas standardization is mainly shaped by private entities or the industry in general, regulation in the form of law-making lies solely in the hands of public bodies [144]. According to the author, due to the increased speed of developments, particularly in the field of robotics, regulation lacks the technological insight to influence developments. On the contrary, it is posited that standardization lacks the obligation to consider ethical and societal issues of new technologies, which public bodies have to heed. As will become more evident later on, especially the consideration of ethical and societal issues is influential for the development of HRI. Consequently, the further development of this trend will play a crucial role in the development of the robotics field.

3.3.2.4 Mass-Customization Consumption

A trend repeatedly mentioned in publications on HRC is mass-customization (e.g., [20], [145] and [14]). Mass-customization refers to the demand for customized products at the cost of mass-produced standardized products [146]. As pointed out in the scoping literature review, robots and collaborative HRI in the industrial context are being developed to allow for seamless leveraging of robotic and human capabilities. In consequence, flexibility and efficiency are expected to increase in order to fulfil the requirements posed by mass-customization consumption.

3.3.2.5 Conscious Consumption

Herein, both socially and environmentally conscious consumption are regarded as conscious consumption. Socially conscious consumption refers to a consumer's heeding of the negative and beneficial social consequences of his or her purchase, usage, and disposal of a product [147]. A study reveals that a company's social responsibility, recycling, and environmental impact play a role in socially conscious consumption. Thus, environmentally conscious consumption can be regarded as a particular consumption focus within socially conscious consumption. A behavior related to environmentally conscious consumption is environmentally oriented anti-consumption [148], which describes consumption reduction, avoidance, or rejection. Robots as production devices could thus be impacted by a conscious consumption pattern in two ways. On one side, environmentally conscious consumption, and specifically consumption avoidance and reduction, could impact the viability of robots for production. On the other side, and subjectively judged as more relevant, socially conscious consumers might put companies implementing robots in their production under specific scrutiny to do so in ethically and socially acceptable ways.

3.3.2.6 Dematerializing Consumption

Two ways of dematerializing consumption can currently be observed. For one, the sharing economy is a facilitator: by sharing what is already owned by other peers or by pooling resources owned by a provider, consumption is dematerialized in the sense that fewer material goods are needed [149]. For another, consumption is dematerialized by replacing physical goods with digital ones, e.g., streaming music instead of buying CDs. In turn, lower demand for products might impact the viability of robots as production equipment.

3.3.2.7 Decentralized Production

Through additive manufacturing and product layouts shared online, production is "democratized" [130]. Like HRC, additive manufacturing is brought into relation with mass-customization consumption, as the technology offers significant flexibility and adaptiveness to changing customer demands [4, 150]. Thus, HRC and additive manufacturing can be seen as trade-off technologies for meeting customized demand at the lowest possible cost, depending on the centralization of production.

3.3.2.8 Organization of Labor

Fleming [118] argues that labor unions are one way for workers in semi-automated occupations to ensure they do not face downward wage pressure. Méda [151] reports labor unions to be a likely reason why an employing organization allows its employees influence in decisions related to their jobs. Considering the possibility of technological unemployment through the introduction of robots [118], the strength of labor unions is an important factor shaping the future of work [118, 152].

3.3.2.9 Rising Real Wages and Salaries

A comprehensive account of the relation between wages and robotics is provided by Arntz et al. [117]. Three effects are explicated, namely the displacement effect, the productivity effect, and the reinstatement effect. According to the authors, robots will undisputedly replace humans in certain places by taking over their activity, which is referred to as the displacement effect. Their reasoning unfolds as follows. As this leads to gains in productivity, the economy expands, leading to a higher demand for labor. Moreover, gains in productivity allow the workforce to negotiate wage increases. Higher wages in return make the introduction of robots more feasible, thus closing the feedback loop. As already mentioned in the scoping literature review, the introduction of robots, rather than merely replacing human work, also changes human work. As pointed out by Fleming [118], this is particularly true for occupations which are not fully but only partly automated. Beyond changing tasks, new tasks also emerge for which labor will be in demand [117]. The reinstatement effect addresses this mechanism. The authors summarize that the displacement effect causes decreasing real wages, whereas the productivity effect and the reinstatement effect both increase real wages. Which effect will dominate cannot be predicted and is further influenced by workers' mobility [117]. This factor refers to the flexibility of workers' wages as well as their employment. In conclusion, it becomes clear that the introduction of robots is closely intertwined with the development of real wages. In a heuristic attempt, Fleming [118] models three occupation patterns. In jobs where technological know-how and managerial skills come together, displacement is unlikely. In occupations where part of the tasks can be achieved by robots, structural changes will occur; this is where replacement as well as change of human work will take place.
Lastly, there will be jobs for which employing robots is not economical. To include the trend currently observed: across the Organisation for Economic Co-operation and Development (OECD), an international organization of mostly developed countries, real wages have consistently increased after the global economic crisis of 2008 and 2009, with only a few exceptions such as Italy, Mexico, or Greece [153].
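The interplay of the three effects described above can be sketched in a toy calculation. All magnitudes below are made up purely for illustration; which effect dominates in reality is an open empirical question [117].

```python
def net_real_wage_change(displacement: float, productivity: float,
                         reinstatement: float) -> float:
    """Toy model of the three effects on real wages:
    displacement pushes wages down, while the productivity and
    reinstatement effects push them up. Magnitudes are illustrative only."""
    return -displacement + productivity + reinstatement

# Whether real wages rise or fall depends entirely on the relative magnitudes:
print(net_real_wage_change(displacement=2.0, productivity=1.5, reinstatement=1.0))  # positive: wages rise
print(net_real_wage_change(displacement=3.0, productivity=1.5, reinstatement=1.0))  # negative: wages fall
```

The sketch merely formalizes the qualitative statement that the net outcome cannot be predicted without knowing the strengths of the individual effects.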

3.3.2.10 Work-Life Balance

While employing organizations initially were slow to adapt to employees' desire for work-life balance [152], employers in the USA are reported to increasingly allow flexibility [154]. However, jobs in the manufacturing sector are observed to feature flexibility options significantly less often. Regardless of the specific current situation, the demand for work-life balance is still judged to be important for employers to consider in the face of I40 [155] as well as for the future competitiveness of manufacturing enterprises [123].

3.3.2.11 (Re-)Skilling Offers

The adaptation of education and training is consistently named as a vital strategy to allow for a smooth transition through the structural changes induced by robots and automation in general (e.g., [117], [123], [6], [26] and [124]). Fries [6] points to the responsibility of politics as well as industry to reform the public education system and provide vocational training. In Germany, for example, the federal agency for work and labor offers vocational training and reskilling programs [156]. Eventually, extensions in training and adaptations in education can empower the workforce in terms of mobility and qualification [117].

3.3.2.12 Demographic Change

Demographic change is a widely discussed trend in developed countries with long-lasting effects (e.g., [155], [6], and [157]). It describes the change in a nation's ratio of younger to older people [155, 157]. Possibilities to cushion demographic change are to increase women's participation in the workforce and immigration [157], or to raise the retirement age [6, 157]. As this trend directly relates to the composition of the workforce, it might also influence the development of future HRI systems.

3.3.2.13 Intensifying Working Conditions

A further trend recently observed is an intensification of working conditions. While average yearly working hours across the OECD are observed to have decreased [158], workers also report working with higher intensity [151]. This trend can be related to the previously mentioned notion of the human being enframed by technology [116]. Thus, this trend directly alludes to the relation between humans and technology.

3.3.2.14 Falling Cost of Technology

Just like previous technologies, robotics is falling in cost as the technology advances [119, 159]. Thus, more applications will become economical for robots to take on. Therefore, just like developments in the cost of human labor, the development of the cost of robots inevitably has implications for the interaction between humans and robots.

3.3.2.15 Maturing Technology

Technological advances widen the field of possible applications [119]. In line with previous predictions, an expert panel projects crucial technological developments in the 2020s in the fields of artificial intelligence, language, mobility, and vision, powering more capable robots [121]. As the goal of this scenario planning is not to predict the implications of individual technological advances on HRI, no detailed judgment on the cited projections is necessary. The focus lies rather on having those projections provide technological guidelines for the later development of scenarios. Technological advances in themselves are not of interest when trying to understand the future of HRI as a socio-technical system, as new technology will only reproduce previous socio-economic structures [121]. Rather, the design of the processes of adaptation, recreation, and modification of the technology in question determines the future [124]. This is further underscored by the conclusion of Fleming [118] that the implementation of technologies is shaped by socio-economic and organizational forces.

3.3.2.16 Centralization of (Economic) Power

What López Peláez [120] calls the "robotics divide" refers to the trend herein referred to as centralization of (economic) power. As previously mentioned, López Peláez repeatedly points to new technologies reproducing current socio-economic structures, namely capitalist societies [120, 121]. Table 8 shows the divide emerging from robotics and suggests the trend of centralization of economic power as well as political and individual power. While not all listed consequences hold relevance for the scenario planning conducted herein, the list still delivers valuable insight into potential results of the proliferation of robots with increasing capabilities. The table also exhibits that not only economic power might become more centralized but also power in the private as well as the public realm. Further empirical evidence is provided by Autor et al. [160], describing the mechanism of rising market concentration in so-called "superstar" companies. This concentration is found to be bolstered by technological change in an industry. In consequence of the concentration in the most productive companies, the labor share of overall economic output is diminishing. Employment in those "superstar" companies is found not to decline; however, because of the concentration of economic activity in more productive companies, more economic output does not require more labor.

Table 8 Entities and the Consequences of Access to Advanced Robotics [120]

3.3.2.17 Privatization of Research

On one side, this trend includes developments in Intellectual Property (IP) and patents. Nelson [161] elaborates on the danger of the privatization of research, namely the closure of knowledge in patents. Technological progress is described as resting on market forces driving innovation that emerges from a publicly accessible science base. Technological progress is in consequence hindered if, for one, patents are held on research which can be regarded as a tool for further research. For another, progress is stymied if the patented research might lead to the development of beneficial products or techniques. Nelson acknowledges that empirical evidence for significant increases in patenting activity is scarce and thus wants to ignite a discussion so as to prevent the described dangers from materializing. Subsequent studies draw an inconclusive picture of the progress of patenting. Van Looy et al. [162] discover in their study at K.U. Leuven that scientists involved in innovation and patenting are also more heavily involved in scientific publishing compared to their peers not involved in innovation and patenting. The study, however, lacks an analysis of the quality of the respective scientific publications. In contrast, while limited in scope, Thompson et al. [163] find significant evidence for lower scientific proliferation of research tools if they are patented compared to unpatented research tools. Focusing on the field of computer science, Arora et al. [164] find that large companies decreased their publication activity while maintaining their patenting activity in the time frame from 1980 to 2006. When looking at how companies now tend to create

knowledge, it can be observed that companies increasingly seek collaborations with universities for research projects so as to decrease research expenditure and gain knowledge simultaneously [165]. Gretsch et al. find that, for collaborations to be successful, a company should not aggressively retain its existing IP and try to claim new IP from the project by tightly binding the university to contracts. However, no empirical evidence is shared as to how aggressively companies retain and claim IP. Henkel et al. [166] provide empirical evidence from the computer component industry that considerable impetus is needed for companies to accept the positive feedback loop of openness related to their IP. The authors also find, though, that IP is only selectively shared. It is posited that selective openness is pursued by companies in order to ensure the collaboration is headed in a commercially relevant direction [165]. Investigating how companies' inclination to patent depends on their innovation leadership and their openness, Arora et al. [167] find that leading companies collaborating openly patent findings more than closed leading companies and following companies, so as to avoid spill-over of knowledge to outside organizations. Closed leaders, however, show similar patenting numbers as open as well as closed followers. On the other hand, the developments just elaborated are complemented by a slowly but steadily increasing share of research expenditure by business enterprises compared to the research expenditure of the higher education and governmental sectors across the OECD [168]. In 2018, the latest year for which data is available, the business sector spent on average 2.6 times as much as the higher education and governmental sectors combined across the OECD. This ratio has been steadily increasing over the observable time frame of 2005 to 2018. Further empirical evidence for the United States is provided in [169].
In sum, research is increasingly shaped by companies, a development dubbed herein the privatization of research.

3.3.2.18 Summary

Table 9 presents the identified trends which are deemed relevant for the future of the producing industry and the introduction of robotics into it. As a last note before the table is presented: the individual trends are grouped in a second step. In the description of these groups, the corresponding references are given from which the inspiration, where applicable, for the individual trends is derived.

Trend | Associated PEST Classification | Associated Literature
(Re-)Skilling Offers | Political, Societal | [6], [26], [117], [123], [124]
Foreign direct investment resistance | Political, Economic | [133-135]
Centralization of (economic) power | Political, Economic | [120], [121], [160]
Falling cost of technology | Economic, Technological | [119], [159]
Rising real salaries/wages | Economic, Societal | [117], [118], [153]
Reshoring production | Economic, Technological, Political | [136-142]
Demographic change | Societal | [6], [155], [157]
Dematerializing consumption | Societal | [149]
Mass-customization consumption | Societal | [14], [20], [145], [146]
Conscious consumption | Societal | [147], [148]
Organization of labor | Societal | [118], [151], [152]
Growing demand for work life balance | Societal | [123], [152], [154], [155]
Intensifying working conditions | Societal | [116], [151], [158]
Maturing technology | Technological | [119], [121]
Decentralizing production | Technological | [4], [130], [150]
Technological standards and regulation | Technological, Political | [138], [143], [144]
Privatization of research | Technological, Economic | [161-169]

Table 9 Brainstormed Trends for Producing Industry

3.3.3 Driving Forces Identification and Analysis

3.3.3.1 Outline

The 17 identified trends can be analytically categorized into four major driving forces – technological protectionism, consumption pattern, workforce emancipation, and economics of technological replacement – which are grouped in Table 10 and explained in turn below.

Driving Force | Associated Trends
Technological Protectionism | Foreign direct investment resistance; Reshoring production; Technological standards and regulation
Consumption Pattern | Mass-customization consumption; Dematerializing consumption; Conscious consumption; Decentralizing production
Workforce Emancipation | Organization of labor; Rising real salaries/wages; Growing demand for work life balance; (Re-)Skilling offers
Economics of Technological Replacement | Rising real salaries/wages; Demographic change; Intensifying working conditions; Falling cost of technology; Maturing technology; Centralization of (economic) power; Reshoring production; Privatization of research

Table 10 Driving Forces and Associated Trends

3.3.3.2 Technological Protectionism

Three trends currently observable in the public media can be summarized as technological protectionism. These trends are not directly entangled or interdependent; however, together they can be seen as forming the driving force of technological protectionism. Overall, this driving force represents who retains or wields influence on the trajectory of a technology, in this case robotics. Skepticism towards foreign direct investment can be seen as a tool for countries to retain the technological influence of their private entities. Reshoring production influences which socio-economic environments robotics are adapted to. Lastly, standards and regulations most directly reflect influences on robotics. While the current situation in terms of standards for robotics is described above, a clear trend cannot yet be discerned.

3.3.3.3 Consumption Pattern

According to common economic theory, supply and demand are key market forces [170]. Consequently, robots as a production technology, and thus located on the supply side, are discussed not only technologically but also in relation to how they allow companies to meet a certain demand pattern. Thus, two trends evidently count towards this driving force, namely mass-customization and conscious consumption. They can be seen to stand in contrast to each other, thus potentially influencing robotics in opposite directions. While mass-customization with its focus on quantity influences companies to increase resource efficiency, conscious consumption with its focus on quality in terms of the way of production influences companies' resource usage, including efficiency, but also ecology and society. The power of influence of these two trends is weakened by the dematerializing of consumption. Robotics as a technology for material consumption relies on demand for material goods. Thus, as indicated above, dematerializing consumption can erode the viability of robotics. Whereas dematerializing consumption erodes viability through a lack of demand for material goods, the acceptance and spread of additive manufacturing can erode the viability of robotics by offering a "democratic" option to produce material goods independently from manufacturing companies. However, also within companies, additive manufacturing can be seen to compete with robotics for production allocation, depending on consumption behavior.

3.3.3.4 Workforce Emancipation

When it comes to the social consequences of the introduction of robotic technologies, the development of trends associated with workforce emancipation can play a crucial role. Thus, trends associated with the influence of the affected labor are considered in this driving force. Most evidently, the organization of labor provides labor with a platform to advance its cause, both in terms of the spread and in terms of the design of robotic systems in companies. Closely related to labor unions is the development of salaries and wages. As hinted at in the listing of trends above, the development of salaries and wages has to be seen in the context of two different driving forces. In terms of workforce emancipation, the development of real salaries and wages is to be seen as the reaction to changes in productivity [117]. According to the authors, with increasing productivity, demand for labor increases, which also increases real wages. Robots are shown to have increased productivity in the past. Thus, the productivity effect is described as an enabler that empowers the workforce to demand higher salaries and wages. However, there is also a counter-balancing effect pushing wages down, which emphasizes the importance of, e.g., labor unions in the development of real wages and salaries in the face of robotization [117, 118]. The desire for work-life balance represents another trend within workforce emancipation. The disparity between workers' desires and the adaptation by companies exhibits the potential for the organization of labor as a facilitator of adaptation. Lastly, the importance of (re-)skilling offers is indicated in the description of the trend above. While the organization of labor can again serve as a platform to bring forward workers' demand for such offers, this trend was taken up within this driving force because extensions in training and adaptations in education can empower the workforce in terms of mobility and qualification [117]. Thus, in parallel to the organization of labor, advances in workers' skills are a form of empowerment as well.

3.3.3.5 Economics of Technological Replacement

This last driving force gathers the most trends. Its aim is to summarize trends leading to more or less introduction of robots and the mechanisms behind those trends. Fundamentally, robots will be introduced if it is cheaper for a company to use robots rather than humans [117]. Hence, the first trend in that regard, which is also associated with another driving force, is the development of real salaries and wages. With increasing wages, replacement with robots becomes more likely. Similarly, as the costs of robots fall, replacement also becomes more likely. In addition to these fundamental economics, further trends frame these two developments, influencing the significance of the economics of technological replacement. Firstly, with increasing maturity of robots, more tasks will be eligible for replacement. Moreover, demographic change motivates companies to consider replacement as vacant positions left by retirees cannot be filled with new people. As mentioned above, remedies to demographic change can involve policy decisions. However, once more using the economic model of supply, demand, and price building [170], those policy decisions made with regard to demographic change can be expected to eventually also impact the development of real wages and salaries by manipulating labor supply, hence affecting the economics of replacement indirectly. Herein, no closer scrutiny of those policy decisions will be undertaken. However, for the development of scenarios later on, awareness of the phenomenon and its impacts is deemed relevant. Further, and in contrast to the emancipation of labor, advances in robotics can lead to a concentration of (economic) power as described above, in consequence causing a limited number of players to influence the trajectory of robotics. Shaping a positive feedback loop, the concentration of economic power is interdependent with the privatization of research.
Companies with a technological edge over competitors are more readily advancing their productivity by conducting patent-protected research. As the description of the trend above shows, the disclosure and spending behavior of companies hints towards research being driven by private companies, manifesting their productivity edge. This has implications for the economics of replacement, as technology is advanced in the direction of highest interest to the technologically leading companies. It can also imply that technology is developed according to those companies' value systems and socio-economic ideas. Lastly, there are further trends affected and created by the economics of replacement which are thus also grouped into this

driving force. While the intensification of work does not in itself affect the spread or design of HRI, the understanding of it according to Dias [116], namely the human being enframed by its technology, provides a psychosocial understanding of the economics of a technology, in this case robots.

3.3.4 Trend and Driving Forces Evaluation

In order to gain a better understanding of the trends' and driving forces' relevance for scenario generation, an evaluation is conducted. Similar to cross-impact analysis, with which strong scenario drivers can be identified [126], a pairwise comparison of trends within each driving force and between driving forces is conducted herein. The author of this dissertation conducted the comparison twice at two distinct points in time, and two external researchers from the field of cyber-human technologies were recruited to conduct the comparison independently. Raw results are attached in Appendix B and Appendix C. Subsequently, the two evaluation runs of this author are averaged and, in a second step, averaged again with equal weights with the evaluations of the independent researchers. Table 11 lists the trends and driving forces which receive the highest and second highest ranking, with a margin of +/- 0.25 points for inclusion. For the driving force economics of replacement, the three highest ranked trends with a margin of +/- 0.25 points are included to acknowledge the much higher number of trends agglomerated in this driving force.
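The aggregation scheme described above can be sketched as follows. This is a minimal illustration with hypothetical scores (the actual evaluation sheets are in Appendix B and Appendix C): the author's two runs are averaged first, and that average is then combined with each external researcher's score using equal weights.

```python
from statistics import mean

def aggregate_score(author_runs, external_evals):
    """Average the author's two evaluation runs first, then average
    that result with the external researchers' evaluations using
    equal weights. Scores passed in here are hypothetical."""
    author_avg = mean(author_runs)
    return mean([author_avg] + list(external_evals))

# Hypothetical scores for a single trend
score = aggregate_score(author_runs=(17.5, 18.1), external_evals=(18.0, 17.6))
print(round(score, 1))
```

Note the design choice implied by the text: averaging the author's runs before the final average prevents the author's two evaluations from outweighing each external researcher's single evaluation.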

Spread of HRI (result):
- Technological Protectionism: Technological standards and regulation 5.3; Reshoring production 4.3
- Consumption Pattern: Mass-customization consumption 8.8; Dematerializing consumption 6.8
- Workforce Emancipation: (Re-)Skilling offers 7.3; Organization of labor 7.2
- Economics of Replacement: Falling cost of technology 17.8; Maturing technology 17.2; Demographic change 16.2
- Driving Forces: Economics of Replacement 7.8; Workforce Emancipation 7.2; Consumption Pattern 6.0

Design of HRI (result):
- Technological Protectionism: Technological standards and regulation 6.0; Reshoring production 3.7
- Consumption Pattern: Mass-customization consumption 7.2; Conscious consumption 6.3
- Workforce Emancipation: Organization of labor 7.7; (Re-)Skilling offers 6.7; Rising real salaries/wages 5.8
- Economics of Replacement: Maturing technology 17.8; Privatization of research 15.3; Demographic change 14.3; Falling cost of technology 14.3
- Driving Forces: Economics of Replacement 7.3; Consumption Pattern 6.5; Workforce Emancipation 5.8

Table 11 Trends and Driving Forces Evaluation Result

There are several take-aways for the scenario generation. In terms of technological protectionism, the importance of technological standards and regulation is underscored by the evaluation, both for the spread as well as for the design of HRI. Regarding consumption patterns, the spread and design of HRI are judged to be most affected by mass-customization consumption. However, the second trend varies: dematerializing consumption for the spread of HRI, conscious consumption for the design of HRI. As for workforce emancipation, both organization of labor and (re-)skilling offers are judged as the most influential on both spread and design of HRI. As to the last driving force, economics of replacement, falling cost of technology is evaluated to be most impactful on the spread of HRI, whereas for the design of HRI maturing technology is most influential. When comparing the driving forces among each other, technological protectionism is deemed least relevant, whereas economics of replacement is ranked highest for both the spread and design of HRI. Overall, this underscores the initial qualitative judgments made in the elaborations on trends and provides a foundation on which to base the scenario generation.

3.4 Scenario Generation

3.4.1 Insights from the Science Fiction Literature

3.4.1.1 Methodology

As described in the general methodology section, SciFi can provide scenario planning with insights into psychosocial processes that might otherwise be missing [128]. Two further elements motivate this dissertation to consider SciFi works. For one, as SciFi literature is not formally constrained in the way scientific publications are, SciFi can lend symbolic creativity to the generation of scenarios. For another, it is argued that thinking about the future, as done in scenario analysis, requires recognizing the future's dependence on public decisions. Rather than trying to predict specific decisions, the aim is to acknowledge the influence public opinion can have on policy decisions, which in turn shape the future [128]. In consequence, for one, SciFi literature and its processing in the media are said to have more influence on public opinion than other elements of scenario analysis. For another, SciFi literature can be regarded as providing room for innovations to happen [171]. By having engaged with the future for several decades and bringing visions of the future to a broad audience, according to the authors, SciFi works shape creativity and feelings toward change. Both aspects are influential for innovation activity. Thus, both the public and the scientific sphere are influenced by SciFi, making it a potentially insightful component for this dissertation. In order to account for the influence of SciFi literature on public opinion, it is suggested to include the most impactful SciFi literature in the analysis. Impact, as is further explicated in the following paragraph, is evaluated based on parameters relating to the above-mentioned

relevant aspects of SciFi works for scenario analysis, i.e., psychosocial aspects, public presence, creativity, and technology representation. Based on a preliminary search for influential SciFi works, the focus of the analysis is set on literature. Thus, books and comics, but also movies based on identified literary works, are considered. That way, efficiency in the analysis can be ensured. Since it will not be possible to read several books and watch several movies within the time constraints of this dissertation, two steps to access the wealth of SciFi literature are suggested. First, impactful works have to be identified. Second, the identified works have to be examined for their themes and psychosocial considerations in order to provide relevant insights for the purpose herein. For the first step, a database has to be identified in which impactful works can be found. Goodreads3 is used: due to its tagging feature, its large community of 90 million registered members [172], and around the same number of page visits each month [173], it is judged by the author of this dissertation as an adequate tool for the desired task. Despite being mostly accessed in the United States, it is a site of global reach [173], adding further to its relevance for the task herein. Thus, Goodreads, and specifically the Goodreads list of works tagged "Robots" [174], is used as the starting point for further analysis. The list contains 3508 works tagged "Robots". In the form presented online, the books are ordered by the number of times they are tagged "Robots". As that is not deemed an appropriate indicator of their spread, the number of ratings is used instead as the indicator of a book's public spread. Thus, in the selection process the list is manually searched for the next most rated book.
In order to further ensure relevance to the goal of this dissertation, certain inclusion and exclusion criteria are set for the books in the Goodreads list. First of all, not only recent works are included: a snapshot of currently relevant works would not allow a change of the issues addressed to be perceived. Tracking the evolution of themes is deemed more appropriate, as it allows this author, who has no background in psychosocial studies, to better judge which themes will be relevant for the scenario planning herein. Moreover, as the list on robots also includes non-fiction works, the second inclusion criterion is that the work is tagged both “Robots” and “Science Fiction”. To prevent a distortion towards overly unreasonable themes, the time frame in which works are set is limited to roughly 50 years into the future, enforced not sharply but with a flexible boundary of +/- 10 years. This time frame is deemed to allow the authors of SciFi works enough freedom to realize the essential virtue of SciFi mentioned in [128] for the present context: the story does not depend on current memory and thus disentangles itself from past occurrences. A fourth criterion is that the story involves robots in the industrial context or that the

stories hold relevance for the industrial context. Hence, e.g., stories on home service robots are not included. In contrast, the work I, Robot by Isaac Asimov, with its three laws of robotics [175], is deemed relevant, as the proclaimed laws can easily be applied in an industrial context. If the synopsis on Goodreads does not provide sufficient information to judge these inclusion criteria, the work is deemed irrelevant, as unmentioned themes cannot be central to the work. These criteria are applied to all works with more than 5000 ratings that are tagged “Robots” ten or more times. In the face of limited time resources, and as the SciFi literature provides a fundamental understanding of psychosocial processes as well as additional creativity in scenario generation, these criteria are judged sufficient and represent a manageable trade-off between efficiency and effectiveness. In the second step, a trade-off between time resources and comprehensiveness has to be made. It is infeasible within the time resources of this dissertation to read all identified works. However, reading only synopses is not considered comprehensive enough to capture the psychosocial facets and themes relevant for industrial robotics. Therefore, the titles of identified works are entered in scientific directories such as Google Scholar. By using scientific publications about the identified SciFi works, this author minimizes the time effort for accessing the contents while ensuring comprehensiveness by relying on scientific analyses of the works. Using this approach, both relevance and efficiency of the analysis can be ensured. Given promising preliminary search results, the analysis of identified works is complemented by collecting and synthesizing publications which analyze various SciFi works. These publications are of special interest where they cover one or more of the previously identified relevant SciFi works.

³ https://www.goodreads.com
An example of such a work is [176].
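The first-stage screening described above can also be summarized programmatically. The following Python sketch is an illustration only: the field names and sample records are assumptions, not the actual Goodreads export. It applies the quantitative criteria — more than 5000 ratings, tagged “Robots” at least ten times, and tagged “Science Fiction” to exclude non-fiction:

```python
# Illustrative sketch of the first screening stage, assuming a hypothetical
# export of the Goodreads "Robots" list; field names are made up for the example.

works = [
    {"title": "I, Robot", "ratings": 269251,
     "tag_counts": {"Robots": 254, "Science Fiction": 310}},
    {"title": "A Non-Fiction Robotics Primer", "ratings": 12000,
     "tag_counts": {"Robots": 40}},                        # fails: not tagged SciFi
    {"title": "Obscure Robot Tale", "ratings": 800,
     "tag_counts": {"Robots": 15, "Science Fiction": 9}},  # fails: too few ratings
]

def passes_screening(work, min_ratings=5000, min_robot_tags=10):
    """Popularity and tagging criteria of the selection process."""
    tags = work["tag_counts"]
    return (work["ratings"] > min_ratings
            and tags.get("Robots", 0) >= min_robot_tags
            and "Science Fiction" in tags)

# Works passing this stage would then be screened manually against the
# time-frame and industrial-relevance criteria.
candidates = [w["title"] for w in works if passes_screening(w)]
print(candidates)  # → ['I, Robot']
```

The qualitative criteria (time frame, industrial relevance) remain a manual judgment, which is why the sketch only covers the quantitative filter.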

3.4.1.2 Identification Results

By limiting the list of 3508 works to those with more than 5000 ratings that are tagged “Robots” ten or more times, 84 works are identified. Of these, six works relevant for further investigation are found; they are listed in Table 12. A detailed overview of the works, retrieved July 24, 2020, can be found in Appendix D.

Do Androids Dream of Electric Sheep? (Blade Runner) by Philip K. Dick — 336,565 ratings; tagged “Robots” 117 times; published 1968; time frame: 53 years; SciFi: yes; industrially relevant: yes. Reason: helps to inform us on how humans treat robots that were made to help them. Summary: a person is hired to find androids gone rogue; a relationship between an android and the hunter develops.

I, Robot by Isaac Asimov — 269,251 ratings; tagged “Robots” 254 times; published 1950; time frame: near future; SciFi: yes; industrially relevant: yes. Reason: universal character of the three laws of robotics, which can be applied to the industrial context as well. Summary: short stories illustrating the shortcomings and potential conflicts arising from the three laws.

Robopocalypse by Daniel H. Wilson — 34,970 ratings; tagged “Robots” 133 times; published 2011; time frame: near future; SciFi: yes; industrially relevant: yes. Reason: implications for the technology that surrounds us; the analogy could fit well in the industrial context. Summary: robots unite for a war against humans; some individual humans recognize glitches leading up to the war.

Machines Like Me by Ian McEwan — 16,477 ratings; tagged “Robots” 12 times; published 2019; time frame: 1980 (alternate history); SciFi: yes; industrially relevant: yes. Reason: approaches philosophical questions arising from newly advancing technologies from a personal perspective. Summary: a man in love buys one of the first robots and shapes its personality; hints of romance are included as well.

Genesis by Bernard Beckett — 11,122 ratings; tagged “Robots” 14 times; published 2006; time frame: not stated; SciFi: yes; industrially relevant: yes. Reason: poses philosophical questions arising from newly advancing technologies. Summary: a person has to take a history test for an academy; through the test she discovers that old questions about technology and philosophy still hold relevance.

R.U.R. by Karel Capek — 7,949 ratings; tagged “Robots” 26 times; published 1920; time frame: not stated; SciFi: yes; industrially relevant: yes. Reason: robots built to replace humans as labor. Summary: robots are solely used for dull labor; at some point they emancipate and revolt, but struggle to discover how to replicate themselves.

Table 12 SciFi Works identified as relevant

As indicated in the methodology section, the time frame criterion is not enforced rigidly. While going through the list on Goodreads, it is found that most works do not mention precisely the year they are set in. Thus, hints of a near-future setting are deemed equally sufficient in fulfilling the core idea: investigating works which build a world sufficiently detached from the current one without placing it in a far-distant setting. For example, stories essentially revolving around a world of seamless interplanetary travel or conflicts between species of different planets are excluded even if the time frame indicates a near-future world. Other common themes not included herein are stories seemingly solely focused on war or conflict between robots and humans, romantic stories with domestic robots, or stories centered on robots in a world after human civilization. With this in mind, the selection of works is further explained. “Do Androids Dream of Electric Sheep?” and the movie based on it, “Blade Runner”, are deemed relevant given the insights they provide on humans’ relationship with a man-made species [177]. Despite the scant evidence of relevance in the short summary on Goodreads, the work is used for further investigation, not least due to its popularity as expressed in its number of ratings. “I, Robot” is included based on the universality of the three laws of robotics introduced in the work. As a collection of short stories [178], the book is understood as providing manifold insight into the consequences and possible conflicts arising from the three laws. In “Robopocalypse”, ubiquitous technology turns against humans [179].
As we depend more and more on technology, in private but also in work environments, the story is deemed relevant in providing insights into the psychosocial processes leading up to the conflict as well as the idea of living in a world run by technology. Unlike other works featuring domestic robots, “Machines Like Me” does not, judging by the summary, primarily revolve around a love story but provides philosophical considerations on the education of a malleable technology [180]. Therefore, relevance to the industrial context is given, as the work can be used to analyze values and their implications when applied to technology. “Genesis” features a girl having to take a history test [181]. While she thinks she knows history, during the exam the main character discovers the prevailing relevance of past philosophical questions concerning science. As the summary explicitly states, those questions can be applied in the present context to Artificial Intelligence (AI). While robots are not explicitly mentioned, the assumed universality of the philosophical questions warrants closer analysis of “Genesis”. Lastly, “R.U.R.” fits the context of this dissertation as robots are introduced in the story as better workers than humans. As the story develops, the humans have to observe the implications of developing robotic technology to become human [182].

3.4.1.3 Relevant Themes in Identified Works

Asimov’s three laws are, one, “A robot may not injure a human being or, through inaction, allow a human being to come to harm”, two, “A robot must obey orders given to it by human beings, except where such orders would conflict with the first law”, and three, “A robot must protect its own existence as long as such protection does not conflict with the first or second law” [183, pp. 15-16]. Thus, they ensure humans’ control over the robots [184]. However, as the robots become more capable, humans struggle to perceive them as machines that still require the same kind of control as less capable machines. In Asimov’s robotic world, the robots have no motives of their own; they operate as told, in an absolutely rational manner. Two issues are raised by the development of servile slaves for humans. For one, stagnation of human dynamism and abatement of creativity ensue from Asimov’s positivistic depiction of technological development [128]. For another, due to the robots’ capabilities humans will have more leisure; therefore, humans need to learn what to sensibly do with the newly gained leisure [176]. Even though Asimov, in his short stories, elaborates on situations where the three laws stand in conflict, the robots cannot resolve those situations, as there is no mechanism for them to overrule or disregard a certain law in a given situation. R.U.R. by Čapek plays with robots’ development beyond servile servants [182]. Similarly to Asimov’s stories, humans become dependent on the robots for their work [185]. However, Čapek develops his story differently from that point on. Robots start to realize the humans’ dependence on them and claim superiority over humans. Thus, Čapek illustrates a pitfall of humans believing robots will liberate them. Robots are thought by their creator to liberate humans by producing all the goods humans need [176]. “People will do only what they enjoy” (p. 74) and thus, “the subjugation of man by man [...] will cease” (p. 75).
But in Čapek’s story, building robots that are able to replace humans at work results in the robots realizing their masters’ dependence, leading to insurrection [185]. In the process of realizing their superiority, the robots are made even more human by being given a soul. With it, they are supposed to feel for humans [176]. But in contrast, it is claimed that “No one can hate more than man hates man” (p. 75). Thus, making robots more and more human, without control boundaries, in order to liberate humans has unintended consequences. “Machines Like Me” comes to a very similar conclusion. Despite taking place in the private space of humans, the robot introduced in the story takes human decision-making to be entirely rational, whereas human decisions are at times inconsistent with rationale or logic [186]. In the story, these misunderstandings end in several negative consequences for the human. Thus, “Machines Like Me” epitomizes the desired liberation of humans from irrational behavior. However, as it turns out, this liberation is not always desired by humans, leading to disruptions. A theme central to “Robopocalypse”, “Do Androids Dream of Electric Sheep?” and “Genesis” is the reflection on what it means to be human. Only few elaborations on “Genesis” could be found. Central to the story is making the reader believe the main character is human while it is an android [187]. The story further features an android and a human, recognizable as such to the reader, who are forced to engage given their shared imprisonment [188]. In the course of the story, the android, striving for perfection very much like humans do, turns out not to be free from imperfection given its interaction with the human. Thus, eventually, the distinction between the android and the human fades [188, 189]. No closer examination of “Genesis” is possible due to the lack of secondary literature.
In “Do Androids Dream of Electric Sheep?” and the movie “Blade Runner” based thereon, androids, called replicants in the movie, are in their most sophisticated form hardly recognizably different from humans from the beginning of the story, but are refused human status [190]. Whereas “Genesis” is not explicitly set in a violent context, “Blade Runner” is set in a world where replicants are banned from planet earth [176]. Nevertheless, a group is residing among humans, and a formerly retired cop is called back to hunt the androids. Thus, the story revolves around the hunt and the back-and-forth fight between the cop and the group of androids. It remains undisclosed whether the cop might actually also be an android [190]. Socio-economic interpretations agree that “Blade Runner” questions the prevailing class structure and the authority that draws the line of who belongs to society. Phrased differently, the story represents the individual’s anxiety of losing identity and freedom in a dehumanized, meaning technologized, society [191]. As opposed to “Blade Runner”, where humans attack androids, “Robopocalypse” has robots attack humans [179]. Using the introductory thought of the human partly as homo faber, “Robopocalypse” features a general AI named Archos which renders the human from homo faber, the tool maker enhancing its own capabilities, into a servant of technology advancing technology’s abilities [192]. In exemplary cases, the story has humans dissolve into the AI, in consequence losing their identity. Another reading agrees with the argument of a fusion of human and AI [193]. This reading also concludes that human nature changes towards passivity and replaceability through humans’ desire to change their environment with technology and their consequent over-reliance on technology.

3.4.1.4 Relevant Themes in Complementing Science Fiction Works Compilations

Czarniawska and Joerges [176] examine four works (“R.U.R.”, “I, Robot”, “Player Piano” and “Blade Runner”) for the relationship between robots and companies in SciFi. No pattern of themes over time can be detected; however, the analyzed works feature specific conflicts. Those conflicts are also addressed in more general terms in the authors’ subsequent work; thus, no new perspectives can be gained from a detailed elaboration of [176]. In another work, Czarniawska and Joerges [194] develop a taxonomy of 12 SciFi robot works, both literature and movies, which are included based on a definition of robot closely matching the one proposed earlier herein. The identified themes of the taxonomy are presented in Table 13. The taxonomy is separated into categories of “good” and “bad”. Whether a theme is rated as positive or negative is judged by the authors according to the depiction of the theme in the corresponding SciFi work. As can be seen on the right of the table, several of the works included in the taxonomy are also included in the previous section herein. That said, works which were not deemed relevant for the context herein also contribute to the list of themes. Thus, the list of previously identified themes is extended considerably.
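Since the taxonomy that follows categorizes themes as good or bad, its structure also lends itself to simple programmatic queries. The following minimal Python sketch uses an assumed encoding and only a small excerpt of the entries (not the full taxonomy from [194]) to extract the works that appear on both sides — the kind of ambiguity discussed after the table:

```python
# Toy illustration (structure and excerpt are assumptions, not taken verbatim
# from [194]): encode part of the good/bad taxonomy and query which works
# appear under both a "good" and a "bad" theme.

taxonomy = {
    ("good", "Free people from work"): {"R.U.R.", "Player Piano", "2001"},
    ("good", "Give them consciousness"): {"R.U.R.", "I, Robot", "2001"},
    ("bad", "Deprive people of jobs"): {"R.U.R.", "Player Piano"},
    ("bad", "Treat robots as slaves"): {"R.U.R.", "I, Robot", "Player Piano"},
}

def works_on_both_sides(tax):
    """Works listed under at least one 'good' and at least one 'bad' theme."""
    good = set().union(*(ws for (side, _), ws in tax.items() if side == "good"))
    bad = set().union(*(ws for (side, _), ws in tax.items() if side == "bad"))
    return good & bad

print(sorted(works_on_both_sides(taxonomy)))
# → ['I, Robot', 'Player Piano', 'R.U.R.']
```

On this excerpt, every work except “2001” carries both positive and negative themes, which mirrors the ambivalence the analysis below points out.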

What robots can do to people – good:
- Perform all “dirty, dull and dangerous” jobs: R.U.R.; Player Piano; 2001; Star Wars; Blade Runner; The Matrix; Stepford Wives; Interstellar; Seveneves
- Perform jobs that are impossible for human bodies: 2001; Blade Runner; Interstellar; Seveneves
- Perform complex tasks better than people: Player Piano; 2001; Star Wars; Robot-surgeon; Big Hero 6; Interstellar
- Work faster and more efficiently; learn new skills quicker: all 12 works
- Free people from work: R.U.R.; Player Piano; 2001
- Protect and defend people: Star Wars, good droids; Snow Crash, the dogs; Big Hero 6
- Offer companionship, sympathy and care: I, Robot; 2001; Big Hero 6
- Surpass their programming in a manner that serves people: Player Piano; 2001; Star Wars, good droids; Snow Crash, the Librarian
- Take over the world, as people are self-destructive: I, Robot
- Help to save the world, as people are destructive: 2001; Interstellar

What robots can do to people – bad:
- Kill or damage people in fights among groups of people: R.U.R.; I, Robot; Player Piano; 2001; Big Hero 6
- Commit criminal acts: R.U.R.; I, Robot; Blade Runner; Big Hero 6
- Deprive people of jobs: R.U.R.; Player Piano
- Ridicule people: Snow Crash, the Librarian; Interstellar
- Surpass their programming in a way threatening to people: Player Piano; 2001; Star Wars, bad droids
- Take over the world: R.U.R.; The Matrix
- Use people as a source of energy: The Matrix

What people can do to robots – good:
- Give them consciousness (“soul”, “free will”): R.U.R.; I, Robot; 2001
- Give them non-human shapes, making them unthreatening (removing “uncanniness”), and free them from human failings: 2001; Star Wars, R2D2; Snow Crash; Big Hero 6; Seveneves

What people can do to robots – bad:
- Make them human-like, and equip them with human failings: 2001; Star Wars, C-3PO
- Use them to stop women from achieving equality: Stepford Wives
- Treat robots as slaves: R.U.R.; I, Robot; Player Piano

Table 13 Robot SciFi Works Taxonomy (based on [194])

The categorization into good and bad provides a rough understanding of the psychosocial views behind the related themes. Especially insightful are closely related themes that appear on both the negative and the positive side. On one side, robots being faster and more efficient than humans and freeing humans from work is seen as positive, while, on the other side, it is seen as negative as well. This can be understood as mirroring the scientific discussion, elaborated on beforehand, on the ramifications of introducing robots to workplaces. Čapek develops both sides of the categorization in “R.U.R.” in 1920. His work is, thus, posited as outlining the basic points of most future discussions on robots [194]. A further look, on a meta level, at the themes of the categories relating to what robots can do to people reveals that positive themes mostly concern robots working for humans or, in individual cases, protecting humans’ environment from humans, whereas negative themes are mostly concerned with violence. This can be interpreted as SciFi populating psychosocial arguments with work-related positive aspects and violence-related negative ones. Similarly interesting are the good and bad themes concerning what people can do to robots. This is interpreted as the SciFi works in question ascribing independence and identity to the robot, which can be seen as an overarching theme in itself. Making robots more human by giving them consciousness

while perfecting their abilities without irritating humans is depicted as positive, while solely using them for labor and equipping them with human fallibility are negative themes. From this ambiguity emerges the impression that humans in SciFi feel a sense of responsibility to treat robots as human counterparts while avoiding human appearance to ensure their acceptance as peers.

3.4.2 Scenario Themes

In a step towards generating scenario narratives, themes for each scenario are devised. Then, based on the theme, the developments of each of the relevant trends are built into a narrative. As the trend and driving factor analysis shows, workforce emancipation, consumption patterns, and economics of replacement are identified as the most impactful driving forces for future scenarios concerned with the spread and design of HRI. Given the initial elaborations on the trends within each of the three driving forces, these driving forces are currently developing partly in opposite directions and partly in complementary ways. Thus, three scenario themes are developed, each based on the contrasting aspects of the three driving forces while including the complementary aspects of all the other driving forces, including technological protectionism where applicable. As summarized by [126], scenarios are not to be developed with a focus on the most likely outcome. Rather, scenarios are developed to outline possible, plausible futures. In the intuitive logics methodology applied herein, scenarios are required to be of equal probability. To support this exercise, the idea of a continuum between technological determinism and social constructivism for the future development of a technology is used [195]. Whereas technological determinism commonly implies an uncontrollable pathway of technological development which shapes society, social constructivism is understood as the opposite point of view, namely that society shapes technological development. Hence, Dafoe suggests balancing both extremes when studying technologies and society, as either extreme alone leads to implausible analyses. Figure 19 depicts the suggested continuum.

Figure 19 From Social Constructivism to Technological Determinism [195]

Dafoe further analyzes the circumstances under which either social constructivism or technological determinism is more applicable to explain the interdependence between technology and social change [195]. It is posited that in contexts of low or no economic or military

competition, social constructivism is more applicable, whereas in contexts of high economic or military competition, technological determinism holds stronger explanatory power. With these elaborations as preparation, three scenario themes are developed. In the first scenario, the theme centers on growing workforce emancipation. This theme is located on the social constructivism side of the above-described continuum, with a focus on people directly involved in the socio-economic system under investigation – HRI. Technology and society are influenced by the dominance of fears related to loss of employment and the deepening split of society. Innovation efforts are negatively impacted by the fear of losing out in the imminent structural change. The second scenario is based on the emancipation of consumers and their indirect influence on innovation. This theme is located in the middle between the two extremes and is focused on the parts of society not directly involved in the socio-economic system under investigation. Consumers raise concerns related to the casualization and divergence of society in growing numbers, propelling companies to serve those consumers adequately. Thus, while companies reiterate the need to keep up with the competition technologically to maintain competitiveness, the focus of innovation is on human-centered approaches. Lastly, the third theme follows the understanding that the spread and design of technology are mostly shaped by economic competition. Thus, intensifying economics of replacement, with less influence of workforce or consumer emancipation, is the center point of this scenario theme. The psychosocial themes dominant here are companies’ fears of losing out in international competition, forcing them to keep up with competitors’ productivity gains in order to stay in business.
This negative pressure to increase efficiency and productivity serves as an argument towards the workforce and society for the introduction of technology into work processes.

3.4.3 Insights from Interviews

3.4.3.1 Motivation and Methodology

The aim of conducting interviews at this point in the dissertation is to validate the identified driving forces and SciFi themes and to put them into context. Furthermore, initial notions for scenario themes are challenged together with the interviewees so as to ensure plausible and relevant scenario storylines. To this end, a total of five verbal interviews are conducted. The interviewees include members of academia as well as industry. The member of academia is a researcher on the social, philosophical and religious implications of AI (I#1). Furthermore, an interviewee with an academic background in the form of a PhD in Comparative Literature, specializing in the human experience of the future and currently responsible for producing and managing manufacturing and technical documentation, is interviewed (I#2). In addition, a member of an AI-focused innovation platform of an industrial conglomerate is interviewed (I#3). Lastly, two principals from a management consultancy are interviewed (I#4 and I#5). These interviews are deemed to provide adequate spread and depth to validate the driving forces and scenario themes. The interviews are conducted semi-structured: a set of about eight questions is prepared and, depending on the responses of the interviewees, the interviewer has the freedom to intervene with follow-up questions. The interviews are set for 30 to 60 minutes, depending on the interviewees’ availability. Notes are taken throughout each interview. Interviews are not recorded, but memory protocols are completed right after each interview is concluded. Questions and corresponding notes as well as role descriptions of each interviewee can be found in Appendices E through I; further demographic information is not collected. Despite the small number of interviews conducted, the gender share is balanced, with 60 % male and 40 % female interviewees. The cultural background is rather homogenous, with interviewees exclusively from European and North American countries.

3.4.3.2 Results

In line with the balanced world-view suggested by the continuum between technological determinism and social constructivism, the interviews point to the inevitability of a certain degree of technological transformation caused by robotics. This is underscored by one interviewee’s notion that public opinion is unaware of current technological capabilities (I#1). Public opinion is assumed to be shaped more by fringe successes than by readily implementable technological achievements (I#1). In the eyes of the interviewee, raising awareness depends on the creation of transparency – transparency about where and how AI or robots are involved (I#1). On the matter of changing consumer behavior, interviewees from both academia and industry posited that the speed of adaptation and the calls for change are slower than expected (I#2, I#4). While acknowledging that we are currently witnessing a change in our value system driven by ecological considerations, the extension of this change to societal issues is not deemed imminent (I#1, I#4, and I#5). Thus, the societal role in the technological transformation is not to stop the transformation from happening (I#1, I#4, and I#5). In one interview, the conclusion was reached that the question of how strong social policies should be is misplaced (I#4). Rather, society should aim to create an environment of sufficient social security to facilitate risk-tolerance (I#4). As one management consultant elaborated, this facilitation does not necessarily have to be initiated outside companies (I#4). Companies’ management is aware of the current transformation caused by I40 and robotics and, thus, initiates (re-)skilling offers and strategic personnel planning (I#4). However, in general, social forces such as consumer behavior are often slower than expected or agglomerate into a fringe niche movement instead of permeating the entire society (I#1, I#4, and I#5). Nonetheless, COVID-19 and the current pandemic are mentioned once as a possible igniter of global shifts in society (I#2). Repeatedly, education is mentioned as a key to the current transformation (I#2, I#5). Interviewees from academia as well as industry agreed that competency-based rather than application-based training and education is necessary (I#2, I#5). Only with the appropriate competencies can the above-mentioned transparency be used to create societal awareness of the transformation at hand and can society broadly adapt to the technological transformation. Values to be embedded in education are analytical thinking, empathy, the ability to abstract, and philosophical thinking (I#2). As for the privatization of research, one interviewee from academia corroborated the strong influence of the economic-military complex on research agendas through common projects (I#2), suggesting a tendency towards technological determinism based on economic-military competition. At the same time, one management consultant mentioned many companies’ lack of vision for long-term innovation, which could otherwise result from collaboration with academia, given management’s short-sightedness due to shareholder pressure or lack of patience (I#4). This pressure and lack of patience also cause companies not to use the latest scientific insights as long as these are not scalable and readily applicable to the problems at hand (I#3, I#4, and I#5).
Similarly to societal change, technological advancement is judged by interviewees from academia and industry to be perceived as faster than it actually occurs (I#2, I#4). Given the technological challenges, one interviewee deems human-robot teams in particular still unfeasible for the coming decades (I#4). Nonetheless, one management consultant pointed towards the existing business case for HRC, especially in industry domains with non-repetitive tasks such as asset maintenance in the chemical process industry (I#4). One interviewee from academia and one from industry are also asked about the global convergence or divergence of ideas related to robots. Both interviewees agreed that, currently, perceptions of service or domestic robots are much more positive in Japan, for example (I#1, I#5). However, the reasons given for this are often overly simplistic (I#1). Moreover, this dichotomy is not observed by the interviewee from the industrial sector (I#5).

Given the economic pressure to introduce robotics in industry, the interviewee rather sees a spill-over effect of acceptance from the industrial realm to the service realm than the other way around (I#5). An aspect emerging in the discussion with the member of the AI-focused innovation platform is the economic potential of standards and regulation. With more autonomous robots, a dilemma in regulation arises (I#3). If systems learn and evolve over time based on their respective exposure in operation, original certifications for the system might become invalid (I#3). Thus, either evolving systems are not certified for operation, or certifications must allow the system leeway to evolve (I#3). In consequence, standards and regulations can fundamentally influence the adoption of autonomous robotics. Depending on the complexity of certification mechanisms, this can influence the economics behind new technologies. Thus, according to the interviewee, standard-setting bodies have two functions for the companies involved (I#3). For one, transparency is created among companies collaborating in such bodies (I#3). This leads to a form of competition control, as technological progress becomes transparent to other companies (I#3). For another, companies want to show that they are proactively working on standards for innovative technologies in order to prevent innovation-stymieing intervention from public bodies in the form of regulation (I#3). Moving over to the themes discovered in SciFi literature, both interviewees from academia have done research on or with SciFi literature and thus serve well to challenge the found themes. Both corroborate the importance that SciFi and its processing in the media have for the public and scientific discourse on the perception and development of technology (I#1, I#2). This ties back to the first paragraph in this section.
Along with public opinion being dominantly shaped by fringe successes, public opinion and human thinking about technology are deemed to be overly shaped by unreflected thinking about what is functional and what the technology is capable of (I#1, I#2). For example, movies with androids or anthropomorphic robots move researchers to develop human-shaped robots without considering the most applicable shape of the robot for the task at hand (I#1). Achieving the development of such human-shaped robots, in turn, makes the public associate dystopian SciFi themes with those robots (I#1). Similarly, a lack of reflection on the innate capabilities of AI systems causes dystopian assumptions to be made about current AI (I#1). An example is the media coverage of an unexpected move of AlphaGo⁴ (I#1). The move was reported as “divine”; however, given the functioning of current AIs, the abilities of AI systems exclusively depend on the data the AI is trained with (I#1). Therefore, the move was unexpected but still within the possibilities of humans (I#1). The reporting of a “divine” move, however, relates to SciFi themes of technology superseding humans. In addition to the previously discovered themes, another one relating to the educational challenges is discovered in one interview. SciFi works featuring robots, but also SciFi featuring technology in general, repeatedly feature main characters who start to question why certain decisions are made and what the implications of those decisions are (I#2). Thus, apart from reiterating dystopian or utopian themes, SciFi works also serve as a reminder of the questions that have to be asked to keep society and technology in balance (I#2).

⁴ An AI trained to play Go, an East Asian board game.

3.4.3.3 Implications for Scenario Themes

The scenario themes are supported in their foundations by the interviews. However, some adaptations are necessary, while the emotional states motivating each scenario remain plausible. Theme one, focusing on an emancipation of the workforce, is deemed to need a moment of realization in order to significantly impact current levels of workforce emancipation. Therefore, scenario one additionally includes an elaboration on possible trends inducing such a push-back in the workforce. Similarly, the second theme includes an analysis of a possible exacerbation of trends leading to a consumer push-back. In parallel to socially driven innovation, economically driven innovation is judged plausible and influential. In the third theme, no overly exacerbating trends are included; instead, the focus is on a technology trajectory moderated by economic-military competition and current social structures without fundamental changes. Given the lack of insight into the effects of external shocks, such as the current pandemic, on society, and given their unpredictability, such possibilities are not elaborated. Moreover, the interviews show that the outlined scenario themes are not necessarily mutually exclusive. Rather, it is possible that the various scenarios unfold simultaneously, but in different industry segments or companies. Consumer-oriented industries and the companies therein are more likely to be confronted with the elaborations made in scenario two, whereas companies distant from end-consumers or companies producing consumer goods under less public scrutiny might be more affected by the first or third scenario. Similarly, for companies operating in industries in which robots are already widely applied, the economic rationale might tend more towards the third scenario. These differentiations are beyond the scope of this dissertation; indications of these propositions are, however, prevalent herein.
Lastly, the interviews highlight the inevitability of technological progress to some extent, underscoring the limits of radical social constructivism. Thus, all three scenarios feature a technological trajectory driven by economic-military competition, differing in the strength and form of its mediation by societal forces.

3.4.4 Scenario Narratives

3.4.4.1 First Scenario – Workforce Emancipation

As explicated above, current trends and possible external shocks motivating a stronger workforce emancipation are initially described before the narrative of this scenario is illustrated. The development of the narrative is based on the trends identified as most impactful for the spread and design of HRI. From the description of trends above, several exacerbations can be identified which can spark an increase in operator workforce emancipation. For one, a continuous intensification of working conditions can result in emancipation. This intensification can be caused by continuous efficiency-improvement demands directed at the workforce, in this context either as a form of competition between existing technology and the workforce, or as pressure to avoid replacement by technology. Another general trend possibly ending in operator workforce emancipation is the growing disparity within the overall workforce propelled by technology. Based on the trend analysis above, four diverging areas can be identified. Possibilities for increased work-life balance are currently skewed towards office workers. Technology can be used to enable greater flexibility for operators as well, but this is not as readily achievable as for office workers. Furthermore, three closely related trends are facilitated by technology-driven productivity competition. At current levels of worker mobility, salaries as well as skills and thus employment chances are diverging and lead to increasing inequality in society [117]. Individual trends or a combination of them, supported by an increasingly hostile public view on automation as well as robots in general, can thus lead to the workforce stymieing the adoption of proven as well as new technology. As for the narrative of this scenario, Table 14 shows the trends evaluated as important for both the spread and design of HRI and the corresponding narrative in the scenario of workforce emancipation.

Spread of HRI / Elaboration

Technological Protectionism
- Technological standards and regulation: Public intervention in standard-setting bodies is pursued in order to forcibly align standards and regulations with the fears of the anxious workforce.
- Reshoring production: Reshoring activities are reduced as hesitation towards innovation-led relocation persists.

Consumption Pattern
- Mass-customization consumption: Consumers are increasingly price sensitive and thus only choose mass-customized products in case they offer savings over alternatives.
- Dematerializing consumption: Consumers accept dematerialized consumption offers in case they reduce their expenditures.
- Conscious consumption: Conscious consumption is a niche behavior in the emancipated part of the privileged of society.

Workforce Emancipation
- (Re-)Skilling offers: As innovation does not spread widely through industry, (re-)skilling is fought over by labor unions and companies. The former want to enable upward social mobility for the workforce, while the latter block large-scale (re-)skilling given the economic pressure in industry. Thus, (re-)skilling is kept to a viable minimum to allow the continuation of business and maintain workforce engagement.
- Organization of labor: Labor unions stimulate hesitation in the workforce towards innovation and automation. Thus, each investment decision of the companies is fought over by management and labor representatives. This creates an atmosphere of hostility in which each side tries to circumvent the other.
- Rising real salaries/wages: Real salaries stop rising and stagnate, increasing the price sensitivity of consumers.

Economics of Replacement
- Falling cost of technology: Depending on the development in other markets, costs of technology will fall more or less sharply, putting more or less pressure on the domestic industry to implement technology as well.
- Maturing technology: Similar to falling costs, the development of maturity will depend on advances made in other economies. The more technology matures without being implemented, the greater the economic pressure on companies will be.
- Demographic change: Demographic change eases the economic pressure in part, as retiring workers cannot be replaced with younger workers given a lack of young people. Thus, technology can be introduced in their place.
- Privatization of research: The privatization of research slows given the economic pressure on companies. Highly applied research is conducted on existing systems rather than driving innovation and creating long-term innovation strategies.

Driving Forces
- Workforce Emancipation: In this scenario, the emancipation of the workforce leads to innovation hesitation and economic pressure on companies. Companies' management and labor unions hold diverging views on how to maintain competitiveness and jobs.
- Economics of Replacement: The economics of replacement lead to increased economic pressure on companies as the workforce stymies them from unfolding. While trends such as demographic change offer some relief, the amount of pressure strongly depends on the development of innovative technologies in other economies.
- Consumption Pattern: In the given environment, consumers become increasingly price sensitive overall, thus becoming a part of the economic pressure on companies.

Table 14 Workforce Emancipation Scenario - Trend Narratives

This scenario uses the “automationless” scenario in [123] as inspiration to shed light on the wider sociological processes and trends associated with such a development. The dominant discourse in this scenario is the one between companies facing economic pressure to become more efficient and productive while the organization of labor stymies technology-facilitated efficiency gains. The emancipated workforce therein sees the replacement of their work with technology. One area where the divergence of views is most prominent is the possibility of (re-)skilling. Labor unions see it as a chance for the workforce to gain upward social mobility by becoming eligible for jobs with higher skills demands. At the same time, economic pressure on companies limits (re-)skilling to a minimum in order to maintain a viable level of competitiveness. However, this in turn renders the workforce unable to adopt large-scale technological innovation, as it lacks the required skills. Thus, technological development in those companies is held back by the exhibited gridlock.

3.4.4.2 Second Scenario – Moderated Innovation

While socially conscious consumption remains a niche development in the other two scenarios, given the dominance of the economic pressure on companies and society, herein a growing group of aware consumers pushes companies to innovate in order to accommodate demands for socially conscious production. Similar to the scenario of workforce emancipation, and given the current levels of consumer emancipation, an exacerbation of trends is needed to induce the shift in attention required for this scenario. In particular, an intensification of working conditions and public awareness campaigns shedding light on it are one possibility for such an attention shift. Another option is an increasing divergence within society, which raises attention from groups of society not directly, negatively affected by this divergence. While a differentiation of whether the spark is ignited by consumers, by innovative companies, or by evident societal divergence is out of scope here, elaborating the ensuing consequences for the other relevant trends is not. Therefore, Table 15 lists the narrative developing in the scenario of consumer emancipation. The impetus for public attention does not necessarily have to come from consumers, e.g., activist consumers creating awareness on social media; innovative companies can also use a niche for differentiation and thus spark consumers to build an awareness. Companies can use innovation as an employment differentiator catering to a conscious consumership and workforce, thus driving this scenario just as much as a consumer push-back does.


Spread of HRI / Elaboration

Technological Protectionism
- Technological standards and regulation: Standards and regulation are predominantly shaped by technical and human factors equally, enabling productivity-harnessing innovation while considering societal implications.
- Reshoring production: Production is reshored to protect IP and used to drive domestic innovation promoting collaborative HRI.

Consumption Pattern
- Mass-customization consumption: While innovations in collaborative HRI allow for efficient mass-customization, consumers in growing numbers shun products produced in masses. Thus, consumership drives innovation beyond current requirements in flexibility and efficiency.
- Dematerializing consumption: Where possible, consumers prefer dematerialized consumption. Starting in urban areas, this trend spreads into rural areas.
- Conscious consumption: Complementing the avoidance of mass-customized products is the active search for socially conscious products, with activist consumers educating other consumers about socially unconscious companies and thus promoting trickle-down effects towards conscious consumption throughout society. Simultaneously, innovative companies provide the corresponding supply. Whether the spark for such a movement ignites in the consumership or in innovative companies leading the way is out of scope here. Eventually, this drives innovation to accommodate consumers' wishes while maintaining efficiency.

Workforce Emancipation
- (Re-)Skilling offers: (Re-)skilling is an integral part of accommodating technological innovations in processes. Such offers are also part of socially conscious company branding.
- Organization of labor: Labor unions are engaged in integrating innovation and shaping accepted work scenarios, moderating the innovation trickle-down in the company and demographic change.
- Rising real salaries/wages: Wages continue to rise given the upgraded job profiles driven by innovation.

Economics of Replacement
- Falling cost of technology: The trickle-down of technological innovation causes older technologies to continue to become cheaper, leading to a continuous upgrading of the human's role.
- Maturing technology: Innovation drives the maturation of technology supporting socially acceptable use-cases and enables their adoption.
- Demographic change: Demographic change facilitates the introduction of productivity gains through automation as the workforce shrinks, while job profiles for younger workers can be upgraded.
- Privatization of research: While research becomes increasingly private, collaboration with public entities is increasingly sought after to incorporate human factors in the research.

Driving Forces
- Workforce Emancipation: The workforce is up- and reskilled to sustain innovation. Workforce emancipation can mainly be expected in the process of automation efforts aimed at alleviating demographic change.
- Economics of Replacement: The economics are a side effect in this scenario. The main driver for technology adoption is holistically thought-through innovation.
- Consumption Pattern: Increasing awareness and conscious consumption shift companies' attention to innovation-led, socially acceptable technology adoption.

Table 15 Consumer Emancipation Scenario - Trend Narratives

It has to be said that the widespread realization of collaborative HRI use-cases will significantly depend on the further development of technology. Nonetheless, the trend narratives illustrate the innovation-driven development of productivity gains and thus economic competitiveness. However, instead of focusing on pure economic pressure to achieve this, the emancipation of consumers induces trickle-down strategic adaptations towards cooperative and collaborative approaches to robotic technologies. Thus, research agendas are influenced to account for humans in the loop of technology and to consider human factors as well as societal implications of innovation strategies. The push towards innovation leads companies to upgrade job profiles in order to allow for the adoption of the innovation in the company. Thus, in combination with the social awareness campaigns and facilitated by demographic change, projected societal divergence is

mitigated as lower skill-profile jobs are being automated. At the same time, widespread skill upgrades lead to greater education on the capabilities of technology, thus easing hostile views towards and fears associated with technological innovation.

3.4.4.3 Third Scenario – Economics of Replacement

The scenario of pure economics of replacement shaping the technological future of industry, as mentioned earlier, has a strong emphasis on technological determinism. The trends described as part of the driving force of economics of replacement are seen as the outcome of the economic-military competition described as the driver of technological determinism in [195]. As opposed to scenarios one and two, in which a stronger societal influence on technology adoption is described, little societal impact is perceivable here. The main argument for disregarding societal issues brought forward by societal stakeholders is the economic need for technology, and more specifically automation, to remain competitive. A push-back from society is limited to its fringes, as a moment of realization is absent in this scenario. The precise mechanisms resulting in this absence might include policy, public disinterest, or companies' political power. A detailed investigation of the facilitating mechanisms, again, is out of scope herein. Thus, Table 16 describes the narratives of each of the relevant trends for the scenario of dominant economics of replacement.


Spread of HRI / Elaboration

Technological Protectionism
- Technological standards and regulation: Standards and regulation are projected to be shaped according to the economic interests of key players, while trying to stymie efficiency-led technology adoption as little as possible.
- Reshoring production: By reshoring their production, technology leaders aim to secure their IP and to indirectly allay labor unions fearing job losses. The prime focus is not to drive innovation towards human-centric HRI but to pursue productivity gains.

Consumption Pattern
- Mass-customization consumption: Through the spread of technology, efficiency gains are used to lower prices, inducing consumers to consume advertised mass-customized goods.
- Dematerializing consumption: Dominant market players leverage their strength into markets of dematerialized consumption, aspiring to emulate the market dominance of companies such as Apple, Alphabet, Amazon, or Facebook.
- Conscious consumption: The argument of economic competitiveness stands in contrast with individual calls for responsible consumers who have the influence to change production patterns (similar to current developments in climate change and activist consumer behavior). Thus, conscious consumption is a niche behavior in the emancipated part of the privileged of society.

Workforce Emancipation
- (Re-)Skilling offers: Only the (re-)skilling necessary for the maintenance of competitiveness is offered, while the costs of such offers are kept as low as possible.
- Organization of labor: Current labor organizations are maintained or weakened, e.g., by outsourcing labor through contracted labor. Arguments related to the economic pressure faced by companies are used to promote more liberal labor regulations and a more liberal treatment of labor.
- Rising real salaries/wages: Real salaries and wages that have risen provide the economic incentive to automate further. This leads to a split into low- and high-paid workers and, overall, to an increased price sensitivity of consumers.

Economics of Replacement
- Falling cost of technology: The falling cost of technology provides the economic incentive to automate and to involve labor only where necessary.
- Maturing technology: Maturing technology provides the technological capability to increase the degree of automation.
- Demographic change: Demographic change helps companies dismiss workers if necessary, as more people are of an age eligible for (early) retirement.
- Privatization of research: Research is used to advance technology according to the economic incentives of key market players. The increased privatization of this research leads to a lack of interdisciplinary research on social implications, as well as of insights from human factors, in the development of future technologies.

Driving Forces
- Workforce Emancipation: The emancipation of the workforce is focused on the minimally viable maintenance of jobs rather than the quality of jobs, given the economic pressure on companies. This reality is accepted, although grudgingly, as inevitable.
- Economics of Replacement: The economic pressure results in increasing automation and vice versa.
- Consumption Pattern: Given the widespread price sensitivity of consumers and the acceptance of reality as inevitable, no dynamic towards change is created.

Table 16 Economics of Replacement Scenario - Trend Narratives

Similar to scenario one, (re-)skilling is contested. As the company is forced to introduce technology and automate, it is similarly forced to train its workforce for the new technologies. However, as indicated in the table above, this training is kept to a minimum due to the economic pressure on the company. Thus, worker mobility is not significantly increased, and more innovative technology cannot be introduced either. Demographic change serves as an easing factor, so that the pressure on companies does not fully fall on workers; nonetheless, the affected parts of society are constantly confronted with the economic pressure. This results in an increased price sensitivity which is reflected in the consumption pattern exhibited by those parts of society. In conclusion, the ready adoption of proven and tested technologies ensures that companies keep up with the competition. However, the design of those proven systems is

only marginally reconfigurable. Thus, considerations around human factors and societal issues will not be seamlessly integrable. Lastly, in this scenario, catching up with innovation leaders will be a significant challenge. Technological leadership is instead geared towards harnessing efficiency, with a focus on either the technology or the human providing these gains. This central rationale thus also induces other companies in the same industry to apply it, since the economic precedent is set. This leads to an intensification of competition, propelling the described rationale even further. Thus, the relation of this scenario to technological determinism is clear.

3.5 Scenario Transfer

3.5.1 Methodology

3.5.1.1 Approach

In order to provide a sound transfer of the described scenarios to aspects of HRI, the aspects under scrutiny have to be analyzed in further detail. The scoping literature review provided the necessary understanding as to which aspects offer detailed insights into the design of HRI. To gain a sufficient understanding of the three design aspects, two kinds of literature reviews are conducted. The three aspects for further review are the trust of humans in robots, multimodal communication between the human and the robot, and the role of the human and the associated control of the human in the interaction. The reasoning for applying a specific kind of literature review to each of these aspects is provided in the introduction to the scenario analysis. It is argued that these three aspects express fundamentals of the interaction between humans and robots and express an HRI designer's view on HRI, thus motivating a transfer of future scenarios to those aspects. The following two sections provide the fundamental characteristics of the two different literature reviews employed herein.

3.5.1.2 Theoretical Literature Review

To describe the approach used in the theoretical literature reviews, the taxonomy by Cooper [23] is employed. The same taxonomy is also used in the initial scoping literature review. Table 17 shows the elaborated taxonomy for the theoretical literature review herein.

Characteristic / Chosen Categories
- Focus: Research Outcomes; Theories; Practices or Applications
- Goal: Integration; Identification of Central Issues
- Perspective: Neutral
- Coverage: Exhaustive Coverage
- Organization: Conceptual
- Audience: General Researchers

Table 17 Theoretical Literature Review along Taxonomy (based on [23])

The theoretical literature reviews herein incorporate research outcomes, meaning empirical research, as well as theories and practices/applications, in order to integrate the research and identify central issues, with a special focus on issues for future research to resolve. The theoretical literature reviews take a neutral perspective. Through a systematic identification of the research to be included, the coverage is exhaustive. The works are organized along concepts to address an audience of general scholars, in order to later facilitate the transfer of scenarios for this audience. These elaborations match the description of theoretical reviews in [22]. Theoretical reviews are devised to build explanations by employing a comprehensive search strategy that takes a broad scope of questions into consideration. To synthesize findings, content analysis and/or interpretive methods are to be applied. In order for the theoretical literature reviews to be systematic, i.e., comprehensive, the approach of Levy & Ellis [24] is chosen, as suggested in [22]. Three fundamental steps are necessary for a systematic literature review, namely collecting and screening literature, processing the literature, and finally creating the literature review. In the first step of collecting literature, three sub-processes are to be followed. Gathering literature starts with keyword (KW) search in scientific databases [24]. Depending on the field of research, several databases are suggested to be included in the search. Herein, this author chooses to focus on the Web of Science5 and the Scopus database for KW search.
Due to previous experiences during the scoping literature review, these two databases are deemed to provide sufficient breadth and depth for the subsequent search steps. Following KW search, backward and forward searches are conducted [24]. In backward search, three possibilities for further literature search are suggested. For one, the reference lists of publications found in the KW search are scanned for further relevant works. Depending on the depth necessary for grasping a topic, several reiterations of backward reference search can be conducted on the reference lists of references. A second backward search approach is to track backwards what the authors

of initially identified literature have published previously. Lastly, in the course of backward reference search, KWs previously used in the research field can be identified, with which reference lists can then be screened. Complementing backward search, forward search is used to find literature published after the initially identified literature. In forward search, forward reference and forward author search are employed. They work as in backward search, but in the opposite direction: searching for literature that references the literature identified in the KW search, and searching for more recent publications by identified authors. Throughout the gathering of literature, in order to ensure a high-quality review, the source of the literature is to be scrutinized. It is recommended to focus on leading, peer-reviewed journals and peer-reviewed conference proceedings from professional or research associations. The usage of other sources should be limited to factual information. Another consideration necessary throughout the gathering of literature, according to Levy & Ellis, is to check the applicability of the literature under scrutiny to the scope set in the beginning. For this, a screening of title and abstract is used in order to determine exact applicability, remote applicability, or no applicability to the set scope. Supporting this effort, a coding scheme is developed and applied to the found literature. Eventually, the authors suggest that literature gathering is complete when newly discovered literature does not provide novel insights into the topic. This includes finding no new arguments, methodologies, findings, authors, and studies. A further possibility to detect completion is if no new literature is found or the literature found has already been screened before.

5 https://apps-webofknowledge-com.focus.lib.kth.se/WOS_GeneralSearch_input.do?product=WOS&search_mode=GeneralSearch&SID=F1fXlL4eJUvOCCqu4cH&preferencesSaved=
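The gathering procedure described above (KW search, followed by repeated backward and forward searches until saturation, i.e., until a search round surfaces no unseen literature) can be illustrated as a small program. This is a sketch only, not part of the thesis methodology; the function names and the toy citation graph are hypothetical stand-ins for the manual database work in Web of Science and Scopus.

```python
def saturated(new_items, seen):
    """Gathering is complete when a search round yields nothing unseen."""
    return all(item in seen for item in new_items)

def systematic_search(keyword_hits, backward, forward):
    """Expand an initial KW-search result set via backward/forward search.

    `backward` and `forward` are callables mapping a paper id to the ids
    found by backward (reference-list) and forward (citing-work) search.
    """
    seen = set()
    frontier = list(keyword_hits)  # step 1: results of the KW search
    while not saturated(frontier, seen):
        seen.update(frontier)
        next_frontier = []
        for paper in frontier:
            next_frontier += backward(paper)  # references of references
            next_frontier += forward(paper)   # newer, citing literature
        frontier = [p for p in next_frontier if p not in seen]
    return seen

# Toy citation graph: A cites B, B cites C; forward search inverts it.
refs = {"A": ["B"], "B": ["C"], "C": []}
cites = {"A": [], "B": ["A"], "C": ["B"]}
found = systematic_search(["A"], lambda p: refs[p], lambda p: cites[p])
# Starting from "A" alone, backward search reaches "B" and then "C".
```

In practice each "round" corresponds to one manual iteration of reference-list and citing-article screening; the loop simply makes the saturation stopping criterion of Levy & Ellis explicit.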
In the second overall step, processing, six sub-processes are recommended: first, know the literature; second, comprehend the literature; third, apply; fourth, analyze; fifth, synthesize; and sixth, evaluate. Table 18 lists the six steps and the related activities.

Processing Step / Related Activities
- Know the literature: Listing, defining, describing, and identifying
- Comprehend the literature: Summarizing, differentiating, interpreting, and contrasting
- Apply: Demonstrating, illustrating, solving, relating, and classifying
- Analyze: Separating, connecting, comparing, selecting, and explaining
- Synthesize: Combining, integrating, modifying, rearranging, designing, composing, and generalizing
- Evaluate: Assessing, deciding, recommending, selecting, judging, explaining, discriminating, supporting, and concluding

Table 18 Literature Processing Steps and related Activities (based on [24])

Lastly, in the output step, the goal is to report the accomplished work [24]. Thoughts and results from the input and processing steps are to be documented in a way that elucidates the topic in scope.

To complement these general remarks on how to conduct a systematic literature review, the widely cited work by Denyer & Tranfield [196] is additionally consulted. However, as their guide addresses doctoral students, not all steps can be replicated herein due to resource restrictions. Nonetheless, their guide is deemed relevant as it provides additional descriptions of how to achieve transparency, inclusivity, and explanatory power. To that end, and just as done herein, Denyer & Tranfield [196] recommend novice scholars to conduct a scoping literature review in order to be able to determine the scope of a systematic literature review in the research field. In conclusion of their work, special attention is paid to formulating the RQs as concisely as possible, so as to guide the literature screening well. In a next step, literature has to be located [196]. In that step, both papers agree on complementing database KW search with manual literature search such as cross-referencing or forward and backward search [24, 196]. After gathering the literature, it has to be evaluated and selected [196]. Explicit selection criteria have to be set in order to ensure repeatability and transparency. It is recommended to conduct small-scale searches to determine exclusion and inclusion reasons, which then form the selection criteria. Whether the literature at hand provides relevant information to answer the RQ should always be part of the selection criteria. Quality appraisal is named as an addition to the selection criteria for evaluating the gathered literature. This, however, is deemed not feasible for the scope of the systematic reviews herein. Following the selection, the analysis and synthesis of the selected literature break down the publications and describe the relations between them. The aim in this step is to rearrange information from the individual sources to reveal knowledge not accessible from reading the literature independently.
To accomplish this, data from the individual works has to be extracted. Table 19 shows the suggested questions for data extraction. While Denyer & Tranfield [196] suggest conducting the data extraction with several scholars, this is deemed too laborious for this dissertation. Thus, the systematic literature reviews herein rely on data extraction conducted solely by this author. A further adaptation is made: as a 13th question, the data extraction herein also asks what possibilities are mentioned for future research. In conclusion of the analysis and synthesis step, the extracted data is to be reflected upon and examined for commonalities and differences in order to eventually synthesize the data.

1. What are the general details of the literature - author, title, journal, date, language?
2. What are you seeking to understand or decide by reading this?
3. What type of literature is it (philosophical/discursive/conceptual, literature review, survey, case study, evaluation, experiment, etc.)?
4. What are the authors trying to achieve in writing this? What are the broad aims of the literature? What are the literature's research questions and/or hypotheses?
5. How is the literature informed by, or linked to, an existing body of empirical and/or theoretical research?
6. In which contexts (country, sector and setting, etc.) and with which people (age, sex, ethnicity, occupation, role, etc.) or organizations was the study conducted?
7. What are the methodology, research design, sample, and methods of data collection and analysis?
8. What are the key findings?
9. How relevant is this to what we are seeking to understand or decide?
10. How reliable/convincing is it - how well-founded is this theoretically/empirically (regardless of method)?
11. How representative is this of the population/context that concerns us?
12. In conclusion, what use can I make of this?
13. What possibilities are mentioned for future research?

Table 19 Data Extraction Questions (based on [196])

As the final step, reporting and using the results remains [196]. Both [24] and [196] state that this step is about reporting all previous steps. Denyer & Tranfield [196] suggest a structure of motivation, methodology, findings, discussion, and lastly, a conclusion with a summary, limitations, recommendations, and future possibilities. As the RQ, search string, and selection criteria depend on the topic in focus, those details of the employed methodology are described in the respective sections of the theoretical literature reviews (3.5.2.2 and 3.5.2.3). Similarly, the focus and scope are elaborated there for each topic respectively.
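The twelve extraction questions from Denyer & Tranfield plus the 13th question added herein can be mirrored as a structured record, so that every reviewed publication is extracted into the same fields. This is an illustrative sketch only; the field names are this example's own invention and do not appear in [196].

```python
# Hypothetical structured record mirroring the 13 data extraction
# questions of Table 19 (Q1-Q12 based on Denyer & Tranfield, Q13 added
# by this author). One record is filled in per reviewed publication.
from dataclasses import dataclass, fields

@dataclass
class ExtractionRecord:
    general_details: str            # Q1: author, title, journal, date, language
    purpose_of_reading: str         # Q2: what we seek to understand or decide
    literature_type: str            # Q3: conceptual, review, survey, case study...
    authors_aims: str               # Q4: aims, research questions, hypotheses
    link_to_existing_research: str  # Q5: ties to empirical/theoretical work
    study_context: str              # Q6: country, sector, people, organizations
    methodology: str                # Q7: design, sample, data collection/analysis
    key_findings: str               # Q8
    relevance: str                  # Q9: relevance to the review's purpose
    reliability: str                # Q10: theoretical/empirical grounding
    representativeness: str         # Q11: of the population/context of concern
    usefulness: str                 # Q12: what use can be made of it
    future_research: str            # Q13: possibilities for future research

# The record has exactly one field per extraction question.
n_questions = len(fields(ExtractionRecord))
```

Using a fixed record per publication makes the single-extractor process transparent and repeatable, which is the stated purpose of the explicit selection and extraction criteria.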

3.5.1.3 Descriptive Literature Review

Similar to the other literature reviews conducted herein, the descriptive literature review is characterized along the taxonomy in [23], as seen in Table 20. Following the description in [22], descriptive literature reviews are used to analyze empirical studies in a representative manner without appraising the quality of the included studies. However, this approach is adapted slightly to also descriptively summarize theories and practices or applications relevant to the research field under scrutiny. The aim is to gain a comprehensive view of the investigated topic by including relevant literature representatively.

Characteristic – Chosen Categories
Focus – Research Outcomes; Theories; Practices or Applications
Goal – Integration; Identification of Central Issues
Perspective – Neutral
Coverage – Representative; Pivotal
Organization – Conceptual
Audience – General Researchers

Table 20 Descriptive Literature Review along Taxonomy (based on [23])

The descriptive literature review, thus, aims to include empirical research outcomes, theories as well as practices and applications to describe the topic in focus holistically. The

identified pieces of literature are then integrated and central issues identified to give the reader an understanding of the development of the topic and the current areas of research interest. The review takes a neutral perspective, covering the existing body of knowledge representatively and pivotally. Given time constraints, not all literature is included, but the decisive pieces are, so as to follow the development of the field of research. The literature is ordered along the concepts and contents of the topic. The review addresses general researchers to ensure the audience can follow the elaborations made later on in the scenario transfer. In order to identify relevant literature, the initial devising piece of literature in the field of investigation has to be identified. Consequently, applying forward search, the development of the topic and its related concepts and ideas is described. However, the selection criteria are not defined specifically; the evaluation for inclusion is made in the context of the development of the topic under investigation.

3.5.2 Literature Reviews

3.5.2.1 Descriptive Literature Review – Role of Human

3.5.2.1.1 Research Questions

This descriptive literature review aims to understand the different roles humans are attributed in HRI in the literature. More specifically, the interest lies in understanding the meta-functions allocated to the different roles found. Especially with robots said to transition from tool to teammate, the human role in the interaction will most likely change as well. Thus, understanding the various roles humans take in interaction with robots will be of great importance moving forward with interaction design. The RQs for this descriptive literature review are, therefore, as follows:

RQ1.1: What are the human role assignments found in HRI literature?
RQ1.2: What is the control associated with those roles?

3.5.2.1.2 Literature Search Methodology

This literature review is a descriptive literature review as described above in Section 3.5.1.3. Thus, literature is not searched comprehensively and, therefore, no rigid search string is developed. Rather, starting from a publication, the development of the research topic is covered by including pivotal and representative research publications. Therefore, the first step is to identify the inception of the topic, which is accomplished through the scoping literature review. In it, the taxonomy of Yanco & Drury [31] and, through it in turn, the work of Scholtz [197] are identified as the starting point for this literature review. Thus, with these works, initial forward search is conducted through the Google Scholar directory. Search terms used are “role

of human”, “human role”, “human status” and “status of human”. All works also undergo backward search by scanning their reference lists and bibliographies for further relevant publications.

3.5.2.1.3 Selection Criteria

As described above in the methodology section on descriptive literature reviews, no explicit selection criteria are set for this review. This author includes publications according to their perceived relevance for the RQs. The number of citations serves as a reference point as well. However, the more recent publications are, the fewer citations can be expected; thus, this criterion cannot serve as a definite selection aspect. However, only articles in peer-reviewed journals and proceedings from peer-reviewed conferences are included. This is ensured by visiting the journal’s or conference’s homepage and checking it for indications of a peer-review process.

3.5.2.1.4 Results

While Scholtz [197] references his own work when introducing the five roles in this publication, this one is his first on the topic on a peer-reviewed platform. The five human roles he devises for HRI are the human as supervisor, the human as operator, the human as mechanic, the human as bystander, and lastly the human as teammate. Citing Scholtz’s work, Yanco & Drury [31] also refer to these five human roles in HRI. However, their interpretations differ slightly. While Scholtz [197] attributes to the supervisor monitoring, controlling, and intervention functions on an overall goal level, Yanco & Drury [31] only include the monitoring function. Both publications agree that a human operator interacts more with the robot than the supervisor and has to correct the robot’s actions to align them with the interaction goals [31, 197]. Scholtz [197] remains open as to whether the operator adjusts parameters of the robot or directly teleoperates it, whereas Yanco & Drury [31] give the operator only the teleoperation function to adjust actions. Both agree that the mechanic has the function to change physical aspects of the robot [31, 197]. Similarly, the role of bystander is explicated equally by both as the role with no active interaction with the robot, but with a requirement to understand the robot’s behavior [31, 197]. Lastly, the human as teammate is understood by Yanco & Drury [31] as the role in which the human and the robot work towards the same goal. Scholtz [197] attributes to the human teammate the function of adapting the robot’s actions by giving commands, while highlighting that overall goal adjustment resides only with the human supervisory role. Moulières-Seban et al. [198], with reference to Scholtz’s work, also describe the supervisor, operator, bystander, and mechanic (in their work called maintenance operator) in the same way as Scholtz.
However, instead of the teammate, the authors introduce the coworker role, in which the human shares the task and the object under manipulation with the robot, resembling joint action and, thus, collaboration. The authors introduce a sixth role – the designer or programmer, who develops the robot until it is deployed into operation. Focusing more on operational roles in HRI rather than maintenance and preparatory roles, Ong et al. [199] introduce five relationships with varying degrees of robot autonomy, increasing with each relationship introduced. In the master-slave relation, the robot imitates each human action, thus representing a human teleoperator role. In the supervisor-subordinate relation, the human plans robot tasks which are then executed in confined autonomy by the robot; this can clearly be associated with the supervisory role described by Scholtz above. In the partner-partner relationship, the robot is tasked to help the human fulfill the task at hand. While nothing is said about the human’s responsibilities in this relationship, it can be associated with Yanco & Drury’s teammate role description. In the teacher-learner relationship, the human takes the role of an instructor exhibiting task performance to the robot, which then adopts the exhibited performance. As Scholtz [197] specifically mentions that learning, both on the human’s and on the robot’s side, is not included in their consideration, no association of this relationship with the original role descriptions can be made. Lastly, the robot is given full autonomy [199], which indirectly ascribes to the human the role of the bystander. Also focusing on operational role descriptions, Onnasch & Roesler [200] devise five human roles while referencing Scholtz. The supervisor and bystander roles herein are understood equally to Scholtz’s eponymous roles.
However, Onnasch & Roesler explicitly understand the operator as the teleoperator of the robot, standing higher in the hierarchy. Lastly, the roles devised as collaborator and cooperator are inspired by and differentiated from the original teammate role. Whereas the collaborator engages in joint action to reach common goals with the robot, much as devised in the definition of HRC in the scoping literature review, the cooperator shares the overall and lower-level goals with the robot, but they do not engage in joint action. Thus, Onnasch & Roesler open up the possibility to investigate the relation between collaborator, cooperator, and teammate roles. Bradshaw et al. [42] specify the teammate role by suggesting a set of policies for the human-robot team interaction. Similar to the original suggestion of Scholtz, Bradshaw et al. require a team leader to whom the robot refers for instructions. However, the robot, as teammate to the human, can also initiate actions itself, extending the original understanding of teammate. Crucially, this makes the human-robot team, as devised by Bradshaw et al., dynamic in terms of robot and human autonomy. As is shown in the scoping literature review, such systems are also referred to as mixed-initiative. Bradshaw

et al. [201] as well as Jiang & Arkin [56] characterize mixed-initiative systems by goal-oriented actions undertaken by several actors, where it is not fixed beforehand who will undertake which tasks. Rather, any of the actors can take the initiative and execute an action which it is capable of [56, 201]. Jiang & Arkin [56] further devise levels of mixed initiative, which are posited to range over mixed initiative on the level of execution, planning, or goal-setting. Thus, as Figure 20 shows, seven distinct human roles in HRI are delineated herein.

Figure 20 Human Roles in HRI – supervisor, team-leader, collaborator, co-operator, controller, instructor, and bystander, with “teammate” as an overarching label

The supervisor role carries the functions of goal setting, constant monitoring, and, in case the robot’s actions have to be adapted, tele-operated control. The human role as team-leader is only relevant in mixed-initiative systems in which the initiative over either goal setting or planning is not mixed. Thus, the team-leader has the function to set the overall goal and/or plan actions, but not the function of directly interacting with the team. In both the collaborator and co-operator roles, the human can be seen as a teammate to the robot, as at least the execution is mixed-initiative in such systems. The roles are differentiated in line with definitions made in the scoping literature review: whereas the collaborator engages in joint action with the robot, the co-operator engages only in joint effort. Depending on the extent to which aspects of the interaction (execution and possibly planning) are designed as mixed-initiative, the human’s control over the robot varies. The controller role gives the human the initiative over planning and execution, where the focus of the role is to command the robot so as to instigate an action by it. By contrast, in the instructor role, the human exhibits the planning and execution of an action to the robot, which then executes it independently. Lastly, in the role of the bystander, as delineated previously, the human does not actively interact with the robot, meaning that the human’s actions do not directly command or instigate the robot’s actions. Thus, in conclusion, only in the supervisor, collaborator, and co-operator roles is control over the robot dynamic, whereas control in all other roles is static.
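The role-control mapping concluded above can be summarized compactly. The following sketch is purely illustrative; the dictionary representation is this author's shorthand and not part of the reviewed literature:

```python
# The seven human roles delineated in Figure 20, mapped to whether the
# human's control over the robot is dynamic or static, per the conclusion above.
HUMAN_ROLES = {
    "supervisor":   "dynamic",
    "team-leader":  "static",
    "collaborator": "dynamic",
    "co-operator":  "dynamic",
    "controller":   "static",
    "instructor":   "static",
    "bystander":    "static",
}

# Roles in which control over the robot varies with the interaction design.
dynamic_roles = [role for role, control in HUMAN_ROLES.items() if control == "dynamic"]
```

Such a mapping makes the static/dynamic distinction machine-checkable when, e.g., tabulating reviewed publications against role assignments.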

3.5.2.1.5 Limitations and Future Work

While this literature review only includes a limited number of works, the development of human roles in HRI seems to be covered only to a limited extent in the literature. However, as is shown, a uniform understanding is yet to emerge. Thus, this dissertation delineates one such understanding, which resolves the aspects of the introduced literature. Given the nature of descriptive literature reviews, comprehensiveness cannot be claimed, though. Future work should work towards the

delineated understanding of human roles. Especially the aspects of control and meta-level functions are still to be advanced. This should help investigate humans’ perception of their roles in HRI.

3.5.2.2 Theoretical Literature Review – Trust

3.5.2.2.1 Research Questions

Hancock et al.’s [91] work on factors influencing trust in HRI is taken as the starting point for this literature review. The conducted meta-analysis, while limited by the number of studies that could be included, reveals the importance of the robot’s performance and its attributes, such as type, size, and behavior, for humans’ perceived trust. In order to be able to project the abstract future scenarios from above onto trust, connections should be perceivable. Thus, the list of trust-relevant factors synthesized by Hancock et al. is compared to the trends and themes discussed above. Several associations are perceivable, as is shown in Table 21.

Trust Factor from [91] – Associated Trend or Theme
Workload – Working intensity
Age – Demographic change
Attitude towards robots, appearance, anthropomorphism – SciFi taxonomy from [194]
Expertise, prior experience – (Re-)Skilling offers
Level of automation – Replacement

Table 21 Association of Trust-relevant Factors and discussed Trends and Themes

To start with, a fundamental understanding of the concept of trust has to be established. Thus, the first RQs for this review are:

RQ1.1: What constitutes trust?
RQ1.2: How is trust modelled?

Based on the comparison above, the following further RQs are posed for the literature review:

RQ2: How is trust affected by workload in HRI?
RQ3: How is trust affected by age in HRI?
RQ4: How is trust affected by attitudes towards robots, appearance, and anthropomorphism in HRI?
RQ5: How is trust affected by expertise and prior experience in HRI?
RQ6: How is trust affected by the level of automation?

Moreover, to support the other two literature reviews, two more RQs are posited in order to guide the selection process of the revealed literature:

RQ7: How does trust relate to communication in HRI?
RQ8: How does trust relate to control and the role of the human in HRI?

3.5.2.2.2 Search String

The scoping literature review reveals the establishment of the term “trust” in HRI literature. Therefore, the string is built around the term “trust”, as it can be assumed that relevant literature not mentioning “trust” can still be discovered in the complementary forward and backward search. This is assumed despite the finding that Hancock et al. [91] include several studies in their meta-analysis that do not feature “trust” in their titles. Given that trust is found to be broadly discussed in the interaction of humans with automation and autonomy, as well as on conceptual levels, the string does not include references to the industrial context, as this would implicitly exclude works on trust on a conceptual level that are still relevant to the RQs. Rather, specifically irrelevant contexts are excluded while conducting the title, abstract, and KW screening. As trust is broad in its scientific relevance, further limitations have to be included. While the inclusion of terms related to industry is overly restrictive, the inclusion of terms related to HRI is deemed more appropriate. However, the specific terms describing the interaction, such as “team”, “interaction”, “co-existence”, “cooperation”, or “collaboration”, vary. Thus, the string is limited to “human” and “robot”. Several combinations of the terms “trust”, “human”, and “robot” are applied. After comparison of the results, it is decided to apply the following search string in the Scopus database:

TITLE-ABS-KEY ( "Trust" AND ( "human robot" OR "human-robot" ) )

resulting in 569 entries, and the following in the Web of Science database:

ALL=( "Trust" AND ( "human robot" OR "human-robot" ) )

resulting in 425 entries. Both searches are conducted on September 1st, 2020.
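The boolean logic of the applied search string can be mirrored locally, for instance to sanity-check exported records. The sketch below is illustrative only: the record fields ('title', 'abstract', 'keywords') and the example record are hypothetical, and the actual matching is performed by the databases themselves.

```python
def matches_search_string(record: dict) -> bool:
    """Local mirror of TITLE-ABS-KEY("Trust" AND ("human robot" OR "human-robot")).

    `record` is assumed to hold text under the (hypothetical) keys
    'title', 'abstract', and 'keywords'; matching is case-insensitive.
    """
    text = " ".join(record.get(k, "") for k in ("title", "abstract", "keywords")).lower()
    return "trust" in text and ("human robot" in text or "human-robot" in text)

# Fabricated example record, for illustration only:
example = {
    "title": "Measuring Trust in Human-Robot Collaboration",
    "abstract": "We study operator trust during shared assembly tasks.",
    "keywords": "trust; human-robot interaction",
}
```

Note that both spelling variants are checked because a space-separated and a hyphenated form are distinct substrings.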

3.5.2.2.3 Selection Criteria

No Author

Only works with a given author are included. Publications found without a filled author field are conference descriptions; thus, these database entries do not provide additional scientific insight.

Duplicates

As two databases with overlapping journal coverage are used to extract publications, duplicates are removed. Duplicates are identified if the title is the same save for minor differences, while the authors and the abstract are the same as well.
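The stated duplicate criterion (near-identical title, identical authors, identical abstract) could be operationalized as follows. This is a hedged sketch: the field names and the 0.95 title-similarity threshold are assumptions, as the actual deduplication was conducted manually.

```python
from difflib import SequenceMatcher

def is_duplicate(a: dict, b: dict, title_threshold: float = 0.95) -> bool:
    """Flag two database records as duplicates per the criterion in the text:
    near-identical title, identical authors, identical abstract.
    Field names ('title', 'authors', 'abstract') and the threshold are assumptions."""
    title_sim = SequenceMatcher(None, a["title"].lower(), b["title"].lower()).ratio()
    return (
        title_sim >= title_threshold
        and a["authors"] == b["authors"]
        and a["abstract"] == b["abstract"]
    )

# Fabricated records whose titles differ only in hyphenation:
rec1 = {"title": "Trust in Human-Robot Teams", "authors": "A. Author", "abstract": "Same abstract."}
rec2 = {"title": "Trust in human robot teams", "authors": "A. Author", "abstract": "Same abstract."}
```

A fuzzy title comparison tolerates the "minor differences" mentioned above (capitalization, hyphenation) while exact author and abstract matches keep false merges unlikely.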

Year

Given that the starting point in the literature is set to Hancock et al.’s [91] work from 2011, works older than 2011 are excluded. The 400+ citations of their work exhibit the influence this piece of literature has and therefore justify using it as the starting point of the investigation.

Language

Only works in English or German are included, as this author is proficient in both languages.

Keywords

Based on the context of this research, the focus of this literature review is on publications with a relevant context, i.e., specifically industrial contexts, or research without a dedicated context that advances the field in general terms. Thus, elaborations focusing on social robotics, domestic robotics, military robotics, or medical robotics are excluded based on KW search. The aim of KW-based filtering is to exclude publications elaborating on trust in overly unrelated research domains. Validation of which KWs can be used to exclude publications is based on random sampling throughout the testing of KWs with which to limit the number of preliminarily relevant publications. Thus, the KW-based filtering can be deemed appropriate. However, it is still possible that false positives and false negatives occur in the KW-based filtering process; herein, false positives refer to publications incorrectly excluded and false negatives to publications incorrectly included. Hence, a more detailed screening of title, KWs, and abstract, to avoid the inclusion of false negatives, as well as forward and backward search, to avoid the exclusion of false positives, is conducted subsequently.
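The KW-based exclusion described above can be illustrated as a simple filter. The keyword groups below paraphrase a subset of the screening table, and substring matching over a keyword field is an assumption; the actual screening was conducted manually in EndNote.

```python
from typing import Optional

# Exclusion keyword groups (paraphrased, not exhaustive; illustrative only).
EXCLUSION_KEYWORDS = {
    "social robotics": ["social robot", "socially assistive", "humanoid"],
    "education": ["teaching", "education", "school"],
    "medical": ["medicine", "medical", "patient", "rehabilitation", "nurse"],
    "domestic": ["domestic", "household", "home care", "elderly"],
    "military": ["army", "military", "soldier", "rescue"],
}

def exclusion_category(keywords: str) -> Optional[str]:
    """Return the first exclusion category matching a record's keyword string,
    or None if the record survives keyword-based filtering."""
    kw = keywords.lower()
    for category, terms in EXCLUSION_KEYWORDS.items():
        if any(term in kw for term in terms):
            return category
    return None
```

As noted in the text, such a mechanical filter can both miss and over-exclude, which is why the subsequent title, KW, and abstract screening and the forward/backward search remain necessary.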

3.5.2.2.4 Results from Screening

After applying the selection criteria, the remaining publications’ titles, abstracts, and KWs are screened to widen the scope of the selection criteria and exclude publications that do not fit the set context. Furthermore, with regard to the RQs, publications are grouped accordingly. Publications usable to answer different RQs than the ones outlined above are also grouped, but are not included in the literature review later on. The software used for the described analysis is EndNote X9 (https://endnote.com). Results of the conducted analysis and classification are shown in Table 22.

Category – # of categorized Publications

All Publications from Scopus and Web of Science – 994
No Author – 34
Duplicates – 313
Older than 2011 – 37
Not English or German – 2
Remainder for keyword-based Selection – 608

KW social robot(s), social (human(-)robot) interaction(s), socially assistive – 107
KW humanoid – 18
KW teaching, education, school – 11
KW medicine, medical, patient, rehabilitation, (health) (care), disability, nurse, surg(ery/ical), chair – 32
KW assisted living, elderly (care), ag(e)ing, care home, home care, walking aid – 2
KW home or domestic robot(s), domestic, home, household, living, (social) media, daily live(s) – 9
KW army, military, soldier, fight, (search (&/and)) rescue – 18
Remainder for title-, keyword- and abstract-based Screening – 411

Special Issues – 2
Irrelevant Use Case:
  Emotional/Expressive/Trusting Robot – 6
  Military-associated Robot – 8
  Social Robot – 29
  Service Robot – 23
  Domestic Robot – 8
  Health Robot – 14
  Agriculture/Construction Robot – 11
  Autonomous Car – 25
  Gaming – 6
  Without Category – 23
Connection with HRI aspects:
  Connection with Transparency – 21
  Connection with Failure – 14
Irrelevant HRI applications:
  Face, gesture, activity, emotion or intention recognition – 4
  Collision Avoidance, Motion Planning, Handovers – 23
  Control – 7
  Sensing – 6
  Task Allocation – 7
  Without Category – 23
Trust peripheral Aspect in Publication – 39
Predicting/Estimating/Measuring Trust – 20
Metrics – 3
Further Duplicates – 6
Relevant Publications for Literature Review – 83

Future of Trust – 2
Trust General – 25
Theory on Trust – 4
Connection with Communication – 13
Connection with Workload – 2
Connection with Static Role of Human – 3
Connection with Dynamic Role of Human – 10
Connection with SciFi – 17
Connection with Expertise – 1
Further Interesting Publications – 6

Table 22 KW-, Title- and Abstract-based Screening – Trust

In order to answer the above outlined RQs, categories are formed during the title, KW, and abstract screening.
These categories are formed based on the subjective screening by this author. Coding with the help of further authors is out of scope for this dissertation. As most categories are self-explanatory, only the reason for including Further Interesting Publications

is elaborated in detail. A number of publications are identified which are deemed potentially relevant to inform this author across RQs; thus, no unambiguous attribution to a single category is possible. Nonetheless, it is accepted that some of these publications might have to be discarded after the content screening and data extraction if their focus proves inappropriate. In conclusion, this category can be understood as a shortcut for the later forward and backward search, so as to avoid missing false positives from the title, KW, and abstract screening. The 83 papers classified as relevant are then checked for accessibility with this author’s digital library access at KTH Royal Institute of Technology. Two publications cannot be accessed, resulting in 81 papers for further analysis. Thus, the above described data extraction is applied to those publications.

3.5.2.2.5 Data Extraction Results

Before the different RQs are addressed, it has to be noted that, based on the screening of the content of all publications, further exclusions are made in the course of the data extraction. For one, some papers are found to be irrelevant to the RQs despite the abstract, KW, and title screening; this concerns 18 papers. For another, studies or publications reporting results on only two pages, i.e., usually summaries of conference presentations, are not included herein; this concerns another 14 papers. The reason for this is that statistical and contextual analysis of those papers is not possible in sufficient detail. Forward and backward screening of the content-screened publications results in a further five publications additionally undergoing data extraction. Title, KW, and abstract screening is conducted ahead of data extraction in order to determine whether potential publications fall into the relevant categories. Furthermore, in order to only include peer-reviewed publications, the availability of potentially relevant publications is checked in the Scopus and Web of Science databases; if a publication is not available through those, it is not included. Of the five publications, one is found to be only two pages long and is thus not included in the literature review. In conclusion, 53 publications are eventually included in this literature review. The subsequent section presents the results from the data extraction along the lines of the posed RQs.
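The selection arithmetic reported across the screening and data-extraction steps can be verified with a short tally; all counts are taken from Table 22 and the text above.

```python
# Screening funnel, with counts as reported in Table 22 and the surrounding text.
total = 994                                # Scopus (569) + Web of Science (425)
after_criteria = total - 34 - 313 - 37 - 2 # no author, duplicates, pre-2011, language
after_keywords = after_criteria - 197      # sum of the keyword-based exclusions
relevant = 83                              # after title-, KW-, and abstract-based screening
accessible = relevant - 2                  # two publications not accessible via KTH library
after_content = accessible - 18 - 14       # irrelevant content, two-page summaries
included = after_content + (5 - 1)         # forward/backward additions, one two-pager dropped

print(after_criteria, after_keywords, accessible, included)  # prints: 608 411 81 53
```

The tally reproduces the remainders stated in the text (608, 411, 81, and finally 53 included publications), confirming the reported numbers are internally consistent.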

3.5.2.2.6 Review Results

RQ1.1: What constitutes trust? & RQ1.2: How is trust modelled?

Despite Smithson [202] arguing for also considering automatons trusting humans, the focus herein will be on humans trusting the robots they interact with. Thus, when subsequently referring

to trust, trust of humans in robots is meant, which is in line with the findings of the meta-analysis forming the centerpiece of this literature review [203]. However, it is remarked that the research community does not uniformly agree that trust is the adequate concept to describe humans’ relations with robots as artificial agents. In his opinion paper, Fossa [204] argues that trust serves to minimize betrayal. Thus, he argues, speaking of trust in relations between humans and artificial agents, such as robots, requires the artificial agent to be autonomous in the sense of being able to choose to betray the human, which in turn requires the artificial agent to act with purpose. While the author agrees that artificial agents fulfill a purpose and are purpose-built artefacts able to act autonomously within those given purposes, they are not able to set purposes themselves. In reference to the role descriptions above, this would mean the robot has initiative over goal-setting. These requisites, the author argues, are not given for current artificial agents, thus rendering the discussion about trust in human-robot relationships groundless. Rather, the author argues for the concept of reliance to describe the dependence of humans on artificial agents, as the social components of betrayal and purpose-giving are missing in it. Nonetheless, as will become clear throughout the answers to the RQs given above, trust is a viable concept for HRI given its conceptual and definitional construction. Definitions of trust are varied [205], emerging depending on the context in which trust is studied – interpersonal or human-automation relations [205]. Thus, understandings of trust include trust as an attitude, an intention, or a behavior [205].
As the authors state, in human-automation literature, trust is conceptualized as a “multidimensional psychological attitude involving beliefs and expectations about the trustee’s trustworthiness derived from experience and interactions with the trustee in situations involving uncertainty and risk” (p. 137). In the just-cited literature on trust in automation, the focus is more on cognitive aspects, i.e., the evaluation of performance expectations against real performance, rather than affective aspects, i.e., the evaluation of the trustee’s intrinsic motivation to secure the trustor’s interest. Thus, the authors’ suggested definition defines trust as “an attitude which includes the belief that the collaborator will perform as expected, and can, within the limits of the designer’s intentions, be relied on to achieve the design goals” (p. 137). As pointed out by Kirkpatrick et al. [206], the research on trust in automation is to be understood as an antecedent to trust in robotics. They point out that the understanding of trust and the suggested definition influence the possibilities to advance the understanding of trust in HRI; in the case of an overly cognitive-based definition, progress in establishing trust in HRI might be slowed, according to the authors. Empirical examples of this feature trust-calibrating robots which can measure the trust they

elicit in their interacting counterparts by measuring their own performance and using the performance measurement as a direct trust measurement; this includes [207-209]. Kirkpatrick et al. [206] further point out that, even if research does not use an affective-based definition of trust, humans interacting with robots might perceive trust in such an affective-based way. This can be argued for with experiences of humans anthropomorphizing technology. Similarly, Coeckelbergh [210] argues that, in order to understand trust in HRI, it is irrelevant whether robots are agents capable of the above-mentioned affect; rather, it is important to base the question of trust in HRI on robots’ appearance to humans, which is formed by, and is forming, the affect. Also for Coeckelbergh, anthropomorphism is the driving force behind humans attributing affect to rational agents. This is further underscored by Ososky et al. [211], who differentiate between two forms of trust – trust in competency and trust in intention. The former relates to the cognitive aspect mentioned above, the latter to the affective aspect. The authors subsequently propose trust in competency to be more important for trust in automation, whereas trust in intention is more important for trust between humans [211]. Empirical evidence for this is provided by Wu et al. [212]: in a coin entrustment game, humans tended to trust imagined robot opponents more than imagined human opponents over longer periods of interaction, given their perception of robots as reliable and precise, whereas humans are perceived as emotional. However, as robots transition from tools to teammates, both forms of trust might become important in HRI [211]. Thus, herein the above described definition is accepted with the caveat of differentiating between objective and subjective trust. Exactly this difference makes trust difficult to measure. Lewis et al.
[205] discuss the lack of uniformity in measuring trust in empirical studies, which leads to ambiguities in the empirical grounding. Given this nature, self-reporting psychometric instruments such as scales are stated by the authors to be the most direct. Questionnaires, contrarily, are posited to be intrusive and thus cannot provide continuous monitoring of trust development. A detailed investigation of trust measurement, however, is out of scope herein. As for the conceptualization of trust, trust in HRI is again informed by human-automation research [213]. In that regard, trust consists of three layers – dispositional, situational, and interactional trust. Dispositional trust indicates a person’s given disposition to trust a robot. Situational trust refers to a person’s inclination to trust a robot given the direct environment and situational context. Lastly, interactional trust is trust emerging from the current interaction with a robot. Further investigation of the factors influencing those layers, and of the influence of those layers of trust on HRI, is done in the following sections.

Another aspect of the conceptualization of trust is its temporal phases. Those are initial trust development, subsequent trust calibration, and lastly the outcomes influenced by trust, such as reliance, compliance, complacency, and general use [214]. This timeline is indirectly supported by Charalambous et al.’s [89] roadmap for successful HRC implementation. The two focal points in their roadmap are the initial trust calibration for operators through adequate training, followed by constant trust calibration enabled by the empowerment of humans. The aim is to instill the appropriate level of trust in order to avoid misuse, i.e., overreliance and complacency, as well as disuse, i.e., underutilization. Similarly, Ososky et al. [211] advocate building an appropriate level of trust rather than maximizing trust. Further expanding on that idea, Hancock et al. [215] imply that it might sometimes be helpful for the robot to deceive the human. Nonetheless, deception is argued to require careful handling by designers, as it can impact humans’ trust in robots if mishandled. However, to put the expectations of building appropriate trust into context, Xie et al. [216] find that calibrating trust alone does not guarantee successful HRI. Rather, humans build mental models beyond trust, including, e.g., robot intention, according to which they decide to delegate to and interact with robots. Nonetheless, the importance of trust in HRI studies is underscored by Xie et al. Adding to the above list of reliance, compliance, complacency, and general use, Smithson [202], building on previous publications, suggests uncertainty as another outcome. It is noted that a trust relationship is commonly argued to reduce uncertainty, while uncertainty is at the same time argued to be a natural part of trust relationships.
As a solution to this ambiguity, the author suggests that the trustor decreases its uncertainty about the outcome of an interaction but, in turn, increases its uncertainty about the way the outcome is attained. Teo et al. [217] examine theory relevant for HRI and find Uncertainty Reduction theory relevant for explaining trust in robots. While it is left open whether uncertainty remains as posited by Smithson, the theory posits that humans are inclined to decrease uncertainty through transactions. Thus, building on Smithson's argument of an uncertainty trade-off, the theory would posit that humans are inclined to trust robots if the uncertainty trade-off is positive, meaning that the uncertainty of outcome is larger than the uncertainty of method. In conclusion of this conceptualization, it becomes clear that trust in HRI is not something pre-defined by the design of HRI; rather, it is to be seen as an aspect of the interaction which emerges throughout the interaction [217]. Before moving on to modelling trust, in response to the criticism of studying trust in human-robot interaction by Fossa [204], the difference between objective and subjective trust outlined by Kirkpatrick et al. [206] as well as Hancock et al.'s [215] suggestion of deception as

a helpful tool to calibrate trust exhibit the relevance of social dimensions such as anthropomorphism and autonomy in HRI. Therefore, the introduced concept of trust is regarded as adequate for HRI studies. In an attempt to model trust in HRI, Sanders et al. [218] examine existing literature iteratively and interview subject matter experts. This results in the trust model shown in Figure 21.

Figure 21 Trust Model [218]

The model is subsequently successfully validated by the meta-analysis of Hancock et al. [91], which also serves as the starting point for this literature review. To conceptually locate this meta-analysis, Ososky et al. [211] attribute to it the understanding of trust in competency. The meta-analysis aims to synthesize research on different antecedents as part of the three major categories influencing human trust in robots: human, environmental, and robot characteristics [91]. As little empirical evidence is available for several antecedents, the same meta-analysis is conducted a second time almost ten years later [203]. Figure 22 shows the updated antecedents.

Human Factors
- Ability Based: Attentional Capacity/Engagement, Expertise, Competency, Operator Workload, Prior experiences, Situation Awareness
- Characteristic Based: Gender, Age, Race, Culture, Education, Expectancy, Personality traits, Attitudes towards robots, Comfort with robots, Propensity to trust, Self-confidence, Satisfaction, Self-efficacy

Robot Factors
- Attribute Based: Proximity/Co-location, Robot Personality, Adaptability, Robot type, Appearance, Anthropomorphism
- Performance Based: Behavior, Dependability, Reliability of robots, Predictability, Level of Automation, Communication method, Failure rates, False alarms, Transparency

Contextual Factors
- Team Collaboration: Tenure, Team conflict, Team performance, In-group membership, Interaction frequency, Shared mental models, Proximity/Co-location, Role interdependence, Team composition, Team cohesion
- Tasking: Task type, Task complexity, Multi-tasking requirements, Physical environment

Figure 22 Human-Robot Trust Antecedents (based on [203])

The original meta-analysis finds performance-related factors of the robot to have the largest impact on trust in robots, with robot attributes showing a large effect as well [91]. These results are also consistent with what is to be expected from the above-mentioned Uncertainty Reduction theory [217]. While a number of antecedents are only suggested as influential in the original study, the second one, currently in press, can analyze those factors' effects more comprehensively [203], as the next three paragraphs show. Human-related factors overall relate positively and significantly to trust in robots. Ability-based factors are not a significant antecedent of trust, whereas characteristic-based aspects show significant predictability for trust. However, while prior experiences are positively correlated with trust, they are not found to significantly predict it; expertise, as amount of training, both significantly and positively predicts trust. As to characteristic-based aspects, culture, expectancy, comfort with robots, and personality traits such as complacency, the tendency to anthropomorphize, and a person's extroversion are significant predictors of trust in robots. Expanding on the finding of culture being a significant predictor of trust, the study finds that respondents from US American (United States) as well as Asian cultures have higher trust in robots than European respondents. Lastly, age is not found to significantly influence trust. Nonetheless, the study exhibits a negative correlation between age and trust, meaning older adults tend to trust robots less.
Similarly to human-related aspects, robot-related factors also significantly impact human trust in robots. Just as in the previous meta-analysis, both performance-based as well as attribute-based antecedents are found to significantly influence trust in robots. However, dependability as a performance-related aspect is discovered to relate negatively and significantly to trust, while a robot's reliability positively and significantly impacts trust. Furthermore, communication method, understood here as the way the robot communicates with the human, significantly influences trust. As for attribute-based factors, only robot personality, which includes aspects of a robot's positive facial expression, empathy, likeability, and sociability, significantly and positively predicts trust. However, it is noted that, while not significant, higher anthropomorphism correlates with higher levels of trust. As the last of the three main antecedents, environmental aspects overall do not significantly predict trust; neither do tasking aspects. However, team collaboration aspects overall are found to significantly predict trust. Within them, only in-group membership, which describes the robot and human being associated as a group or team, is revealed to be a significant predictor of trust.

In conclusion, the updated meta-analysis again shows that robot-related factors have a stronger impact on human trust in robots than human and environmental aspects. However, the picture of trust-influencing factors is now much finer than before. As stated above, some of these aspects will be expanded on later in order to establish a connection with themes found in the scenario analysis. Before moving on to answer the remaining RQs, however, insights on the conceptualization of trust in future HRI found in the reviewed literature are to be addressed here as well. First of all, research acknowledges that trust will remain important in interaction studies involving humans and automata, to which robots can be counted [219, 220]. Matthews et al. [220] argue for better understanding individual differences in order to ensure trust in future HRI. Currently, agreeableness and extraversion are found to be positively related to trust in machines, whereas conscientiousness is negatively related to trust. Similarly, research on robot personalities is motivated for future HRI. Lastly, (dis-)similarity between the personality of the robot and the interacting human is another area of investigation for future HRI. Particularly around the often-stated transition from tool to team member, a number of insights on the conceptualization of trust in the future can be presented. Depending on whether the robot is seen as an advanced tool or a team member, differing mental models are used as reference for trust in those respective robots [221]. Alongside this conceptual extension, the authors also suggest two further antecedents which influence the use of either mental model: AI comprehensibility and perceived social agency. Kessler et al. [222] find that trust in HRI will become increasingly similar to interpersonal trust.
As indicated alongside the definition of trust beforehand, this would imply a shift of focus away from trust in competency and towards trust in intention. The authors, however, debate two ways in which this could evolve: either robots are developed to replicate human abilities, or robots are developed with a greater focus on autonomy different from humans'. While on the former trajectory an evolution towards interpersonal trust in HRI seems inevitable, trust in HRI might develop towards a different relationship on the latter trajectory. Similarly, while current robotics technology is ascribed the ability to be perceived and act like good colleagues, a prerequisite for which is being trustworthy according to the authors, the open question looms whether we should actually create robots that are good colleagues [223]. Lastly, along the trajectory of robots morphing from tool to teammate, Alaieri & Vellino [224] argue for a need for ethical principles embedded in robots. It is further argued that only with those principles embedded will robots be trusted by humans, as these principles make their decisions explainable and justifiable. Similarly, Razin & Feigh [225] argue for norms such as fairness to be incorporated

in robots in order to elicit trust in them. Consequently, for the future of trust in HRI, it will be important to consider whether robots are being developed according to a human blueprint and how to embed ethics or norms into their algorithms.
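Before turning to the remaining RQs, the uncertainty trade-off posited above, combining Smithson's argument with Uncertainty Reduction theory, can be sketched as a simple decision rule. This is an illustrative formalization only; the function and variable names are assumptions of this sketch and are not taken from the cited sources:

```python
def inclined_to_trust(outcome_uncertainty_reduction: float,
                      method_uncertainty_increase: float) -> bool:
    """Illustrative decision rule for the uncertainty trade-off: by
    trusting, the trustor gains certainty about *whether* the outcome is
    attained but gives up certainty about *how* it is attained.  Per
    Uncertainty Reduction theory, the human is inclined to trust the
    robot when the trade-off is positive, i.e., when the outcome
    uncertainty removed exceeds the method uncertainty added."""
    return outcome_uncertainty_reduction > method_uncertainty_increase

# Gaining much outcome certainty at little method cost favors trust:
print(inclined_to_trust(0.6, 0.2))  # True
# Conversely, a small gain at a large cost does not:
print(inclined_to_trust(0.1, 0.4))  # False
```

The binary rule is, of course, a simplification: the studies reviewed above treat trust as emerging gradually throughout the interaction rather than as a one-off decision.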

RQ2: How is trust affected by workload in HRI?

While both meta-analyses include operator workload as an antecedent, no significant interaction with trust is found [91, 203]. The present literature review finds one study, conducted in Germany and of sufficient length to be included, that examines both trust and workload [226]. In it, cognitive load is investigated alongside trust for an easy assembly task on a computer. The robot is introduced as an assistant to complete the task. Results of the experiment show that participants trust a proactive assistant robot more than robots that only assist if explicitly asked or that intervene completely autonomously, while at the same time cognitive load is not significantly increased. While this does not exhibit any direct connection between trust and workload, it implies that neither of the two has to suffer at the expense of the other. Thus, given an adequate interaction design, a negative correlation can be avoided.

RQ3: How is trust affected by age in HRI?

As indicated in the meta-analysis, younger people tend to trust robots more than older workers [203]. In the pairwise comparison of the meta-analysis, age is found to be a significant predictor of trust. Additionally, a study from the United Kingdom (UK) is found in which older and younger adults are asked about several aspects, including trust and perceived capability, of a robot modelled either older or younger in appearance in either a cognitive or a physical task [227]. While the main goal of the study is to examine whether humans project aging stereotypes onto robots, the study also contains some insights into trust in robots based on age. Whereas older adults exhibit decreased trust in the less reliable robot when it performs the physical task, they trust the reliable robot more in the cognitive task. Contrarily, younger adults' trust does not vary with changing reliability and task domain. Thus, trust changes more readily for older adults. Tying back to the earlier elaborations on trust calibration to elicit appropriate trust levels, this can have both positive and negative consequences. Trust calibration might not have to be externally initiated for older adults. However, in case the amplitude of trust calibration is too strong, externally initiated trust calibration might still be necessary in order to avoid dis- or misuse.
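The notion of trust calibration discussed here can be illustrated with a minimal sketch: comparing a person's subjective trust with the robot's actual reliability flags overtrust, which risks misuse, and undertrust, which risks disuse. The 0-1 scales, the tolerance band, and all names are illustrative assumptions of this sketch, not taken from the cited studies:

```python
def calibration_status(subjective_trust: float,
                       robot_reliability: float,
                       tolerance: float = 0.15) -> str:
    """Compare subjective trust with actual robot reliability (both on an
    assumed 0-1 scale).  Overtrust risks misuse (complacency), undertrust
    risks disuse; within the tolerance band, trust counts as calibrated.
    The tolerance value is an arbitrary illustrative choice."""
    gap = subjective_trust - robot_reliability
    if gap > tolerance:
        return "overtrust: risk of misuse"
    if gap < -tolerance:
        return "undertrust: risk of disuse"
    return "calibrated"

print(calibration_status(0.9, 0.6))  # overtrust: risk of misuse
print(calibration_status(0.3, 0.8))  # undertrust: risk of disuse
print(calibration_status(0.7, 0.7))  # calibrated
```

In these terms, the UK study's finding means that for older adults the gap closes (and may overshoot) on its own as reliability changes, whereas younger adults' gap stays fixed and may require external calibration, e.g., training.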

RQ4: How is trust affected by attitudes towards robots, appearance and anthropomorphism in HRI?

As Table 21 above states, attitudes towards robots, appearance, and anthropomorphism are used to associate research on trust with themes identified in SciFi. That SciFi can in general influence a human's trust in robots is also posited by Hancock et al. [215]. The SciFi themes revealed both through the interviews and the literature search are, in this author's perception, most closely related to the three mentioned antecedents of trust. The meta-analysis finds a significant interaction between anthropomorphism and trust in the pairwise comparison, in the sense that higher anthropomorphism indicates higher levels of trust [203]. Further, the meta-analysis finds interactions between a human's attitude towards robots and trust as well as between appearance and trust; those interactions, however, are not found to be significant. Interestingly, appearance is not found as a trust antecedent in the 2011 meta-analysis, whereas new studies from after the 2011 analysis reveal appearance to be an antecedent. In order to differentiate appearance and anthropomorphism in the further studies, the delineation from [203] is used. Appearance refers to the physical form and look, whereas anthropomorphism, in line with the description used in the scoping literature review, is the ascription of human outer and inner aspects to an animal or system. Several additional studies are found, the results of which are reported below. Investigating the interaction between robot forms from different domains (from therapy over industrial to military) and different trust predictors with students in the USA, Schaefer et al. [228] find perceived intelligence of the form to be the most significant predictor of trust. For robots from the industrial domain, perceived intelligence is the only significant predictor of trust, accounting for more than a third of the variance.
For robots in the service domain, (in ranked order from most to least significant) perceived intelligence, robot classification, and social influence are the three significant trust predictors, together accounting for 43 % of the variance. Lastly, in order to account for the possible shift of robots from tool to teammate also in the industrial realm, the results for robots from the social domain are reported. Here, (in ranked order from most to least significant) perceived intelligence, robot classification, and neuroticism predict trust significantly, together accounting for 43.3 % of the variance. The study does not provide any information as to which appearance features might elicit perceived intelligence, but for all domains mentioned here it accounts for more than a third of the variance, indicating the importance of perceived intelligence in the design of robots. In another study with students in the USA, the appearance aspects material, texture, and color are investigated for their association with perceived internal characteristics of robots [229].

Material is found to be a significant predictor of approachability, competence, professionalism, practicality, industrialism, and robustness. The analyzed materials are metal, wood, and plastic; metal is perceived as the most approachable, competent, industrial, and practical of the three. Color is found to be a significant predictor of approachability, playfulness, aggression, professionalism, gender, and industrialism. The analyzed colors are blue, red, and grey; blue is perceived as the most approachable, passive, and feminine, whereas grey is perceived as the most serious, masculine, industrial, and professional. Lastly, texture is found to be a significant predictor of honesty, competence, gender, professionalism, practicality, industrialism, and robustness. Rough, slippery, and smooth are the investigated textures; smooth is perceived as the most honest, competent, professional, practical, and robust, while slippery is perceived as the most masculine. This study is posited as a first step towards investigating the effect of design features on trust, without measuring the trust elicited by the investigated design manipulations. While perceived intelligence is not directly questioned in this study, perceived competence is assumed to be the closest to it. Thus, with reference to the previously mentioned study, a smooth metal robot is perceived as most competent compared to the other investigated forms, possibly leading to higher perceived intelligence and thereby instigating higher trust than other appearance forms. However, this deduction would need to be empirically validated. In line with the scoping literature review herein, a differentiation is made between anthropomorphism of the bodily shape of the robot and anthropomorphism of its behavior. The first study mentioned on this topic, again conducted with students in the USA, investigates the interaction between reliability and trust in HRI [230]. It is found that functional vs.
anthropomorphic bodily design interacts with task complexity to influence human trust ratings. For the task at hand, the authors suggest that the anthropomorphic design features of the alternative robots might have been too excessive, leading to reduced trust compared to a more functionally designed robot. Thus, this study highlights the interdependence of anthropomorphism with other trust antecedents in eventually eliciting trust in the human. In another study with students in the USA, a functionally designed and an anthropomorphic robot are used in a cognitive task together with a human [231]. While not measuring the competency component of trust, the authors focus on security, respect, and unease to measure trust. It is found that participants rate both robots below neutral in terms of security and respect, with no significant difference between the two. However, participants feel significantly greater unease about the functionally designed robot than the anthropomorphic one. Thus, this study provides a finer level of understanding of which aspects of trust are affected by bodily anthropomorphic design, namely the sense of ease rather than security and respect. Again using a cognitive task,

Zanatto et al. [232] investigate whether exposure to a more or less anthropomorphic robot in an initial interaction can influence the perception of trust in a less or more anthropomorphic robot in a subsequent interaction, with presumably UK residents. In line with previous findings, the more anthropomorphic robot is rated as significantly more trustworthy than the less anthropomorphic robot. However, interacting with a more anthropomorphic robot first does not lead to significantly higher trust ratings for a less anthropomorphic robot in the subsequent interaction; significant spill-over effects are only measured for likeability, anthropomorphism, and animacy. Therefore, while familiarization with an anthropomorphic robot might positively influence aspects of humans' perception of other, less anthropomorphic robots, the overall influence is not significant enough to affect trust in other robots. This has obvious implications for training for HRI. Having investigated bodily anthropomorphizing, behavioral anthropomorphizing is now examined. As the design and behavior of robots are anthropomorphized, the perceived gender of the anthropomorphic robot might become an additional relevant design consideration [233]. In this two-part study with US residents, the robot is gendered by having it state a male, a female, or no name as well as by changing the pitch of the robot's voice. The authors do not find a significant effect of the perceived robot gender on trust and perceived competency for different occupations. Neither do the authors find a significant effect on trust ratings between the robot having the same or a different gender from the human participant. Thus, as for anthropomorphizing robots, their perceived gender does not significantly impact humans' trust in them. Lastly on anthropomorphism, two studies investigating both bodily and behavioral anthropomorphizing are presented.
You & Robert [234] investigate, with US residents, the influence of surface-level and deeper-level similarity of robots with their human counterparts on trust. Surface-level similarity refers to more visible similarities such as demographic aspects, whereas deeper-level similarity refers to similarities in, e.g., values, attitudes, beliefs, and/or personality. The authors find that, similar to previous studies, the task type moderates trust. Surface-level similarity is correlated with higher trust in low-risk tasks, whereas deeper-level similarity is correlated with increased trust in both low- and high-risk task contexts. Similarly, Natarajan & Gombolay [235] investigate the influence of bodily anthropomorphism and interactional behavior on trust with students in the USA. They find that, rather than the mere form factor of the robot, the perceived anthropomorphism influences humans' trust in the robot. Alongside this, the study reveals that the robot's interactional behavior (in this study verbal) strongly influences trust. Thus, the two studies together indicate

that behavioral anthropomorphism has greater relevance for eliciting trust in humans than bodily anthropomorphism. As for attitudes towards robots, two indicative studies are found. Schniter et al. [236] investigate emotions triggered in a trust game with, presumably, US American students. Ahead of the game, participants are not found to report significantly different trust ratings between a human and a robot co-player. Human participants show greater positive emotional responses in interaction with humans while exhibiting less negative emotional responses when interacting with robots. While trust is not measured after the experiment, this study indicates that robots provide more stable interactions in terms of emotional responses. Moreover, it shows that the inclination to blame the robot is less poignant than the inclination to blame fellow humans. In another study, by Złotowski et al. [237], US residents are shown videos of variously formed robots conducting a variety of tasks. The experimental conditions are that, one, the participants are told that the depicted robots are autonomous and can freely choose to accept or disregard human commands or, two, that the robots are not autonomous and will always follow human commands. The study does not directly measure trust; however, various attitudes are gathered. Participants are significantly more likely to perceive autonomous robots as an identity threat and have a significantly more negative attitude towards autonomous robots. However, participants' willingness to interact with an autonomous robot is not significantly lower. Nonetheless, the authors posit that people might be hesitant to trust autonomous robots. While these studies do not establish a link between trust and attitudes towards robots, they exhibit humans' inclination to form attitudes towards robots: humans are milder towards robots in a cooperative setting, but robots' autonomy triggers negative attitudes.

RQ5: How is trust affected by expertise and prior experience in HRI?

As (re-)skilling offers are understood quite broadly, both expertise and prior experience are related to them. Expertise is understood as the knowledge about a task or domain, whereas prior experience refers to the amount of previous interaction [203]. The meta-analysis shows that expertise, in the sense of amount of training, is positively and significantly correlated with trust. Prior experience is also found to positively and significantly predict trust through pairwise comparison. Similarly to appearance in the section above, expertise and prior experience are only found as antecedents of trust in the most recent meta-analysis. Ahead of this finding, Hancock et al. [215] argue for designers giving humans interacting with robots a greater understanding of a robot's feasible contributions, thus increasing the human's knowledge about the robotic partner. With respect to likelier future interactions between robots and non-professionals, Janssen et al. [219] stress the importance of

expertise and experience in preventing inappropriate use, which is strongly associated with uncalibrated trust. The authors stop short of recommending more training for non-experts, though. Nonetheless, the importance of appropriate training and prior experiences in eliciting appropriate trust is exhibited by these sources. In their study, Sanders et al. [238] find that prior experience increases trust across task domains of the robot, significantly so for assembly, nurse, and detonation robots. The authors further posit that their results indicate that humans with more prior experience might have higher trust baseline ratings compared to less experienced humans. Tying back to the previous RQ, it is also found that prior experience changes humans' attitudes towards robots. Participants with less experience had significantly more negative attitudes towards robots than those with more prior experience. Given that attitudes towards robots also have a slightly positive correlation with trust [203], an increase in prior experience might positively influence trust in robots through multiple pathways. In another study, Volante et al. [230] also find empirical evidence for prior experience with robots increasing trust ratings. Thus, it seems fair to say that training and exposure to robots are crucial for robots to elicit trust in humans.

RQ6: How is trust affected by the level of automation?

The meta-analysis finds no significant interaction between the level of automation of a robot and the trust of interacting humans [203]. Nonetheless, the pairwise comparison conducted in the analysis suggests that lower levels of automation increase trust. No studies investigating this relation are found in this literature review. However, in their encompassing publication, Lewis et al. [205] suggest that higher levels of automation reduce trust as the system potentially becomes less transparent. This suggests that, similar to workload, trust does not necessarily have to decrease as the level of automation increases, as long as transparency is maintained.

RQ7: How does trust relate to communication in HRI?

As this RQ links to the other theoretic literature review herein, the aim is to investigate how different interaction modalities elicit human trust in robots. Communication modalities are not covered by the meta-analysis; however, a closely related concept is covered: transparency [203]. Transparency refers to the capability to provide information. Thus, transparency in this sense refers to the provision of information, whereas interaction modalities refer to the way information is provided. Transparency is newly added to the list of antecedents in the most recent meta-analysis and is not found to be significant. Nonetheless, as Ososky et al. [239] argue, transparency plays an important part in trust calibration, which is again important

for appropriate use of robots, as previously described. The authors posit a shift in transparency requirements as robots transition to greater autonomy. While automated systems commonly use invisible interfaces focused on creating transparency of the system's environment rather than of the system itself, more autonomous robots will shift the focus to creating transparency of the robot's own system rather than the environment. The authors call this the shift from invisibility towards understandability. This transition also affects the way, i.e., the modalities, of interaction; different modalities might be more suitable for different messages. Interestingly, in the meta-analysis transparency is grouped as a robot-related trust factor [203]; herein, however, both directions of providing information are considered. Ososky et al.'s argument is empirically supported by Nikolaidis et al. [240]. The authors investigate several communication conditions through a collaborative table-carrying task in the USA. In order to achieve the optimal task performance, the authors test several communication strategies of the robot: the robot, one, moves towards the optimal goal without further communication, or, two, verbally commands the human to move towards the optimal goal, or, three, verbally informs the human which is the optimal goal, with an explanation. While the robot always achieves the optimal goal in the first condition, in the other two the robot adapts to the strategy of the human in case the human refuses to accept the optimal strategy suggested by the robot. Humans are found to trust the explanatory robot more than both the robot not communicating verbally and the one commanding the human, while also being more likely to accept the robot's suggestion of the optimal way to turn the table than in the other conditions.
This clearly suggests that understanding the system facilitates adaptation to the robot as an indirect trust measure, and to this end the experiment suggests an additional communication modality to convey understandability besides implicit action. Continuing the research initiated by Ososky et al., Sanders et al. [241] compare the trust elicited by different robot feedback modalities. In the experiment, students from a US American university are teamed up with a remote robot and are provided with either graphical, auditory, or textual information about the robot in either constant, contextual, or minimal time intervals. The graphic feedback elicits the most trust, followed by the auditory and textual modalities. Furthermore, the constant condition receives the highest trust rating. In two further experiments, a robot's eye-gaze and its influence on trust are investigated [242, 243]. The first uses a cognitive task in which the robot tries to help the human participants, presumably Australian students [242]. Humans' trust is found to be moderated by task difficulty: gaze towards the human increases trust in difficult tasks and decreases it in easy tasks, whereas other lifelike movements of the robot do not affect trust. However, gaze has a converse effect on humans'

performance, with gaze improving performance on easier tasks and weakening it for harder tasks. In the second study, with the same setting and a similar set of participants, the authors modulate different forms of gaze in order to find whether results change with less confrontational gaze styles [243]. However, no significant differences between gaze conditions are found, while task complexity again significantly influences humans' trust in robots, just as in the previous study. Interestingly, women are found to trust the robot significantly less than men as the robot gazes at them. The authors conclude that gaze might not be appropriate for the tested task type. Lastly for robot-to-human communication, Bergman et al. [244] argue for also considering non-verbal communication in order to increase humans' trust in collaborating robots. They particularly suggest investigating animalistic behavior of robots as a way to increase trust. The authors base their argument on the previously introduced Uncanny Valley theory: modeling robot behavior after humans' is cautioned against so as to avoid negative human responses. Rather, animation is suggested as a way to imbue robots' behavior with greater understandability. However, the authors acknowledge that not all interaction should be based on these non-verbal, implicit behaviors of the robot; rather, they are argued to be a promising additional interaction modality. Two studies are found relating to human-to-robot communication. The first study combines insights for both directions of communication. The experiment, conducted with presumably French participants, investigates trust in relation to three bidirectional communication conditions in a remote collaboration task [245]. The three conditions are, one, no verbal and no textual communication, two, verbal and textual communication related to the task, and, three, verbal and textual communication unrelated to the task.
Participants show significantly increased trust when they are able to communicate with the robot in a task-related manner, compared to both other conditions. While in the no-communication condition the human can see what the robot sees, he or she has no possibility to try to understand the system. Thus, this is in line with the previous theoretical elaborations and again suggests the applicability of verbal communication to facilitate system understanding. The second study examines the possibility to interact emotionally with robots in work environments [246]. US American residents participate in this experiment, in which a robot or a human interacts with another human in either an emotionally intelligent manner or not. The modalities used to present the interaction between the two agents to participants are either video, audio, images, or text. While trust is not significantly influenced by the agent type (human vs. robot) or the presented modality, robots with higher emotional intelligence elicit higher trust than less emotionally intelligent robots. As the authors conclude, this suggests that trusting working

relations between humans and robots might be facilitated by the possibility to interact emotionally with the robot. The examined research reveals several possibilities for future research. It is shown that both the task type and the task complexity moderate the human's inclination to trust robots as well as the interaction modalities eliciting trust. Thus, future research should try to capture these interactions and shed further light on which settings require more emotionally focused interaction modalities and which require more cognitively focused modalities. In any case, a one-size-fits-all approach to modality design seems counterproductive if trust is to be facilitated. Furthermore, as was also shown, trust might stand in competition with other relevant interaction parameters, such as performance. Long-term studies are suggested in order to investigate whether this short-term trade-off changes over time, so as to inform designers about which should be the primary goal in interaction design. Lastly, as will be shown later, research on interaction modalities is providing increasing possibilities for both the robot and the human to communicate with their respective counterpart. Therefore, a greater understanding of the impact these modalities have on trust is needed to inform designers. It is posited herein that in human-to-robot communication possibly other meta-factors are important for eliciting trust than in robot-to-human communication. It is suggested that a stronger focus on performance-related aspects is relevant for human-to-robot communication, as the human wants to understand the robot's task status and progress, whereas robot-to-human communication might focus more on perceived intelligence to elicit trust, so as to demonstrate to the human that the robot understands its environment and its human interactor. However, these suggestions might depend on the role distribution between human and robot.

RQ8: How does trust relate to control and the role of the human in HRI?

The aim of this RQ is to answer how trust is affected by the human being given different roles in relation to the robot. This is particularly interesting as robots are posited to transition from tool to teammate, as is captured several times throughout this dissertation. Therefore, aspects of control over and within the interaction might affect trust and thereby the human's inclination to interact with the robot. The meta-analysis by Hancock et al. [203] does not directly find these aspects to influence trust. However, the antecedents in-group membership and interdependence can be perceived as associated with human and robot roles. In-group membership is found to be a significant predictor of trust, whereas for interdependence no significant interaction is found. In a publication without empirical validation, Hancock et al. [215] posit that adaptive technology might facilitate trust building. In line with the elaborations made on mixed-initiative teams in the scoping literature review, control and tasks

can be dynamically allocated in human-robot teams. Thus, this RQ aims to investigate the influence of both role attribution and static and dynamic control allocation on trust. As for the influence of static control, two studies are found to be relevant for this literature review. In a preliminary experiment with presumably UK residents, Beton et al. [247] investigate a robot's perceived intelligence in a collaborative manipulation task. As described above and stated by the authors, perceived intelligence is a strong predictor of trust, making the insights of the study relevant in this context. The three conditions tested are: one, the robot leading the human while disregarding the human's preferred strategy; two, the robot following the human's strategy fully; or, three, the robot not collaborating and acting against the human's strategy. Thus, the three conditions resemble the following roles devised in the literature review: in the first condition, the human takes only the role of collaborator, whereas the robot is both a collaborator and the team-leader. In the second condition, the human is both collaborator and team-leader, and, lastly, in the third condition, the human is a collaborator but no team-leader, thus lacking goal-setting capabilities. Overall, the robot is perceived as most intelligent in the follower role, when the human is both collaborator and team-leader. This indicates that humans prefer to stay in control, and that leading rather than following a robot's strategy elicits more trust in the robot. The second study, conducted with German native speakers, investigates whether a human's trust in a robot increases if he or she taught and programmed the robot personally [248]. However, knowing that one taught the robot is not found to significantly increase the human's trust in the robot.
Thus, this study provides evidence for focusing on the present interaction between a human and a robot to elicit trust rather than on individual programming. As for dynamic control considerations, two experiments based on the previously mentioned table-carrying task are of relevance. In the first study, conducted with US residents, Nikolaidis et al. [54] investigate control mechanisms with regard to their influence on trust. The first condition is the robot not adjusting to the human's strategy. The second condition uses a mutual adaptation approach in which the robot tries to suggest the optimal strategy by initiating it, however retracting to the human's strategy if the human does not adapt to the robot. Lastly, the human and the robot conduct training together in which they switch roles, thus allowing the robot to learn the human's strategic preference; then the experimental runs are executed. While the first condition yields the best performance, the training condition is trusted the most. The mutual adaptation condition is found to balance the trade-off, increasing both trust and performance of the collaborating human-robot team. The results are validated by another experiment by Nikolaidis et al. [249] in a table-clearing task. Mutual adaptation, the

second condition, is found to significantly increase trust compared to the condition of no adaptation, but not to significantly decrease trust compared to the training, i.e., one-way adaptation, condition. However, there is a significant difference in performance between mutual and no adaptation in this experiment. In the second experiment by Nikolaidis et al. [250] featuring the table-carrying task, US residents first conduct the collaborative table-carrying task and then a hallway-crossing task. The experiment suggests a mediating role of social norms in mutual adaptation. While trust is not directly measured in this extension of the original experiment, it gives an indication that the social norms of relevant task settings should be investigated in order to understand the appropriateness of assigning control to the human and the robot in their interaction with each other. In another study, set in a remote interaction scenario, Saeidi et al. [251] investigate the effect of mixed-initiative control assignments on trust. The participants, presumably residents of the USA, are subjected to various adaptation mechanisms: one, the human can manually interfere in the control assignment; two, the human cannot interfere while an algorithm tries to find the optimal control assignment; and lastly, the control assignment is computed by a trust-based algorithm. The trust-based condition is found to elicit the greatest trust in humans while featuring the lowest task load and the lowest error. The authors suggest the naturalness of the trust-based allocation algorithm to be the reason for these results. Thus, the work indicates that trust can be highest even if humans are not in control within a mixed-initiative system. In a last experiment included herein, Sadrfaridpour et al. [252] compare a manual, a collaborative, and an autonomous mode with regard to trust in an assembly task through both simulation and experimentation with a single participant.
Similar to the other introduced experiments, the collaborative mode is found to increase trust and moderate workload. In conclusion on the dynamic role of the human in a collaborative setting with robots, collaboration and mixed-initiative systems seem to provide possibilities in applications where trust in the robot might stand in contrast with other desirable design goals, such as workload reduction or performance improvement. Humans seem to be willing to trust a robot collaborator; however, more research is needed in order to understand, as stated, the social norms mediating trust. Similarly, as seen in one experiment as well, the design space for mixed-initiative systems is manifold. Thus, further research is needed in order to understand which mixed-initiative system configurations, in terms of the extent to which initiative is mixed (goal setting, planning, or just execution), are best suited for certain trade-offs. In total, results from both static and dynamic control attribution remain inconclusive on whether humans in HRI trust robots more if they remain in control during

collaborative, mixed-initiative settings or if control can be shared. Thus, this represents a relevant area for future research.

3.5.2.2.7 Limitations

This literature review is constrained by several limitations. The meta-analysis provides a list of trust antecedents. Thus, in order to isolate the interaction between individual manipulations and trust, data on all other antecedents would be needed from the participants in order to fully control the interaction. Alternatively, the sample size needs to be large enough to assume a controlled environment. However, this is not the case in many of the studies included herein. Thus, caution is warranted against overly generalizing results, and certainly more research is needed to validate suggestions and results. Furthermore, as indicated in the answer to RQ1, the measurement of trust varies across studies. It is beyond the scope of this review to investigate the trust measurement in each study with regard to the extent of cognitive and affective aspects of trust captured. Closely related to this, not every study states its employed definition of trust, which is another aspect cautioning against generalization. Lastly, it is attempted to draw a transparent line throughout the selection process between aspects of social robotics relevant for the HRI contexts under investigation herein and pure social robotics research irrelevant to this context. Methodological measures such as forward and backward search as well as categorization and data extraction are employed in order to objectivize the selection process. Nonetheless, this author conducts this process individually, and the inclusion of additional outside observers would increase objectivity further.

3.5.2.3 Theoretical Literature Review – Multimodal Communication

3.5.2.3.1 Research Questions

As mentioned in the scoping literature review, the work by Perzanowski et al. [67] is identified as the inception of multimodal communication attempts in the context of HRI. Given that no comprehensive review on multimodal communication can be identified to date, this initial empirical investigation by Perzanowski et al. is used as the starting point for this literature review. Being cited by more than 250 publications, it can further be seen as the devising piece of literature for this field of research. As no direct relation to the scenarios described above is perceivable yet, the literature review aims at answering RQs that elucidate the connection of communication to trust and the other aspects of HRI under investigation herein. Again, first off stands a general question on multimodal communication:

RQ1: What is multimodal communication's significance for HRI?

Subsequently, further RQs aim to shed light on multimodal communication's relation to the three other aspects of HRI:

RQ2: How does multimodal communication relate to trust?

RQ3: How does multimodal communication relate to control and the role of the human in HRI?

3.5.2.3.2 Search String

Similar to the search string for trust, the search string for multimodal communication includes both variations of addressing HRI (i.e., "human robot" and "human-robot") without mentioning specific forms of the human-robot relationship. This is due to the same reasons as with the search string for trust. Furthermore, to pay heed to the main research area of this literature review, different variations of multimodal are included, namely "multimodal", "multi-modal", and "multi modal". All three variations are found in the literature and thus are included in the search string interchangeably. However, trials reveal that searching the abstract, in addition to the KWs and the title, for "multimodal" and the other two variations does not surface publications mainly concerned with multimodal communication; multimodality is rather a side topic in those publications. Publications that only mention multimodal in the abstract are therefore not included. Lastly, as elaborated in the scoping literature review, several terms are used to refer to communication in the literature. The terms used are "communication", "interface", "perception", "intention", or one of the variants "interaction modality", "interaction mode", or "interaction modes". In contrast to multimodal, these terms referring to communication are searched in the title, KWs, and abstract.
In consequence, this is the search string used on Scopus:

( TITLE ( "multimodal" OR "multi-modal" OR "multi modal" ) OR KEY ( "multimodal" OR "multi-modal" OR "multi modal" ) ) AND TITLE-ABS-KEY ( "communication" OR "interface" OR "perception" OR "intention" OR ( interact* AND ( modalit* OR mode OR modes ) ) ) AND TITLE-ABS-KEY ( "human robot" OR "human-robot" )

resulting in 598 entries, and the following on Web of Science:

( TI=( "multimodal" OR "multi-modal" OR "multi modal" ) OR KP=( "multimodal" OR "multi-modal" OR "multi modal" ) OR AK=( "multimodal" OR "multi-modal" OR "multi modal" ) ) AND ALL=( "communication" OR "interface" OR "perception" OR "intention" OR ( interact* AND ( modalit* OR mode OR modes ) ) ) AND ALL=( "human robot" OR "human-robot" )

resulting in 308 entries. The searches are conducted on August 26th, 2020.
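The boolean logic of the Scopus query above can be made explicit programmatically: the "multimodal" variants are matched only in title and keywords, while the communication-related terms and the HRI terms may also match the abstract. The following Python sketch is illustrative only; the record fields (`title`, `keywords`, `abstract`) and function names are assumptions for demonstration, not part of any database syntax.

```python
import re

# Term lists taken from the search string in the text.
MULTIMODAL = ("multimodal", "multi-modal", "multi modal")
COMMUNICATION = ("communication", "interface", "perception", "intention")
HRI = ("human robot", "human-robot")

def mentions(text, terms):
    """Case-insensitive substring match against any of the given terms."""
    text = text.lower()
    return any(term in text for term in terms)

def interaction_modality(text):
    """Mirrors the ( interact* AND ( modalit* OR mode OR modes ) ) clause."""
    text = text.lower()
    return bool(re.search(r"interact\w*", text)) and bool(
        re.search(r"modalit\w*|\bmodes?\b", text))

def matches_query(record):
    """Apply the field scoping: MULTIMODAL in title/keywords only,
    the remaining clauses in title, keywords, or abstract."""
    title_kw = record["title"] + " " + record["keywords"]
    all_fields = title_kw + " " + record["abstract"]
    return (mentions(title_kw, MULTIMODAL)
            and (mentions(all_fields, COMMUNICATION)
                 or interaction_modality(all_fields))
            and mentions(all_fields, HRI))
```

For instance, a record mentioning "multimodal" only in its abstract would be rejected by this logic, which is exactly the scoping decision motivated in the text.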

3.5.2.3.3 Selection Criteria

No Author, Duplicates and Language

As for the field of the author, the removal of duplicates and the limitation to German and English publications, this literature review applies the same selection criteria as the one on trust.

Year

Given that the starting point in the literature is set to Perzanowski et al.'s [67] work from 2001, works older than 2001 are excluded. As stated above, the numerous citations of this work exhibit the influence it has had and therefore justify using it as the starting point of the investigation.

Keywords

In addition to the KW-based exclusion of social robotics, domestic robotics, military robotics, and medical robotics, communication in the context of tele-operation is also excluded, since in tele-operation the robot does not feature any autonomy according to the LORA framework by Beer et al. [47] introduced in the scoping literature review. Thus, as the definition of robot used herein is not met, those works are excluded. The same validation process for KW-based filtering is conducted in this literature review. Furthermore, just as above, title, KW, and abstract screening as well as forward and backward search are conducted to prevent the inclusion of false positives and the exclusion of false negatives.

3.5.2.3.4 Results from Screening

Just like in the literature review on trust above, the remaining publications' abstracts, KWs, and titles are screened, and publications are grouped in EndNote X9 according to the RQs. Apart from the classification along RQs, it is decided not to include publications focusing on only one or two modalities of communication. Given the technological development, publications focusing on implementing only one or two modalities are not deemed relevant for advancing the knowledge on the posed RQs. Thus, for empirical studies, three or more modalities have to be mentioned in the abstract. In case the number of modalities is not specifically mentioned, the publication is included and finally decided upon during the data extraction. The results of this analysis and classification are shown in Table 23.
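The modality-based inclusion rule described above can be summarized as a small decision function. This is a minimal sketch; the function name and return labels are illustrative assumptions, not terminology from the thesis.

```python
def screen_by_modality_count(modalities_in_abstract):
    """Screening rule for empirical studies:
    - abstract states three or more modalities -> include
    - abstract states fewer than three        -> exclude
    - abstract states no number (None)        -> keep and decide during data extraction
    """
    if modalities_in_abstract is None:
        return "defer to data extraction"
    return "include" if modalities_in_abstract >= 3 else "exclude"
```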

Category: # of categorized Publications

All Publications from Scopus and Web of Science: 906
  No Author: 16
  Duplicates: 282
  Older than 2001: 4
  Not English or German: 4
Remainder for keyword-based Selection: 600
  KW social robot(s), social (human(-)robot) interaction(s), socially assistive: 62
  KW humanoid: 39
  KW teaching, programming, education, school: 47
  KW medicine, medical, patient, rehabilitation, (health) (care), disability, nurse, surg(ery/ical), chair: 31
  KW assisted living, elderly (care), ag(e)ing, care home, home care, walking aid: 17
  KW home or domestic robot(s), domestic, home, household, living, (social) media, daily live(s): 12
  KW army, military, soldier, fight, (search (&/and)) rescue: 17
  KW tele(-)operation, tele(-)presence: 23
Remainder for title-, keyword- and abstract-based Screening: 352
  Empty Abstract: 3
  Associated, but irrelevant Research: 16
  Irrelevant Use Case:
    Military associated Robot: 10
    Social Robot: 24
    Service Robot: 25
    Domestic Robot: 9
    Health Robot: 2
    Search and Rescue Robot: 1
    Shopping Robot: 4
    Robot for Elderly: 3
    Without Category: 4
  Recognition:
    Mapping of Environment: 5
    Intention Recognition: 4
    Emotion Recognition: 7
    Activity Recognition: 2
    Person Modelling: 1
    Learning for Recognition: 7
  Two or less Modalities:
    Environment Mapping: 1
    Emotion Recognition: 11
    Intention Recognition: 6
    Speech Recognition: 4
    Gesture Recognition: 6
    Object Recognition: 9
    Reaching Prediction: 1
    People Tracking: 6
    Emotional Robot: 2
    Learning for Recognition: 3
    Without Category: 10
  HRI Applications:
    Object Hand-Over: 2
    Control of Robot: 15
    Control of several Robots: 1
    Communication: 24
    Task Coordination: 6
    Multimodal Communication for Inclusion: 1
    Sensing for Multimodal Communication: 2
    Database/Dataset: 12
    Programming of Multimodal Communication: 9
    Data Annotation: 4
  Further Duplicates: 8
Relevant Publications for Literature Review: 82
  Multimodal Communication General: 16
  Connection with Dynamic Role of Human: 2
  Connection with Trust: 3
  Further relevant HRI Applications:
    Communication/Conversation: 24
    Static Role of Human in relation to an individual Robot: 17
    Static Role of Human in relation to several Robots: 5
    Task Coordination: 9
    User Study: 6

Table 23 KW-, Title- and Abstract-based Screening - Multimodal Communication

In order to answer the above outlined RQs, categories are formed during the title, KW, and abstract screening. These categories are based on the subjective screening by this author; coding with the help of further authors is out of scope for this dissertation. While the original publication used as a starting point for this literature review uses two modalities, this literature review focuses on publications in which more than two modalities are employed for information exchange. As can be seen in the naming of the categories on HRI applications, both publications employing two or fewer modalities and publications employing more than two modalities cover the same aspects of HRI. Consequently, given the time constraints of this dissertation, the focus is set on the publications using more than two modalities, as these are judged to be more advanced and thus more indicative of the state of the art. However, for investigations on the dynamic role of the human, trust, and user studies, no such double coverage is found. Therefore, for those categories, publications with two or more modalities are included. Furthermore, no publications focusing on recognition are included, because those papers focus on the technical realization of recognition mechanisms as opposed to answering the above outlined RQs. The 82 papers classified as relevant are then checked for accessibility with this author's digital library access at KTH Royal Institute of Technology. Six publications cannot be accessed, resulting in 76 papers for further analysis. The above described data extraction is applied to those publications.

3.5.2.3.5 Data Extraction Results

Before the different RQs are addressed, it has to be noted that, based on screening the content of all publications, further exclusions are made in the course of the data extraction. For one, some papers are found to be irrelevant to the RQs despite the abstract, KW, and title screening. This concerns ten papers. These publications overly focus on aspects of HRI excluded in the KW screening, e.g., social robots. For another, studies or publications which report results on two pages, meaning they are usually a summary of a presentation at a conference, are not included herein. This concerns another four papers. The reason is that statistical and contextual analysis is not possible in sufficient detail for those papers. Furthermore, one more duplicate is identified during the data extraction. Lastly, as the focus of this literature review is on multimodal ways of communication in which more than two modalities are employed, as indicated above, publications found to limit themselves to one or two modalities are excluded as well. This concerns 22 papers. Forward and backward screening of the content-screened publications results in an additional four publications undergoing data extraction. Title, KW, and abstract screening is

conducted in order to determine whether potential publications fall into the relevant categories. Furthermore, in order to only include peer-reviewed publications, the availability of potentially relevant publications is checked in the Scopus and Web of Science databases. If a publication is not available through those, the paper is not included. In conclusion, 43 publications are eventually included in this literature review. The subsequent sections present the results from the data extraction along the lines of the posed RQs.
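The exclusion arithmetic of the data extraction, using only the counts stated in the text, can be checked with a short sketch (variable names are illustrative):

```python
# Screening funnel for the multimodal communication review,
# with all counts taken directly from the text.
relevant_after_screening = 82   # papers classified as relevant (Table 23)
inaccessible = 6                # not accessible via library access
papers_for_extraction = relevant_after_screening - inaccessible  # 76

excluded_irrelevant = 10        # off-topic despite title/KW/abstract screening
excluded_two_page = 4           # two-page conference summaries
excluded_duplicate = 1          # additional duplicate found during extraction
excluded_few_modalities = 22    # limited to one or two modalities
added_snowballing = 4           # forward/backward search additions

final_included = (papers_for_extraction
                  - excluded_irrelevant
                  - excluded_two_page
                  - excluded_duplicate
                  - excluded_few_modalities
                  + added_snowballing)
```

The arithmetic reproduces the counts reported in the text: 76 papers enter the data extraction and 43 publications remain after all exclusions and the snowballing additions.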

RQ1: What is multimodal communication’s significance for HRI?

One approach to motivating the significance of multimodal communication in HRI is rooted in interactional theory. The Multiple Resources theory models human attention as a number of different resources [217]. Each resource can be linked with different information sending, receiving, and processing modalities. The theory therefore posits that when tasks that have to be conducted in parallel require different resources, performance will be better than if they require the same resource. Thus, if the interaction between humans and robots employs different interaction modalities which tap into different resources, performance can be assumed to increase, as more information can be conveyed without more effort. This also provides further argumentation for focusing on publications featuring more than two modalities. Apart from this theoretically rooted motivation to consider multimodal communication, the other ten methodological and theoretical literature pieces found in this literature review are investigated for their stated motivation to consider multimodal interaction in HRI. In order to access this information, all relevant literature pieces are meta-analytically coded for their qualitative data concerning the motivation for multimodal communication. It is found that the most prevalent motivation for multimodality in HRI in theoretical literature pieces is the flexibility provided through complementing modalities [66, 253-255]. This refers to human-to-robot (H2R) communication. The flexibility in communication from robot to human is only mentioned in two of the four publications referenced above [66, 254]. Generally, motivations related to H2R communication are found more prevalently than those related to robot-to-human (R2H) communication. The other most frequently mentioned motivations are robustness in H2R communication through the combination of redundant information channels [20, 66, 255], and the intuitiveness, or naturalness, achievable with multimodal H2R communication [20, 253].
Closely related to this argument of naturality is the argument of humans inherently being used to interacting multimodally. Thus, two publications argue, interactions between humans and robots should also allow the human to interact multimodally with the robot [256, 257]. Table 24 summarizes these motivations and includes the remaining ones present in the theoretical literature.

Direction  Reason                                                  #  Reference
H2R        Complementing modalities / Flexibility                  4  [66, 253-255]
H2R        Redundant modalities / Robustness                       3  [20, 66, 255]
H2R        Intrinsically multimodal human-to-human communication   2  [256, 257]
H2R        Intuitiveness / Naturality                              2  [20, 253]
R2H        Human Resource Pools                                    2  [217, 258]
R2H        Complementing modalities / Flexibility                  2  [66, 254]
R2H        Naturality                                              2  [253, 259]
H2R        Usability / Ease of Use                                 1  [260]
R2H        Accurate detection of Human                             1  [260]
R2H        Redundant modalities / Robustness                       1  [66]

Table 24 Motivation for Multimodal Communication in theoretical Literature

Table 24 exhibits two dominant themes in the motivation to build multimodal communication into HRI. On one side, there are the performance-related arguments of flexibility, robustness, and accuracy. On the other side, there are arguments related to the naturality and usability of multimodal interaction, supported by the above devised arguments of attentional resources and interhuman communication. Thus, a dichotomy similar to that in the elaborations made on trust is found. The remaining two RQs will therefore aim to investigate this dichotomy in relation to trust and the role of the human in the interaction. However, before proceeding to the remaining two RQs, the comparative user studies found are summarized in order to gain an understanding of the empirical evidence behind these theoretical elaborations. Kaber et al. [261] experiment with visual and auditory cuing to support the human operator in remote interaction with a robot. The bi-modal cuing is found to significantly improve performance, measured as completion time, compared to single-modal and no-cuing conditions. Similarly, while bi-modal cuing significantly increases situational awareness, mono-modal cuing is not found to significantly increase awareness compared to no cuing. Workload is not found to increase across cuing conditions.
Thus, these results are in line with the Multiple Resources theory mentioned above, which is also referenced in Kaber et al.'s work. In another similar experiment, Aubert et al. [262] compare robot intent communication with no cue, motion-based cues, display-based cues, or multimodal cues combining motion and display. Set in a cooperative manufacturing context, the study uses objective and subjective fluency measures to evaluate the different modalities. Multimodal intent communication by the robot is perceived as most fluent and transparent by the human participants, with a significant difference to solely motion-based and no intent communication. A more detailed investigation of the results shows that while motion-based intent communication improves subjective teammate transparency and reduces conflicts, display-based intent communication improves the participant's idle time as well as the perceived transparency and fluency. Thus, in line with the arguments on complementary modalities above, this experiment provides evidence for

multimodal communication benefitting from combining the strengths of the individual modalities. Investigating robot feedback to humans, although in a conversational setting, Gonsior et al. [263] find speech and visual feedback provided through a display to be preferred by humans in terms of perceived importance. Additionally, comparing different combinations of speech, screen, and pointing gestures of the robot reveals that providing feedback beyond speech elicits greater comfort and reassurance in human participants. Thus, multimodality, also featuring non-human modalities such as a screen, is perceived as a useful communication capability of the robot. In support of human communication being naturally multimodal, Jokinen & Wilcock [264] find that the combination of a human's eye gaze, body posture, and facial expression during a conversation with a robot can predict the human's overall evaluation of the interaction with the robot significantly better than monomodal predictors. Furthermore, this multimodal combination of human behavior also performs well in predicting the human's evaluation of the robot in terms of its usability and responsiveness. Similarly, Gross et al. [265] investigate which non-verbal modalities in human-human interaction are effective in resolving instruction ambiguities. It is found that a combination of gestures, hand pose, and the semantics of the human speech is effective in resolving the ambiguities, again indicating the multimodality of human-human interaction. Hence, multimodal communication's significance is explicated in theory and supported by empirical research. Which modalities are to be combined to serve different communication goals cannot yet be stated conclusively and is possibly again dependent on the specific task domain.

RQ2: How does multimodal communication relate to trust?

Two studies are found which investigate the relationship between modality and trust. The relationship between single modalities and trust is already investigated and exhibited in the literature review on trust. Thus, the study by Sanders et al. [241], found in the previous literature review and also herein, is not summarized again. However, as an additional note, the motivation in this study to investigate multimodal communication is rooted in the human resource theory. The other study found herein investigates the combination of gestures to command the robot and lights as a feedback mechanism from robot to human [39]. The motivation to build multimodal interaction herein is not rooted in human capabilities but rather in the argument of multimodal interaction being easy to use. Results are compared from two experiments: in one, gestures are used to command the robot without robot feedback until the command is executed; in the other, the robot indicates whether the gesture

is detected by responding with a light signal until the command is executed. Trust is indirectly measured by asking respondents about the naturalness, reliability, usefulness, ease of use, and perceived response time, alluding to the above elaborations of trust being understood to be heavily influenced by the perceived performance of the robot. Employing only the pointing gesture without robot feedback is perceived as slower and less useful than when there is visual feedback from the robot. The fusion of gesture with feedback is also positively perceived in terms of naturalness and reliability. As no direct comparison between the two conditions is conducted in the same experimental setting, caution is warranted against generalizing these results. In conclusion, no final statement on the influence of multimodal communication on trust can be made; the statistical evidence provided is too weak to draw conclusions. Thus, rigorous research is needed to investigate how combinations of different interaction modalities for both H2R and R2H communication elicit trust. Drawing from the previous literature review, this will likely be moderated by task environments. As seen in the answer to RQ1, multimodal communication already exhibits relevant and considerable impact on HRI. Thus, the implications of multimodal communication for trust might be a secondary concern, and research in this context will also profit from understanding the long-term interactions between trust and performance in HRI addressed in the previous literature review.

RQ3: How does multimodal communication relate to control and the role of the human in HRI?

Similar to RQ8 of the literature review on trust, this RQ examines the interaction between the various roles humans can be given, the control associated with those roles, and multimodal interaction. In particular, it is investigated how motives to implement multimodality are affected by the human role, as well as which modalities are implemented in those multimodal interaction frameworks. Furthermore, publications which use a static role attribution are differentiated from others which investigate and compare different roles and control mechanisms. In order to gain these insights, a meta-analytic, qualitative data extraction is conducted on the publications of interest. These are mainly concepts and proofs of concept, some of which also feature a user evaluation. None of the above-included theoretical publications, nor any of those related to trust, are included herein. Two publications are found investigating different human role attributions and multimodal interaction. Kaber et al.'s [261] study on remote interaction is already mentioned in another context herein. As another aspect of their study, the authors compare manual, adaptive, and fully automated robot control in combination with multimodal cuing from the system to the human. The cues are used to inform the operator about system state changes in the adaptive control mode, in which control switches dynamically between manual and supervisory control. It is found that the performance loss due to adaptive control compared to supervisory control is significantly reduced with bimodal cuing of control changes compared to monomodal visual cuing and no cuing. When the control mode is changed in adaptive control, cuing is not found to eradicate the entire situational awareness loss seen in the no-cuing condition. However, auditory combined with visual cuing reduces the awareness loss more than monomodal cuing. Lastly, as already mentioned above, workload is not found to significantly increase through cuing. Thus, multimodal information support can help operators manage trade-offs when they switch roles throughout their interaction with robots. In another experiment with a specifically industrial setting, Giuliani & Knoll [266] compare two different robot roles. The robot either takes an instructive role, handing the human parts to assemble and beforehand explaining how to assemble them, or it takes a supportive role in which only the parts are handed to the human and explanations are provided only in case the human takes a wrong part. This crucially also influences the human role in the mutual interaction: either the human executes robotic commands in the instructive case, or the human is guided towards the right solution without preventive control. In order to be able to take the different roles, the authors equip the robot with multimodal sensory capabilities so that the human can interact naturally with the robot. The robot can, furthermore, also interact with the human through multimodal communication. The study reveals that humans have no preference to work with either robot role and accept both; however, humans develop more positive feelings for the supportive robot.
As the experiment is not repeated without multimodal communication, no definite conclusions can be drawn as to the importance of multimodality in enabling different role attributions in HRI. According to the authors, it is only the multimodality of both the H2R and the R2H interaction that allows the robot to take the described roles. Thus, further empirical research is needed to investigate the interaction between different control attributions and multimodal interaction. Whereas the aforementioned studies vary the human role as a condition in the study, more publications are found in which the human role remains static during the interaction. This does not imply that the relationship between human and robot is static, as mixed-initiative systems are also covered in the included concepts and proofs of concept; control can thus vary even though role ascriptions are static. Furthermore, in all the following publications the suggested frameworks and concepts consistently feature more than two modalities, allowing a comprehensive review of this segment of publications on multimodal communication with relevance to industrial HRI. In total, 24 publications are featured in the following meta-analysis; details are provided in Table 25.

| Year | Author | Communication Direction | Context | Role | Use of Multimodality | Type of Literature | Reference |
|---|---|---|---|---|---|---|---|
| 2004 | Perzanowski et al. | Bi-directional | Remote | Controller | human controlling robot & robot asking for clarification | Proof of concept | [267] |
| 2004 | Hanafiah et al. | H2R | General | Controller | human-robot conversation (disambiguation) | Proof of concept | [268] |
| 2006 | Foster et al. | Bi-directional | Industrial | Collaborator | human controlling robot & robot feedback to human | Proof of concept | [269] |
| 2006 | Hüwel et al. | H2R | General | Instructor | human-robot conversation (disambiguation, information adding) | Proof of concept | [270] |
| 2006 | Li et al. | Bi-directional | Service | Co-operator | human-robot conversation (disambiguation) | Proof of concept | [271] |
| 2007 | Li & Wrede | R2H | Service | Instructor | human-robot conversation (robot abilities) | Proof of concept | [272] |
| 2009 | Tan et al. | Bi-directional | Industrial | Collaborator | support for human operator | Proof of concept | [273] |
| 2012 | Chao | Bi-directional | General | Co-operator | human-robot conversation (disambiguation, information adding, reinforcement) | Proof of concept | [274] |
| 2013 | Zaatri & Bouchemal | H2R | Remote | Controller | human controlling robot | Proof of concept | [275] |
| 2015 | Cherubini et al. | H2R | Industrial | Collaborator | human controlling robot | Proof of concept | [276] |
| 2015 | Hagiwara | H2R | General | Instructor | human-robot conversation (disambiguation) | Proof of concept | [277] |
| 2016 | Jacob & Wachs | Bi-directional | General | Co-operator | human controlling robot | Proof of concept | [278] |
| 2016 | Höcherl et al. | R2H | Remote/Multirobot | Controller | support for human operator | Proof of concept | [279] |
| 2017 | Kane et al. | Bi-directional | General | Controller | human controlling robot | Concept | [280] |
| 2017 | Mortimer & Elliott | H2R | Industrial | Co-operator | human controlling robot | Concept | [281] |
| 2018 | Horvath et al. | Bi-directional | Industrial | Controller | human controlling robot & information provision | Concept | [282] |
| 2018 | Liu et al. | H2R | Industrial | Controller | human controlling robot | Concept | [283] |
| 2018 | Liu et al. | H2R | Multirobot | Controller | human controlling robot | Concept | [284] |
| 2018 | Cacace et al. | Bi-directional | Industrial | Co-operator | human controlling robot | Proof of concept | [285] |
| 2018 | Kardos et al. | Bi-directional | Industrial | Co-operator | human controlling robot & robot feedback to human | Concept | [286] |
| 2019 | Papanastasiou et al. | Bi-directional | Industrial | Co-operator | human controlling robot & information provision | Proof of concept | [287] |
| 2019 | Petruck et al. | H2R | Remote/Industrial | Collaborator | human controlling robot | Concept | [288] |
| 2019 | Shu et al. | Bi-directional | Industrial | Controller | human controlling robot & robot feedback to human | Concept | [289] |
| 2020 | Liu et al. | H2R | Remote | Controller | human controlling robot | Concept | [290] |

Table 25 Publications used in the qualitative Meta-Analysis

From findings in the trust literature review, it might be expected that different modalities are employed in different task contexts. Thus, the initial analysis considers the adopted interaction modalities, which are shown in Table 26 for the environments industrial, general, service, multirobot, and remote. While dedicated service robotics are excluded from

this review, it is found that there are several publications with relevance for the industrial context. The general category refers to publications which explicitly mention neither an industrial use case nor a particularly social or service-oriented use case. Multirobot refers to a human interacting with several robots simultaneously, and remote refers to remote interaction, while specifically remote supervisory control environments are excluded as stated in the section on the selection criteria for this literature review.

| Direction | Modality | Industrial | General | Service | Multirobot | Remote |
|---|---|---|---|---|---|---|
| H2R | Gesture/Hand motion | 8 | 8 | 1 | 2 | 4 |
| H2R | Speech | 6 | 8 | 1 | 2 | 3 |
| H2R | Tactile/Force | 4 | 2 | 0 | 0 | 2 |
| H2R | Body motion | 3 | 2 | 1 | 0 | 0 |
| R2H | Speech synthesis | 4 | 1 | 1 | 0 | 1 |
| R2H | Display | 3 | 0 | 0 | 0 | 0 |
| H2R | Eye gaze | 1 | 3 | 0 | 0 | 0 |
| H2R | Buttons | 3 | 0 | 0 | 0 | 1 |
| H2R | Body pose | 2 | 0 | 0 | 0 | 0 |
| R2H | Interactive display | 2 | 0 | 0 | 0 | 0 |
| R2H | Text | 2 | 0 | 0 | 0 | 1 |
| H2R | Hand pose | 1 | 1 | 0 | 1 | 0 |
| H2R | Visual display | 1 | 1 | 0 | 0 | 1 |
| H2R | Interactive interface (AR/VR) | 1 | 1 | 0 | 0 | 0 |
| R2H | Visual projection | 1 | 0 | 0 | 0 | 0 |
| R2H | Non-verbal (not specified) | 0 | 0 | 1 | 0 | 0 |
| R2H | Pictures/Graphics | 2 | 0 | 0 | 0 | 1 |
| H2R | | 1 | 0 | 0 | 2 | 1 |
| H2R | Head motion | 1 | 0 | 0 | 0 | 0 |
| R2H | Wristband | 1 | 0 | 0 | 0 | 0 |
| R2H | Facial expression | 1 | 0 | 0 | 0 | 0 |
| H2R | Interactive display | 1 | 0 | 0 | 0 | 0 |
| H2R | Force/Torque sensor | 1 | 0 | 0 | 0 | 0 |
| H2R | Microphone | 1 | 0 | 0 | 0 | 0 |
| H2R | Camera as robot vision system | 1 | 0 | 0 | 0 | 0 |
| H2R | Smartwatch | 1 | 0 | 0 | 0 | 0 |
| H2R | 6 Degrees-of-Freedom mouse | 1 | 0 | 0 | 0 | 1 |
| R2H | Video | 1 | 0 | 0 | 0 | 0 |
| H2R | Head pose | 0 | 1 | 0 | 0 | 0 |
| H2R | Personal Digital Assistant | 0 | 1 | 0 | 0 | 1 |
| R2H | Movement speed | 0 | 1 | 0 | 0 | 1 |

Table 26 Chosen Interaction Modalities by Environment

Table 26 exhibits the importance of H2R interaction modalities within multimodal frameworks, as these dominate the top of the list across relevant environments. This is in line with the previous finding on motivations in the theoretical literature and is persistent throughout the different identified environments. In the industrial environment, H2R modalities are employed 38 times compared to 17 R2H modality counts. In the general context, the difference is 28 H2R modality counts versus two R2H. In the service environment, it is three H2R to two R2H modality

counts; in multirobot settings, it is seven H2R compared to zero R2H modality counts; and lastly, in the remote environment, H2R modalities are used 14 times compared to four uses of R2H modalities. Table 26 also reveals the importance of human speech and gestures across environments. In general, however, a great variance in employed modalities is found, numbering 19 different H2R and 13 R2H interaction modalities. Except for the application of H2R eye gaze in the general context, no tendency towards using more implicit communication modalities is found in the general or service context compared to the industrial context of manufacturing and assembly. This stands in contrast to what might be expected from the trust literature review, which indicated a greater focus on affective trust in more social contexts. Next, Table 27 shows the usage of modalities for the different identified roles of humans in the included literature. The role descriptions devised in the descriptive literature review above are used to define the role the human is attributed in the different publications.
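The per-environment direction totals quoted in this paragraph can be cross-checked by summing the rows of Table 26. The sketch below is purely illustrative and not part of the original analysis; the counts are this author's transcription of Table 26, and one H2R row whose modality name is missing in the source is labeled accordingly.

```python
# Illustrative tally of Table 26: modality counts per environment, split by
# communication direction (H2R = human-to-robot, R2H = robot-to-human).
# Each tuple: (direction, modality, [Industrial, General, Service, Multirobot, Remote]).
TABLE_26 = [
    ("H2R", "Gesture/Hand motion",           [8, 8, 1, 2, 4]),
    ("H2R", "Speech",                        [6, 8, 1, 2, 3]),
    ("H2R", "Tactile/Force",                 [4, 2, 0, 0, 2]),
    ("H2R", "Body motion",                   [3, 2, 1, 0, 0]),
    ("R2H", "Speech synthesis",              [4, 1, 1, 0, 1]),
    ("R2H", "Display",                       [3, 0, 0, 0, 0]),
    ("H2R", "Eye gaze",                      [1, 3, 0, 0, 0]),
    ("H2R", "Buttons",                       [3, 0, 0, 0, 1]),
    ("H2R", "Body pose",                     [2, 0, 0, 0, 0]),
    ("R2H", "Interactive display",           [2, 0, 0, 0, 0]),
    ("R2H", "Text",                          [2, 0, 0, 0, 1]),
    ("H2R", "Hand pose",                     [1, 1, 0, 1, 0]),
    ("H2R", "Visual display",                [1, 1, 0, 0, 1]),
    ("H2R", "Interactive interface (AR/VR)", [1, 1, 0, 0, 0]),
    ("R2H", "Visual projection",             [1, 0, 0, 0, 0]),
    ("R2H", "Non-verbal (not specified)",    [0, 0, 1, 0, 0]),
    ("R2H", "Pictures/Graphics",             [2, 0, 0, 0, 1]),
    ("H2R", "(unnamed in source)",           [1, 0, 0, 2, 1]),
    ("H2R", "Head motion",                   [1, 0, 0, 0, 0]),
    ("R2H", "Wristband",                     [1, 0, 0, 0, 0]),
    ("R2H", "Facial expression",             [1, 0, 0, 0, 0]),
    ("H2R", "Interactive display",           [1, 0, 0, 0, 0]),
    ("H2R", "Force/Torque sensor",           [1, 0, 0, 0, 0]),
    ("H2R", "Microphone",                    [1, 0, 0, 0, 0]),
    ("H2R", "Camera as robot vision system", [1, 0, 0, 0, 0]),
    ("H2R", "Smartwatch",                    [1, 0, 0, 0, 0]),
    ("H2R", "6 Degrees-of-Freedom mouse",    [1, 0, 0, 0, 1]),
    ("R2H", "Video",                         [1, 0, 0, 0, 0]),
    ("H2R", "Head pose",                     [0, 1, 0, 0, 0]),
    ("H2R", "Personal Digital Assistant",    [0, 1, 0, 0, 1]),
    ("R2H", "Movement speed",                [0, 1, 0, 0, 1]),
]

ENVIRONMENTS = ["Industrial", "General", "Service", "Multirobot", "Remote"]

def direction_totals(rows):
    """Sum counts per environment, separately for each communication direction."""
    totals = {"H2R": [0] * len(ENVIRONMENTS), "R2H": [0] * len(ENVIRONMENTS)}
    for direction, _modality, counts in rows:
        for i, c in enumerate(counts):
            totals[direction][i] += c
    return totals

totals = direction_totals(TABLE_26)
for i, env in enumerate(ENVIRONMENTS):
    print(f"{env}: {totals['H2R'][i]} H2R vs {totals['R2H'][i]} R2H")
```

Running this reproduces the totals stated in the text: 38 vs 17 (industrial), 28 vs 2 (general), 3 vs 2 (service), 7 vs 0 (multirobot), and 14 vs 4 (remote).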

| Direction | Modality | Controller | Instructor | Collaborator | Cooperator |
|---|---|---|---|---|---|
| H2R | Gesture/Hand motion | 10 | 2 | 2 | 5 |
| H2R | Speech | 10 | 2 | 1 | 4 |
| H2R | Tactile/Force | 3 | 0 | 2 | 1 |
| H2R | Touch | 3 | 0 | 0 | 0 |
| H2R | Body motion | 2 | 1 | 0 | 3 |
| R2H | Display | 2 | 0 | 0 | 1 |
| H2R | Hand pose | 2 | 0 | 1 | 0 |
| R2H | Speech synthesis | 1 | 1 | 2 | 2 |
| H2R | Eye gaze | 1 | 0 | 1 | 2 |
| H2R | Buttons | 1 | 0 | 1 | 1 |
| H2R | Body pose | 1 | 0 | 1 | 0 |
| H2R | Visual display | 1 | 0 | 1 | 0 |
| R2H | Interactive display | 1 | 0 | 0 | 1 |
| R2H | Wristband | 1 | 0 | 0 | 0 |
| R2H | Visual projection | 1 | 0 | 0 | 0 |
| H2R | Personal Digital Assistant | 1 | 0 | 0 | 0 |
| R2H | Movement speed | 1 | 0 | 0 | 0 |
| H2R | Interactive interface (AR/VR) | 0 | 1 | 0 | 1 |
| H2R | Head pose | 0 | 1 | 0 | 0 |
| R2H | Non-verbal (not specified) | 0 | 1 | 0 | 0 |
| R2H | Text | 0 | 0 | 2 | 0 |
| R2H | Pictures/Graphics | 0 | 0 | 2 | 0 |
| R2H | Facial expression | 0 | 0 | 1 | 0 |
| H2R | 6 Degrees-of-Freedom mouse | 0 | 0 | 1 | 0 |
| R2H | Video | 0 | 0 | 1 | 0 |
| H2R | Head motion | 0 | 0 | 0 | 1 |
| H2R | Interactive display | 0 | 0 | 0 | 1 |
| H2R | Force/Torque sensor | 0 | 0 | 0 | 1 |
| H2R | Microphone | 0 | 0 | 0 | 1 |
| H2R | Camera as robot vision system | 0 | 0 | 0 | 1 |
| H2R | Smartwatch | 0 | 0 | 0 | 1 |

Table 27 Chosen Interaction Modalities by Human Role

For the controller role, as well as for the instructor and the cooperator roles, speech and gestures employed by the human appear to present the accumulated focus of research, whereas for the collaborator role no such clear focus is discernible. However, these differences can only be deemed indicative, as the sample size is too limited. Naturally, with the human being the controller, H2R modalities are employed more often than R2H modalities (35 to seven). Similarly, as expected, for the instructor role H2R modalities are also employed more often than R2H modalities (seven compared to two). In contrast, there is a greater balance for the collaborator role, with eleven H2R compared to eight R2H modality counts. Interestingly, for the cooperator role an even starker contrast is found, with 23 H2R modality counts compared to only four R2H modality counts.
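The per-role direction totals just cited can likewise be reproduced by summing Table 27 column-wise per direction. The sketch below is illustrative only; the count vectors are this author's transcription of Table 27, in table order, with modality labels omitted for brevity.

```python
# Illustrative tally of Table 27: per-role modality counts by direction.
# Column order of each inner list: [Controller, Instructor, Collaborator, Cooperator].
# Rows transcribed from Table 27, in table order, H2R and R2H rows kept separate.
H2R_ROWS = [
    [10, 2, 2, 5], [10, 2, 1, 4], [3, 0, 2, 1], [3, 0, 0, 0], [2, 1, 0, 3],
    [2, 0, 1, 0], [1, 0, 1, 2], [1, 0, 1, 1], [1, 0, 1, 0], [1, 0, 1, 0],
    [1, 0, 0, 0], [0, 1, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
    [0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 1],
]
R2H_ROWS = [
    [2, 0, 0, 1], [1, 1, 2, 2], [1, 0, 0, 1], [1, 0, 0, 0], [1, 0, 0, 0],
    [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 2, 0], [0, 0, 1, 0],
    [0, 0, 1, 0],
]

def column_totals(rows):
    """Sum each role column across all modality rows."""
    return [sum(col) for col in zip(*rows)]

roles = ["Controller", "Instructor", "Collaborator", "Cooperator"]
for role, h, r in zip(roles, column_totals(H2R_ROWS), column_totals(R2H_ROWS)):
    print(f"{role}: {h} H2R vs {r} R2H")
```

This yields the totals stated in the text: 35 vs 7 (controller), 7 vs 2 (instructor), 11 vs 8 (collaborator), and 23 vs 4 (cooperator).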
This suggests that while human-robot teams are repeatedly mentioned to feature mixed initiative shared between human and robot, multimodal interaction research indicates that the current focus still seems to be on the human acting upon the robot to achieve a goal. Thus, in order to harness the presumed benefits of truly mixed-initiative human-robot teams, more focus is needed on how the robot can interact with the human. In order to understand the motivations to embed multimodal interaction in the different task contexts, Figure 23 is consulted.

| Motivation (Direction) | Industrial | General | Service | Multirobot | Remote |
|---|---|---|---|---|---|
| H2R / Intuitiveness / Naturality | 5 | 2 | 1 | 1 | 1 |
| H2R / Complementing modalities / Flexibility | 4 | 1 | 1 | 1 | 2 |
| R2H / Adapt to the task context | 2 | 0 | 0 | 0 | 0 |
| None / No motivation | 1 | 1 | 0 | 0 | 1 |
| R2H / Complementing modalities / Flexibility | 1 | 1 | 0 | 0 | 0 |
| H2R / Human factors as center point | 1 | 0 | 0 | 0 | 0 |
| H2R / Usability / Ease of use | 1 | 0 | 0 | 0 | 0 |
| H2H / Intrinsically multimodal | 0 | 4 | 1 | 0 | 0 |
| H2H / Easy to understand / Naturality | 0 | 1 | 1 | 0 | 0 |
| H2R / Main channel of communication: Speech | 0 | 0 | 1 | 0 | 0 |
| H2R / Redundant modalities / Robustness | 0 | 0 | 0 | 2 | 1 |

Figure 23 Motivation for Multimodal Communication by Task Context

In contrast to the modalities used, a stronger difference is perceivable in the motivations within the different contexts. Whereas the industrial context of manufacturing and assembly is more focused on flexibility and naturality for the human operator, the general and service contexts take greater inspiration from human-to-human interaction. Multirobot and remote interaction motivate multimodal interaction with its robustness. Thus, while the

motivation differs between contexts, the chosen modalities do not differ as much, which hints at the universality of multimodal interaction configurations. Lastly, the motivation for the different roles of the human is analyzed; for this, Figure 24 is consulted.

Figure 24 Motivation for Multimodal Interaction by Human Role

It is found that both the controller and collaborator roles are more often associated with performance aspects, such as flexibility and robustness, whereas the instructor role is almost exclusively, and the cooperator role equally, associated with human-to-human interactional aspects. As the collaborator role is quite limited by its requirement of joint action, it is found to be often implemented in industrial use cases, whereas the cooperator role is more widely applicable and, thus, is also applied in more social contexts. The stronger focus on human-to-human communication is, therefore, to be expected. Interestingly, this does not reflect itself in the employed interaction modalities, indicating the universality of modalities and the configurability of multimodal frameworks. Overall, while the motivational background is found to differ across task contexts and roles, the employed modalities are not found to differ greatly. In contexts relevant to industry, the current focus of the scientific community is found to be on humans acting upon robots rather than on robots acting upon humans. This is supported by the higher counts of H2R modalities as well as by the higher counts of H2R motivations in the theoretical literature. Moreover, more publications are found which place the human in the role of the controller rather than the collaborator or cooperator. It can be expected that with a greater focus on human-robot teams and HRC, more emphasis will be laid on interpersonal interaction as well as on feedback modalities for R2H interaction.

3.5.2.3.6 Limitations

Similar to the literature review on trust, this literature review is limited by the subjectivity of the literature selection as well as of the meta-analytic coding process; both are conducted solely by this author and are not repeated by an independent researcher. Furthermore, this literature review includes several concepts and proofs of concept which are not yet evaluated through user studies. Thus, the empirical evidence for humans' positive perception of multimodal interaction frameworks with more than two modalities is limited, and future research is motivated to initiate controlled user studies in which several configurations are tested to understand users' perception of different possible multimodal interaction configurations. Lastly, no statistical tests are conducted on the meta-analytic results, which limits the rigor of the conclusions and implications drawn. Furthermore, no equal distribution of publications across environments and human roles is found, limiting the generalizability further and motivating future research to investigate human roles other than discretely controlling the robot. In future research, the coding employed should also be extended to multimodal frameworks, concepts, and implementations featuring two modalities. The publications with two modalities excluded during the data extraction mainly feature gesture and voice command modalities for the human. By extending the coding scheme, a comprehensive picture of all multimodal concepts can be drawn, also in terms of the motivation to limit a framework to two modalities.

3.5.3 Results – Scenario Transfer to Aspects of Human-Robot Interaction

3.5.3.1 First Scenario – Workforce Emancipation

3.5.3.1.1 The Role of the Human and Control

This scenario exhibits a gridlock between companies and their workforces on the matter of (re-)skilling. Due to a growing mistrust between the two, the control that is handed to the human workers will mainly be motivated by necessity rather than by trust-based empowerment. The counter-movement from labor unions, however, will try to increase control. Thus, the controller role is likely to be filled. The controller role allows the workers to maintain some degree of the process knowledge necessary to sustain operations. The company is enabled to increase the level of autonomy while not being able to take the humans out of the control and design loop entirely. Efficiency gains are limited by the level of autonomy handed to the robots.

Thus, the interaction will be focused on task-specific control. It will not be necessary for the human and the robot to share their workspace. In certain instances, this might allow the work-life balance of front-line workers to increase, as remote control becomes possible while a certain level of robot autonomy is maintained. Regardless of the distance between humans and robots, human-robot teams as described in the literature will be unlikely, as the workforce will perceive robots as agents of the company, for which dispositional trust has diminished leading up to the workforce emancipation.

3.5.3.1.2 Human Trust in Robots

Besides the relation of trust to the human role and control as well as to communication, the trust antecedents listed on the left of Table 28 are investigated.

| Trust Factor from [91] | Associated Trend or Theme |
|---|---|
| Workload | Working intensity |
| Age | Demographic change |
| Attitude towards robots, appearance, anthropomorphism | SciFi taxonomy from [194] |
| Expertise, prior experience | (Re-)Skilling offers |
| Level of automation | Replacement |

Table 28 Association of Trust-relevant Factors and discussed Trends and Themes

As working intensity is posited to increase in this scenario, workload can be assumed to increase to a certain degree as well. However, no perfect correlation can be assumed, as workload, and cognitive load in particular, is found to be reducible by appropriate interaction and task design, as seen in the theoretical literature review. Nonetheless, using the notion of humans being enframed by technology, robotics might pose a challenge in the sense that task-related workload can be attenuated while overall perceived working intensity increases. Still, as workload does not need to increase while trust increases, this scenario is not assumed to decrease trust based on higher working intensity. As demographic change is assumed to continue, trust might decrease, as older humans are found to trust robots less. However, this assumes that workers who are now young and inclined to trust robots more will have lower trust as they grow older. This assumption is not found to be investigated and, thus, is also not validated. Therefore, no prediction on trust can be made based on demographic change. As humans are projected to mainly act upon and not with the robot, robot form and anthropomorphism are expected to focus more on performance-based indicators rather than on eliciting teammate affect. Thus, this considerable antecedent of trust can be expected to become less influential for trust or distrust building.
However, humans are likely to perceive the robots as agents of the company in this scenario. Thus, attitudes towards robots might shift

given the trust relationship between the workforce and companies. The low inclination of workers to trust robots based on their perceived agency is compensated by the notion that the robots are being controlled by humans in the controller role and are thus subordinate to their human interactors. Consequently, humans are projected to base their trust in robots less on affect and more on robot performance, as posited in the literature review. In conclusion, dispositional trust is projected to be sufficient for the interaction likely to dominate in this scenario. Trust calibration, required for appropriate situational and interactional trust to be built, will nonetheless be an issue, but it can be moderated by adequate interaction design. As (re-)skilling is aimed at enabling control as the compromise between workforce and companies, trust might be developed adequately for the interaction relationships. However, this scenario does not allow for a seamless transition of robots from tools to teammates: prior experiences and expertise will not be developed at scale to allow for such transitions. Similar to fears of replacement, which might affect the attitude towards robots as indicated above, higher levels of automation, and in the robot's case levels of autonomy, are suggested by the literature to decrease trust. Interaction design can alleviate this decrease by ensuring transparency. The focus of transparency is posited to be on invisibility, i.e., access to the robot's environment, rather than on understandability, i.e., access to the robot's decision making, as the robot is projected to have neither direct nor indirect control over the human.

3.5.3.1.3 Multimodal Communication

As the roles ascribed to this scenario only include static control allocation, the literature review on trust indicates that adequate trust establishment should focus on the ongoing interaction. Algorithms aimed at optimizing human trust by varying the level of autonomy will be less relevant. Rather, communication design will be used to provide the transparency necessary to elicit the trust needed for interaction. While more natural and intuitive interaction modalities provide possibilities for control, communication is focused on the H2R direction. As understandability is of less relevance in this scenario, R2H communication is less relevant and only becomes important in remote interaction. Thus, multimodal communication serves the purpose of ensuring both robust and flexible interaction, however, in the form of the human acting upon the robot. Communication from the robot to the human is, if necessary, limited to providing the human with awareness of the robot's environment; depending on the complexity of the environment, this might be multimodal, but not necessarily.

3.5.3.2 Second Scenario – Moderated Innovation

3.5.3.2.1 The Role of the Human and Control

It is easy to perceive that in this scenario, in which collaborative HRI is projected to become the most prevalent relationship, humans are likely going to take on the roles of collaborator, cooperator, or, in brief, teammate, and in some cases team leader. This rests on the design promise of collaborative HRI that human skills can be optimally leveraged through the robot by teaming both up. Thus, uniquely human abilities are employed to the benefit both of the employers, by enabling a combination of efficiency and flexibility, and of the employees, by empowering them to leverage their unique abilities. Control resides with the human-robot team. In settings of moral importance, ultimate control may remain with a human team leader, whereas otherwise the human-robot team will operate in a mixed-initiative mode on the planning and execution levels.

3.5.3.2.2 Human Trust in Robots

When it comes to workload, the nature of human-robot teams will provide possibilities to dynamically alleviate excessive workload through the mentioned mixed-initiative systems. Thus, workload is not expected to negatively influence trust in this scenario. In addition, tasks which cause non-stimulating workload for the human can be offloaded to the robot, further benefitting workload reduction. Similar to the other scenarios, the influence of age on trust is hardly predictable, as little is understood about how trust changes over a lifetime. However, as indicated in the literature review on trust, since the intrinsic trust calibration of older humans might be overly sensitive, age-attenuated trust calibration mechanisms might have to be developed. This is especially relevant for human-robot teams, since the literature agrees that in this relationship human trust in robots grows more complex and has increased importance for the functioning of the team. As interaction will more often take place physically, considerations of form as well as of the degree of anthropomorphism will play a larger role in the trust relationship. As the design space of bodily as well as behavioral anthropomorphism is found to be significant, research is highly motivated to rigorously investigate the relationship between trust and anthropomorphism further. It should be considered that behavioral anthropomorphism is found to have a greater influence on trust than bodily anthropomorphism. Thus, an initial design recommendation is to focus on a bodily form which elicits intelligence and is adequate for the spectrum of tasks considered for the robot, whereas behavioral anthropomorphism can be designed to elicit trust. It remains to be investigated how much dissonance between bodily and behavioral anthropomorphism is still perceived as acceptable by humans before attitudes worsen.
Attitudes towards robots are increasingly benevolent as the scientific reasoning about their benefits permeates through society. Attitudes further benefit from the exposure of best-practice companies in which the ideal of liberating and empowering robots is observable. Robots in this scenario are perceived less as agents of the company and more as peers to the human workers, supporting a way out of tedious, dirty, and perilous work. (Re-)skilling plays a crucial role in ensuring the level of prior experience as well as expertise necessary to work in human-robot teams. A greater understanding of the workings of robots will be necessary: just as successful human-human teams require understanding of the other human, a completely subservient and adapting robot would not allow an empowerment of the human but would rather have to remain in the role of a tool. Educating humans about the workings of robots will also have positive repercussions. By increasing robotic literacy in society, society is made less susceptible to simplistic, negative attitudes toward the technology. Education on robotics can also be seen as an empowerment of the educated humans, as they will be able to contribute to the design of human-robot teams. Lastly, (re-)skilling is also crucial to prevent a backlash of the workforce. As indicated in the scenario narrative, jobs with low qualification profiles will be automated along this innovation trajectory. Preventing this automation from causing societal repercussions is of high importance; otherwise, it could cause a dominance of negative attitudes towards the technological trajectory, which might induce a shift towards the first scenario. The level of automation is not fixed in task domains in which humans directly interact with robots. For a certain number of tasks, robots will take over from humans, and for these the system will be automated.
Thus, trust in those parts might be reduced; however, as these become staples in daily operation and humans are not projected to interact heavily with them, this reduced trust is not projected to be an impairment. On the contrary, in the task domains in which humans team up with robots, the level of autonomy is flexible and adapts to the human. Thus, in combination with a necessary shift in transparency towards understandability of the robot, trust can be adequately calibrated. As the relationship between human and robot becomes dynamic, eliciting trust in the robot as it becomes a teammate becomes a significant design challenge. While dispositional trust is projected to be positive in this scenario, situation- and task-domain-dependent calibration of interactional and situational trust is of high priority for HRI design engineers. In that regard, mixed-initiative algorithms will be challenged to ensure a calibrated level of human trust. Research thus has to continue to investigate which methods are suited to different task domains and environmental situations in order to maintain the trust relationship.

3.5.3.2.3 Multimodal Communication

Multimodal communication will be vital in both directions of communication. As human-robot teams become mixed-initiative, both the human and the robot need to be able to understand their counterpart, making expressivity, both active and passive, an important design aspect. However, research is needed as to which forms of interaction and communication provide the best-suited modality to calibrate trust as well as to negotiate task-related aspects such as planning and execution.

3.5.3.3 Third Scenario – Economics of Replacement

3.5.3.3.1 Role of the Human and Control

As this scenario is quite similar to the first one in its outcome, human roles will be similar. However, as the moment of realization is missing in this scenario, the humans remaining in work are increasingly taken out of the control as well as the design loop. Thus, the diminishing workforce dominantly takes the roles of supervisors and instructors. Supervisory roles allow workers to claim some degree of goal-setting responsibility, while instructor roles ensure some degree of process-related work. Both roles allow the company to harness the efficiency gains provided by the high levels of robot autonomy associated with those roles. Similar to the first scenario, humans perceive the robots as agents of the company. However, as opposed to intervening, workers and companies in this scenario are ensnared in an environment of intensive competition, causing the economic rationale to dominate. Critically, such dominance of the economic rationale is more likely to unfold in industries in which robotics has a long tradition and, due to technological constraints, has only been implementable in the form of automated devices.

3.5.3.3.2 Human Trust in Robots

In general, trust calibration will be less important in this scenario than in the others, as the interaction is increasingly detached. Robot technology is becoming more and more capable of emulating human capabilities. Thus, human intervention is less and less needed, with reduced possibilities for mis- and disuse. Working conditions are projected not to intensify significantly. As humans are taken out of the control loop, they are less enframed by robotics, and thus their work is increasingly detached from the robots. Whereas in the first scenario workload has to be traded off against trust, herein situational awareness is of greater importance, as the interaction will only take place occasionally.

As in the other scenarios, no definitive assumption on age-related trust can be made here either. However, demographic change is a beneficial force in this scenario as it reduces the pressure on the active labor force to find employment. Humans entering the workforce can be assumed to have an increased affinity for technology, which enables them to take on supervisory and instructor roles. As the direct interaction between humans and robots is projected to be at a low level in this scenario, aspects of appearance and anthropomorphism are projected to be less important for building trust. Although they have been found to be significant antecedents, the literature indicates, without final agreement, that the influence of these factors is larger when humans interact with the robot face to face [235]. Thus, behavioral and bodily design aim for maximum performance as opposed to emulating human traits. Similarly to the first scenario, robots are perceived as agents of the company. This also causes negative attitudes towards robots in this scenario. At the same time, some workers will transition into supervisory roles, which is seen by the beneficiaries as a positive effect of robotics. (Re-)skilling is first and foremost aimed at ensuring that the workforce is able to intervene in automated operations to ensure the pursuit of given goals. Similarly, for instructor roles, workers are trained to convey instructions. Literacy in robotic technology is of secondary concern, as is interaction training. Thus, robot literacy is not supported by companies and is rather kept to a viable minimum. However, this only enables human and robotic skills to be leveraged to a limited extent. Efficiency gains rely on the continuous development of more advanced robotics. Consequently, interactional trust might be very volatile in the beginning due to a lack of prior experience and expertise, and a tendency to distrust robots might prevail in the long run.
Lastly, as noted, the degree of automation and the robots’ level of autonomy are projected to increase significantly. Hence, the detachment indicated in the paragraph above can be expected to also be a result of this shift towards autonomous robots. Moreover, as robots in this scenario are granted much higher autonomy, they might also be perceived negatively, as studies discussed in the literature review show. As robots remain very capable tools, albeit with higher autonomy than in the first scenario, trust will be affected both by the invisibility of the interaction and by the robot’s understandability. In conclusion, dispositional trust will be low; however, due to humans’ limited exposure to robots, this is projected to matter less than in the other scenarios. Furthermore, situational and interactional trust will be hard to calibrate as humans cannot collect continuous

experience working with robots. In general, trust is projected to depend more on cognitive aspects such as performance than on affective aspects.

3.5.3.3.3 Multimodal Communication

Multimodal communication can be assumed to facilitate the understandability of the robot’s decision making, which has to be accessible to the human worker. As instruction and supervision will happen at higher, more abstract levels, H2R interaction modalities have to provide sufficient flexibility. Thus, multimodal communication in this scenario can be expected to be needed in both directions. Control over goals will reside with the human only to correct robotic behavior, as opposed to the continuous coordination of scenario one. R2H communication will thus only serve as a feedback channel.

3.5.3.4 Common Denominator

As becomes evident from the scenario transfer to the three aspects of HRI under investigation herein, in the long run, socio-economic development and HRI are intertwined. Similarly evident is the importance of skilling, be it reskilling or upskilling. Robotics presents a number of novel ways for humans to interact with technology, and its embodiment offers a vast design space. To harness the benefits that certainly exist, people affected by robotics have to be educated both on the possibilities this technology offers and on what differentiates them from the technology, so that they can understand how interaction between humans and robots can work and how it can be beneficial. In automation research, investigations revolve around keeping the human in the control loop. The common denominator of this scenario transfer is that, for robots to be teamed up with humans, the affected humans will have to be kept in the design loop.

4 Normative Outlook

All three scenarios draw realistic futures which are plausible and internally consistent. However, during their development and the formulation of their respective implications for HRI, the observations made are embedded with values. Thus, this section aims to provide a normative outlook, discuss the key dilemmas observed, and pose open questions. Focusing initially on ethical questions, several of these are already perceivable based on current operational design approaches. Investigating the impact of robots on the meaningfulness of human work, Smids et al. [291] argue that it will depend on the design of robots whether human work becomes more or less meaningful. However, one of the stated dimensions of meaningful work, maintaining good social relationships at work,

necessitates profound ethical considerations. On one hand, it is argued that robots should remain tools and subservient to humans, whereas on the other hand, the argument is made that the relationship between humans and robots should be at the center of our understanding of robots, rather than viewing robots only in terms of what they can be tasked with. In the former line of thought, robots would be designed to make human work more meaningful in all aspects other than trying to become good colleagues. In the latter line of thought, robots would, among other aspects, be designed to be able to be good colleagues by emulating human behavior, which in turn enables humans to interact with robots as with good human colleagues. Therefore, one question to be answered is: is it necessary for functional human-robot teams that robots come to be seen as colleagues equal to humans? A similar question arises when it comes to trust: is it necessary for functional human-robot teams that we trust robots the same way we trust humans? Research, also included herein in the literature review on trust, indicates that human trust will transition from a merely cognition-based concept to one that also includes affect as robots become human teammates. Thus, two ways of designing robots are suggested for consideration [222]: one in which robots are modelled on a human blueprint, and one in which robots follow an independent development path, possibly animalistic or purely functional. The relevance of these questions arises from the ensuing open question about the ramifications of such interpersonal relationships with robots for human-human relationships: are we eroding human-human relationships by providing artificial partners that can be modelled according to individual desires? In addition, and regardless of the response given to the previous question, another dilemma arises from designing robots to allow for meaningful work.
Robots, as intrinsically non-human agents, will likely rely on extensive sensorics and data to accommodate a human in a team [291]. However, these characteristics are described as also allowing the human to be monitored. Consequently, taking up the argument from above, humans, while being liberated to interact naturally with robots, will be working in an environment which might be primarily designed for the robot, not the human, in turn subjecting the human. One technological development where this is already perceivable is cobot technology. Cobots in their original design are passive, non-actuated devices that guide human movement so as to increase the human’s precision, support faster operation, and improve ergonomics [81, 82]. They are interacted with through physical, haptic interfaces: the human moves a part while the cobot steers the precise trajectory. The latest generation of cobots, in contrast, are actuated devices which can work autonomously in the sense that they do not

need human actuation and guidance [12]. They can be hand-guided as well but, as posited by Villani et al. [12], mainly for programming purposes. Thus, safety, a concern not relevant for the original version of cobots, is a main concern for the latest version. As indicated in the scoping literature review, collaborative use-cases, in which humans and cobots manipulate objects jointly, are rare [65]. Cobots are inherently limited in force so as not to hurt the human co-worker when both actively operate in close proximity [12]. This, however, inherently prevents the precise guidance of the human’s naturally imprecise movement. The obstacles of safety and joint action are hence worked around by equipping the systems with recognition algorithms, sensorics, and natural human communication interfaces. The example of cobotic systems exemplifies two dilemmas. Firstly, cooperation is advocated as a viable option to combine human cognitive abilities with robots’ physical abilities without continuously striving for the purported holy grail, collaboration. As is shown, cooperation also enables human-robot teamwork, but, crucially, with less physical interaction, simplifying safety aspects. Nonetheless, much coordination will be needed in true human-robot cooperation. Even so, in pursuing the seamless leveraging of robotic as well as human skills, the extent of sensing required to enable CoCo is already perceivable [65]. Thus, it is cautioned to consider the privacy implications of rigorously advancing with ever more sensorics in order to operate ever more complex cobot systems in pursuit of CoCo. Secondly, besides the described dilemmas around the operational design of HRI, another one arises when looking at HRI as part of its socio-economic environment. Companies’ organizations play a crucial role in the implementation of HRI.
Continuous learning, up- and re-skilling of competences, decentralized decision making, and participative organization are suggested for consideration as the degree of automation in companies increases [292]. Similarly, examples from the past across industries have shown that success in the implementation of new technology lies not in designing around human features but in integrating them, making them a fundamental aspect of holistic system design [293]. This is very much in line with the results from the scenario transfer, which exhibit that skilling is a basic need to enable technological innovation and ensure adoption. With adequate skills, the workforce becomes a source of innovation itself if given the freedom to participate in decision making. Thus, it is cautioned not to become obsessed with the operational goals promised by new technology, but always to reflect on and observe the requirements for bringing stakeholders on board, to which end knowledge sharing and equipping workers with knowledge is a highly promising approach. If such steps are not taken early enough, technological developments are on track to substantially disrupt social

and economic structures. The results of initial research on robotics can be observed in the almost fully automated production lines of the automotive sector. Thus, the current precedent for robotics is set to automation. Hence, if organizational structures and attitudes towards up- and re-skilling are not revised, it is projected to become unnecessarily hard to break with this precedent and move towards true CoCo. This author perceives an inclination for the incremental advances research produces on its way towards CoCo to be used in industry under the dogma of the currently prevailing precedent of automation rather than for true CoCo. Thus, a fundamental re-orientation is suggested for industry. The second scenario provides the description of such a paradigm shift. Additionally, external shocks like the current COVID-19 pandemic and accelerating climate action across industries are perceived to increase the urgency of such a paradigm shift. Lastly, again arising from the currently prevalent design approach in HRI, two crucial and highly fundamental questions are posited. As CoCo between humans and robots is hailed as providing a platform to combine flexibility with efficiency (e.g., [12, 13]), research designs suggest embedding the robot with human features so as to allow the human to seamlessly interact with the robot in a team (e.g., [201, 278]). This includes behavior, bodily appearance, and interaction modalities designed to emulate human aspects. The ultimate goal of doing so is desirable: by embedding the robot with human features, humans do not have to adapt to the technology, preventing the robot from subjecting the human to unnatural interaction styles. This becomes even more intriguing as it is observed that SciFi, for example, is likely to influence public perception towards favoring robots which feature anthropomorphized bodily, behavioral, and cognitive abilities [294].
This holds especially as robots enter more social settings like human-robot teams, the umbrella term for CoCo. The question therefore arises: can we say for sure at which degree of assimilating human features the robot might become able to replicate human abilities? Even more fundamentally, another question seems appropriate. It concerns human abilities and humans’ belief in their uniqueness. Uniquely human abilities include social interaction (conversation, empathy, and collaboration), creativity (curiosity, lateral thinking, and innovativeness), and cognition (emotions, gut feeling and intuition, experience, and the body-mind connection) [6]. The findings herein exhibit that current research on robotics is geared towards enabling conversation, empathy, collaboration, and emotions, all on a level similar to human-human interactions. The goal behind this is, as described, to allow for meaningful human work, natural interaction which does not subject the human to technology, and efficient teamwork. While these abilities

might now be uniquely human, it is cautioned against believing that research, given enough time, will not at some point be able to replicate them. Thus, another question regarding current approaches to human-robot teaming is: are we sufficiently knowledgeable about prospective opportunities and risks as we move closer to being able to replicate four out of ten uniquely human abilities? It is obvious that these questions tie back to the original thoughts on the human HRI design engineer as both homo faber and homo sapiens. They underscore the importance of design engineers considering their being as both homo faber and homo sapiens in order to find ethically conscious answers to the posed questions and dilemmas.

5 Limitations

This dissertation is constrained by several limitations, some of which are already addressed in the specific chapters and sections. However, there are also some overarching limitations which have to be addressed. First of all, this research is not free of values. Throughout this dissertation, this author makes judgements on importance, relevance, and in- and exclusion. These judgements are consistently influenced by the values of this author. Thus, while trying to approach all decisions objectively, not in all cases could this author ensure through methodological hedging that values did not influence the research. Secondly, it is acknowledged that, given the wide scope of the scenario analysis, not all mechanisms can be illustrated to their full extent. As with many other considerations, a balance between efficiency and effectiveness had to be struck here as well. No focus, for example, is laid on organizational developments other than trends related to the workforce. Thus, to understand the mechanisms within each scenario, further levels of investigation into organizational and technology-related developments and projections are suggested. Nonetheless, the scenario analysis as it stands serves its purpose of shedding light on high-level trends and their relation to HRI. In itself, this dissertation provided a steep learning curve for this author, enabling differently scoped scenario analyses in the future. Thirdly, in several instances throughout this thesis a balance between efficiency and comprehensiveness had to be struck, and methodological rigor is loosened several times. As the time constraints of master thesis projects do not allow for a thorough investigation of all aspects, closer investigations are motivated in order to ensure increased objectivity and higher levels of comprehensiveness.

Lastly, the elaborations made are also bound by cultural context. Concepts such as trust are explicitly mentioned to be influenced by cultural background, but underlying assumptions about socio-economic mechanisms are bound by culture as well. Thus, four aspects related to cultural background constrain this dissertation. For one, the data used to support the elaborations made during the trend analysis are mostly for the OECD. For another, this author’s German origin and existing yet limited exposure to Asian and US American culture also constrain the cultural generalizability of this dissertation. Moreover, the included SciFi works are mainly of European or North American background. As a last constraint, a substantial part of the included research originates in the USA; those researchers, too, conduct their work with a certain degree of cultural bias. In summary, generalizations of this dissertation should be limited to the developed, western part of the world.

6 Conclusion

This dissertation conducts a scoping literature review. Through it, the fundamental design considerations necessary to achieve CoCo are described. Communication, coordination, dynamic autonomy, and the interaction setting are described as interaction design parameters. Dynamic autonomy is found to be crucial to enable the robot’s adaptation to the human counterpart in interaction. Multimodal communication is found to be particularly promising to allow for the information exchange necessary during CoCo. Thus, coordination, as the third component of interaction, needs to be designed accordingly in order to allow for flexibility and robustness of the system: flexibility in that it allows for the adaptation of responsibilities, and robustness in that it ensures that all necessary tasks are accomplished by one of the agents. Safety is described as the only technical design aspect, whereas anthropomorphism and trust are considered important psychosocial acceptance design considerations. The multidimensionality of the evaluation metrics for HRI makes its interdisciplinary nature crucially evident. Subsequently, a scenario analysis with a socio-economic scope is conducted. This serves to widen the understanding of the embedding of HRI as a socio-technical system in socio-economic environments, i.e., companies. Initially, driving forces and trends relevant to the context herein are identified through desk-based literature search and brainstorming based on attentive media consumption. Future scenarios are then developed with the help of investigating themes in SciFi as well as by conducting interviews with experts from industry and academia. The themes of those scenarios are developed along a continuum between social constructivism and technological determinism. Both represent extremes in the understanding

of technological development embedded in socio-economic structures. Three scenario narratives are subsequently developed. The first lies more on the social constructivism side of the continuum: the workforce is described as increasing its influence on technological adoption. However, as a certain degree of economic competitiveness cannot be disregarded, a lack of investment in up- and re-skilling impedes the desired upward social mobility. In the second scenario, which balances social constructivism and technological determinism, the workforce and companies agree on innovation-led growth, which lets the workforce participate through wide-scale up- and re-skilling efforts. The third scenario has a stronger focus on forces related to technological determinism, i.e., economic competition. While companies try to introduce innovation to remain competitive, the skills of their workforce prove a bottleneck. Thus, only the necessary up- and re-skilling is conducted. The outcome of this scenario can therefore be regarded as similar to the first one, differing, however, in that the workforce is not granted influence on the design of socio-technical systems. Lastly, the design aspects trust, multimodal communication, and the human role in HRI are used to build an understanding of the relation between socio-economic developments and future scenarios on the one hand and specific design aspects of HRI on the other. The scenario narratives are transferred to the findings of the three literature reviews. While the first and third scenarios feature similar trust development, with a focus on cognitive trust and low trust levels in general as robots are perceived as agents of the company, the second scenario features a greater focus on affective trust as robots transition into teammates. The perception shifts to robots as independent agents which support the human in leveraging their skillset. The first and third scenarios also do not require trust calibration as accurate as that of the second scenario.
The scenario of workforce emancipation places the workforce in controller roles, as workers push hard not to be left outside the control loop. However, closer interaction is stymied by insufficient training. In the third scenario, humans are left outside the control loop due to a lack of resistance; thus, higher levels of robot autonomy can be achieved while humans are left with supervisory and instructor roles. The scenario of consumer emancipation and innovation-led growth sees the human workforce in the roles of collaborator or cooperator to robots. This keeps them inside the control as well as the design loop and retains their operational importance. As for multimodal communication, the first scenario is projected to require multimodality for the interaction from human to robot, whereas little communication will be necessary the other way around. The second scenario will feature extensive bi-directional multimodal communication to enable teamwork. In the third scenario, increasingly remote interaction will necessitate R2H communication to enable transparency of the robot’s

environment and facilitate situational awareness. H2R communication will be focused on enabling flexibility in the interaction. In conclusion, it is found that all future scenarios have distinct but also partly similar implications for HRI. However, it is also shown that even though research on CoCo is producing results, CoCo is not a guaranteed future outcome. More profoundly, a number of open ethical and philosophical questions arise from the scenario transfer to HRI. What happens if progress on CoCo is too slow to enable a paradigm shift away from automation through robotics? How much are we willing to subject ourselves to digital technology in order to enable natural interaction with robots? How close do we want to let robots come to replicating human abilities? And are we sufficiently knowledgeable about prospective opportunities and risks as we move closer to being able to replicate a considerable number of uniquely human abilities? With these questions, this dissertation aims to inform the academic as well as the corporate HRI community about the wider considerations necessary for a truly human-centric future of HRI. Education is posited as a crucial stepping stone to enable such a future. Future research is motivated to consider the posed questions. It is not argued that research on HRI should not progress without answers to these questions. Rather, future research is motivated to consider its wider implications for those questions. Furthermore, future engineering research on HRI is motivated to state the implicit assumptions it makes regarding the above-described questions. This serves to shed light on tacit ethical and philosophical assumptions while informing the wider research community about the importance of philosophical considerations in mundane engineering research. Moreover, the mechanisms described in the scenarios are to be investigated more closely.
In particular, the macroeconomic functioning of up- and re-skilling is to be investigated so as to substantiate its importance in the currently unfolding structural disruptions associated with technological developments. In connection with this, future research is motivated to investigate ways to educate people about robots and their capabilities. As technological advances continue to unfold, continuous education is required; thus, new and better ways of education should be striven for. Lastly, as the scenario analysis conducted herein takes a long-term perspective, analyses with less extensive future extrapolations are motivated to examine and consider organizational aspects more closely. This is based on this author’s repeated finding of indications in the examined literature of the organization’s responsibility to ensure the acceptance of technologies.

7 Bibliography

[1] H. Kagermann, W. Wahlster, and J. Helbig, "Recommendations for implementing the strategic initiative Industrie 4.0: Final report of the Industrie 4.0 Working Group," Acatech, München, pp. 19-26, 2013.
[2] B. Mrugalska and M. K. Wyrwicka, "Towards Lean Production in Industry 4.0," in 7th International Conference on Engineering, Project, and Production Management, Bialystok, Poland, K. Halicka and L. Nazarko, Eds., 2017, vol. 182, pp. 466-473, doi: 10.1016/j.proeng.2017.03.135.
[3] M. Bortolini, E. Ferrari, M. Gamberi, F. Pilati, and M. Faccio, "Assembly system design in the Industry 4.0 era: a general framework," IFAC-PapersOnLine, vol. 50, no. 1, pp. 5700-5705, 2017, doi: 10.1016/j.ifacol.2017.08.1121.
[4] L. D. Evjemo, T. Gjerstad, E. I. Grøtli, and G. Sziebig, "Trends in Smart Manufacturing: Role of Humans and Industrial Robots in Smart Factories," Current Robotics Reports, vol. 1, no. 2, pp. 35-41, 2020, doi: 10.1007/s43154-020-00006-5.
[5] A. Pereira and F. Romero, "A review of the meanings and the implications of the Industry 4.0 concept," in MESIC 2017: Manufacturing Engineering Society International Conference, Vigo (Pontevedra), J. Salguero and E. Ares, Eds., 2017, vol. 13, pp. 1206-1214, doi: 10.1016/j.promfg.2017.09.032.
[6] I. Fries, "Uniquely Human Abilities in the Digital Age - A qualitative exploration for a Successful Transformation of the Workplace during the Fourth Industrial Revolution," Master of Science in International Business & Politics Master Thesis, Department of Digitalization and Institute for Manufacturing, Copenhagen Business School and University of Cambridge, Copenhagen and Cambridge, 2019.
[7] G. Culot, G. Nassimbeni, G. Orzes, and M. Sartor, "The future of manufacturing: a Delphi-based scenario analysis on Industry 4.0," Technological Forecasting and Social Change, vol. 157, p. 120092, 2020, doi: 10.1016/j.techfore.2020.120092.
[8] I. Moon, G. M. Lee, J. Park, D. Kiritsis, and G. Von Cieminski, "APMS: IFIP International Conference on Advances in Production Management Systems - Advances in Production Management Systems. Smart Manufacturing for Industry 4.0 - Part I," in IFIP WG 5.7 International Conference - APMS 2018, Seoul, South Korea, 2018, vol. 536: Springer Nature, doi: 10.1007/978-3-319-99707-0.
[9] F. Ameri, K. E. Stecke, G. Von Cieminski, and D. Kiritsis, "APMS: IFIP International Conference on Advances in Production Management Systems - Advances in Production Management Systems. Production Management for the Factory of the Future - Part I," in IFIP WG 5.7 International Conference - APMS 2019, Austin TX, USA, 2019, vol. 566: Springer Nature, doi: 10.1007/978-3-030-30000-5.
[10] M. A. Goodrich and A. C. Schultz, "Human-Robot Interaction: A Survey," Foundations and Trends® in Human-Computer Interaction, vol. 1, no. 3, pp. 203-275, 2007, doi: 10.1561/1100000005.
[11] A. Bauer, D. Wollherr, and M. Buss, "Human-robot Collaboration: A survey," International Journal of Humanoid Robotics, vol. 5, no. 1, pp. 47-66, 2008, doi: 10.1142/S0219843608001303.
[12] V. Villani, F. Pini, F. Leali, and C. Secchi, "Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and applications," Mechatronics, vol. 55, pp. 248-266, 2018, doi: 10.1016/j.mechatronics.2018.02.009.
[13] L. Wang et al., "Symbiotic human-robot collaborative assembly," CIRP Annals, vol. 68, no. 2, pp. 701-726, 2019, doi: 10.1016/j.cirp.2019.05.002.
[14] S. R. Fletcher, T. L. Johnson, and J. Larreina, "Putting people and robots together in manufacturing: are we ready?," in Robotics and Well-Being, vol. 95, A. F. M., S. S. J., S. V. G., T. M., and K. E. Eds. Cham, Switzerland: Springer, Cham, 2019, pp. 135-147.

[15] R. R. Hoffman, P. J. Feltovich, K. M. Ford, and D. D. Woods, "A rose by any other name... would probably be given an acronym [cognitive systems engineering]," IEEE Intelligent Systems, vol. 17, no. 4, pp. 72-80, 2002, doi: 10.1109/MIS.2002.1024755.
[16] P. M. Fitts et al., "Human engineering for an effective air-navigation and traffic-control system," National Research Council, Columbus OH, USA, 1951.
[17] L. Probst, L. Frideres, B. Pedersen, and C. Caputi, "Service innovation for smart industry: human–robot collaboration," European Commission, Luxembourg, 2015.
[18] U. E. Ogenyi, J. Liu, C. Yang, Z. Ju, and H. Liu, "Physical human-robot collaboration: Robotic systems, learning methods, collaborative strategies, sensors, and actuators," IEEE Transactions on Cybernetics, pp. 1-14, 2019, doi: 10.1109/TCYB.2019.2947532.
[19] D. Rücker, R. Hornfeck, and K. Paetzold, "International Conference on Applied Human Factors and Ergonomics - AHFE 2018: Advances in Human Factors in Robots and Unmanned Systems," in International Conference on Applied Human Factors and Ergonomics, Universal Studios, Orlando FL, USA, J. Chen, Ed., 2018, vol. 784: Springer, Cham, pp. 127-135, doi: 10.1007/978-3-319-94346-6_12.
[20] L. Wang, S. Liu, H. Liu, and X. V. Wang, "Overview of human-robot collaboration in manufacturing," in Proceedings of 5th International Conference on the Industry 4.0 Model for Advanced Manufacturing, Belgrade, Serbia, L. Wang, V. D. Majstorovic, D. Mourtzis, E. Carpanzano, G. Moroni, and L. M. Galantucci, Eds., 2020: Springer, Cham, pp. 15-58, doi: 10.1007/978-3-030-46212-3_2.
[21] S. R. Fletcher and P. Webb, "Industrial robot ethics: The challenges of closer human collaboration in future manufacturing systems," in A World with Robots - International Conference on Robot Ethics: ICRE 2015, Lisbon, Portugal, M. Aldinhas-Ferreira, J. S. Sequeira, M. E. Tokhi, E. Kadar, and G. Virk, Eds., 2017, vol. 84: Springer, Cham, in Intelligent Systems, Control and Automation: Science and Engineering, pp. 159-169, doi: 10.1007/978-3-319-46667-5_12.
[22] G. Paré, M.-C. Trudel, M. Jaana, and S. Kitsiou, "Synthesizing information systems knowledge: A typology of literature reviews," Information & Management, vol. 52, no. 2, pp. 183-199, 2015, doi: 10.1016/j.im.2014.08.008.
[23] H. M. Cooper, "Organizing knowledge syntheses: A taxonomy of literature reviews," Knowledge in Society, vol. 1, pp. 104-126, 1988, doi: 10.1007/BF03177550.
[24] Y. Levy and T. J. Ellis, "A Systems Approach to Conduct an Effective Literature Review in Support of Information Systems Research," Informing Science, vol. 9, pp. 181-212, 2006, doi: 10.28945/479.
[25] G. A. Bekey, Autonomous Robots: From Biological Inspiration to Implementation and Control (Intelligent Robotics and Autonomous Agents). MIT Press, 2005.
[26] K. A. Demir, G. Döven, and B. Sezen, "Industry 5.0 and human-robot co-working," Procedia Computer Science, vol. 158, pp. 688-695, 2019, doi: 10.1016/j.procs.2019.09.104.
[27] E. Cheon and N. M. Su, "Integrating roboticist values into a Value Sensitive Design framework for humanoid robots," in 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 2016: IEEE Press, pp. 375-382, doi: 10.1109/HRI.2016.7451775.
[28] N. M. Richards and W. D. Smart, "How should the law think about robots?," in Robot Law. Cheltenham, UK: Edward Elgar Publishing, 2016, pp. 3-22.
[29] H. J. Wilson, "What is a robot, anyway?," Harvard Business Review, vol. 15, pp. 2-5, 2015. [Online]. Available: https://hbr.org/2015/04/what-is-a-robot-anyway?autocomplete=true.
[30] ISO 8373: Robots and robotic devices–Vocabulary, ISO, Geneva, Switzerland, 2012.

[31] H. A. Yanco and J. Drury, "Classifying human-robot interaction: an updated taxonomy," in 2004 IEEE International Conference on Systems, Man and Cybernetics, The Hague, Netherlands, 2004: IEEE Press, doi: 10.1109/icsmc.2004.1400763. [32] I. Aaltonen, T. Salmi, and I. Marstio, "Refining levels of collaboration to support the design and evaluation of human-robot interaction in the manufacturing industry," in 51st CIRP Conference on Manufacturing Systems, Stockholm, Sweden, L. Wang, Ed., 2018, vol. 72, pp. 93-98, doi: 10.1016/j.procir.2018.03.214. [33] A. A. Malik and A. Bilberg, "Developing a reference model for human–robot interaction," International Journal on Interactive Design and Manufacturing, vol. 13, no. 4, pp. 1541-1547, 2019, doi: 10.1007/s12008-019-00591-6. [34] F. Vicentini, "Terminology in safety of collaborative robotics," Robotics and Computer-Integrated Manufacturing, vol. 63, p. 101921, 2020. [35] B. Mutlu, A. Terrell, and C.-M. Huang, "Coordination mechanisms in human-robot collaboration," in HRI '13: Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction - Proceedings of the Workshop on Collaborative Manipulation, Tokyo, Japan, 2013: IEEE Press, pp. 1-6. [36] A. De Santis, B. Siciliano, A. De Luca, and A. Bicchi, "An atlas of physical human–robot interaction," Mechanism and Machine Theory, vol. 43, no. 3, pp. 253-270, 2008, doi: 10.1016/j.mechmachtheory.2007.03.003. [37] A. Cherubini, R. Passama, A. Crosnier, A. Lasnier, and P. Fraisse, "Collaborative manufacturing with physical human–robot interaction," Robotics and Computer-Integrated Manufacturing, vol. 40, pp. 1-13, 2016, doi: 10.1016/j.rcim.2015.12.007. [38] A. Pervez and J. Ryu, "Safe physical human robot interaction-past, present and future," Journal of Mechanical Science and Technology, vol. 22, no. 3, p. 469, 2008, doi: 10.1007/s12206-007-1109-3. [39] I. Maurtua, A. Ibarguren, J. Kildal, L. Susperregi, and B. 
Sierra, "Human-robot collaboration in industrial applications: Safety, interaction and trust," International Journal of Advanced Robotic Systems, vol. 14, no. 4, pp. 1-10, 2017, doi: 10.1177/1729881417716010. [40] J. M. Bradshaw, R. R. Hoffman, D. D. Woods, and M. Johnson, "The Seven Deadly Myths of "Autonomous Systems"," IEEE Intelligent Systems, vol. 28, no. 3, pp. 54-61, 2013, doi: 10.1109/MIS.2013.70. [41] L. M. Ma, T. Fong, M. J. Micire, Y. K. Kim, and K. Feigh, "Human-robot teaming: Concepts and components for design," in Field and service robotics, Zurich, Switzerland, M. Hutter and R. Siegwart, Eds., 2018, vol. 5: Springer, Cham, pp. 649-663, doi: 10.1007/978-3-319-67361-5_42. [42] J. M. Bradshaw et al., "From tools to teammates: Joint activity in human-agent-robot teams," in International Conference on Human Centered Design - HCD 2009: Human Centered Design - First International Conference, San Diego CA, USA, M. Kurosu, Ed., 2009, vol. 5619: Springer, Berlin, Heidelberg, in Lecture Notes in Computer Science, pp. 935-944, doi: 10.1007/978-3-642-02806-9_107. [43] G. Hoffman and C. Breazeal, "Collaboration in human-robot teams," in AIAA 1st Intelligent Systems Technical Conference, Chicago IL, USA, 2004: American Institute of Aeronautics and Astronautics, pp. 1-18, doi: 10.2514/6.2004-6434. [44] A. Kolbeinsson, E. Lagerstedt, and J. Lindblom, "Classification of collaboration levels for human-robot cooperation in manufacturing," in Advances in Manufacturing Technology XXXII: Proceedings of the 16th International Conference on Manufacturing Research, Skövde, Sweden, P. Thorvald and K. Case, Eds., 2018, pp. 151-156, doi: 10.3233/978-1-61499-902-7-151.

[45] T. B. Sheridan and W. L. Verplank, "Human and computer control of undersea teleoperators," Massachusetts Institute of Technology Cambridge Man-Machine Systems Lab, Cambridge MA, USA, 1978. [46] M. Vagia, A. A. Transeth, and S. A. Fjerdingen, "A literature review on the levels of automation during the years. What are the different taxonomies that have been proposed?," Applied Ergonomics, vol. 53, Part A, pp. 190-202, 2016, doi: 10.1016/j.apergo.2015.09.013. [47] J. M. Beer, A. D. Fisk, and W. A. Rogers, "Toward a framework for levels of robot autonomy in human-robot interaction," Journal of Human-Robot Interaction, vol. 3, no. 2, pp. 74–99, 2014, doi: 10.5898/JHRI.3.2.Beer. [48] P. A. Hancock, "Imposing limits on autonomous systems," Ergonomics, vol. 60, no. 2, pp. 284-291, 2017, doi: 10.1080/00140139.2016.1190035. [49] E. J. de Visser, R. Pak, and T. H. Shaw, "From ‘automation’ to ‘autonomy’: the importance of trust repair in human–machine interaction," Ergonomics, vol. 61, no. 10, pp. 1409-1427, 2018, doi: 10.1080/00140139.2018.1457725. [50] D. Kortenkamp, D. Keirn-Schreckenghost, and R. P. Bonasso, "Adjustable control autonomy for manned space flight," in 2000 IEEE Aerospace Conference Proceedings, Big Sky MT, USA, 2000: IEEE Press, pp. 629-640, doi: 10.1109/AERO.2000.879330. [51] S. Music and S. Hirche, "Control sharing in human-robot team interaction," Annual Reviews in Control, vol. 44, pp. 342-354, 2017, doi: 10.1016/j.arcontrol.2017.09.017. [52] D. Riedelbauch and D. Henrich, "Exploiting a Human-Aware World Model for Dynamic Task Allocation in Flexible Human-Robot Teams," in 2019 International Conference on Robotics and Automation (ICRA), Montreal QC, Canada, 2019: IEEE Press, pp. 6511-6517, doi: 10.1109/ICRA.2019.8794288. [53] C.-M. Huang, M. Cakmak, and B. Mutlu, "Adaptive Coordination Strategies for Human-Robot Handovers," in Robotics: Science and Systems XI, Rome, Italy, L. E. Kavraki, D. Hsu, and J. Buchli, Eds., 2015. [54] S. Nikolaidis, A. 
Kuznetsov, D. Hsu, and S. Srinivasa, "Formalizing human-robot mutual adaptation: A bounded memory model," in HRI '16: The Eleventh ACM/IEEE International Conference on Human Robot Interaction, Christchurch, New Zealand, 2016: IEEE Press, pp. 75-82, doi: 10.1109/HRI.2016.7451736. [55] J. Y. Chen and M. J. Barnes, "Human–agent teaming for multirobot control: A review of human factors issues," IEEE Transactions on Human-Machine Systems, vol. 44, no. 1, pp. 13-29, 2014, doi: 10.1109/THMS.2013.2293535. [56] S. Jiang and R. C. Arkin, "Mixed-Initiative Human-Robot Interaction: Definition, Taxonomy, and Survey," in 2015 IEEE International Conference on Systems, Man, and Cybernetics, Kowloon, SAR Hong-Kong, 2015: IEEE Press, pp. 954-961, doi: 10.1109/SMC.2015.174. [57] D. J. Bruemmer, D. D. Dudenhoeffer, and J. L. Marble, "Dynamic-Autonomy for Urban Search and Rescue," in AAAI Mobile Robot Competition, Edmonton AB, Canada, 2002, pp. 33-37. [58] C. D. Wickens, J. G. Hollands, S. Banbury, and R. Parasuraman, Engineering psychology and human performance, 4th ed. New York NY, USA: Psychology Press, 2015, pp. 377-404. [59] J. Sauer, C.-S. Kao, and D. Wastell, "A comparison of adaptive and adaptable automation under different levels of environmental stress," Ergonomics, vol. 55, no. 8, pp. 840-853, 2012, doi: 10.1080/00140139.2012.676673. [60] E. de Visser and R. Parasuraman, "Adaptive Aiding of Human-Robot Teaming: Effects of Imperfect Automation on Performance, Trust, and Workload," Journal of Cognitive Engineering and Decision Making, vol. 5, no. 2, pp. 209-231, 2011, doi: 10.1177/1555343411410160.

[61] J. W. Crandall and M. A. Goodrich, "Experiments in adjustable autonomy," in 2001 IEEE International Conference on Systems, Man and Cybernetics. e-Systems and e-Man for Cybernetics in Cyberspace, Tucson AZ, USA, 2001: IEEE Press, pp. 1624-1629, doi: 10.1109/ICSMC.2001.973517. [62] G. Michalos, S. Makris, J. Spiliotopoulos, I. Misios, P. Tsarouchi, and G. Chryssolouris, "ROBO-PARTNER: Seamless Human-Robot Cooperation for Intelligent, Flexible and Safe Operations in the Assembly Factories of the Future," in 5th CATS 2014 - CIRP Conference on Assembly Technologies and Systems, Dresden, Germany, M. Putz, Ed., 2014, vol. 23, pp. 71-76, doi: 10.1016/j.procir.2014.10.079. [63] L. Peternel, N. Tsagarakis, D. Caldwell, and A. Ajoudani, "Robot adaptation to human physical fatigue in human-robot co-manipulation," Autonomous Robots, vol. 42, no. 5, pp. 1011-1021, 2018, doi: 10.1007/s10514-017-9678-1. [64] A. Thobbi, Y. Gu, and W. Sheng, "Using human motion estimation for human-robot cooperative manipulation," in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Francisco CA, USA, 2011: IEEE Press, pp. 2873-2878, doi: 10.1109/IROS.2011.6094904. [65] A. Ajoudani, A. M. Zanchettin, S. Ivaldi, A. Albu-Schäffer, K. Kosuge, and O. Khatib, "Progress and prospects of the human–robot collaboration," Autonomous Robots, vol. 42, no. 5, pp. 957-975, 2018, doi: 10.1007/s10514-017-9677-2. [66] P. Gustavsson, M. Holm, A. Syberfeldt, and L. Wang, "Human-robot collaboration - Towards new metrics for selection of communication technologies," in 51st CIRP Conference on Manufacturing Systems, Stockholm, Sweden, L. Wang, Ed., 2018, vol. 72, pp. 123-128, doi: 10.1016/j.procir.2018.03.156. [67] D. Perzanowski, A. C. Schultz, W. Adams, E. Marsh, and M. Bugajska, "Building a multimodal human-robot interface," IEEE Intelligent Systems, vol. 16, no. 1, pp. 16-21, 2001, doi: 10.1109/MIS.2001.1183338. [68] N. 
Jordan, "Allocation of functions between man and machines in automated systems," Journal of Applied Psychology, vol. 47, no. 3, pp. 161-165, 1963, doi: 10.1037/h0043729. [69] P. Hancock and S. Scallen, "The future of function allocation," Ergonomics in Design, vol. 4, no. 4, pp. 24-29, 1996. [70] T. B. Sheridan, "Function allocation: algorithm, alchemy or apostasy?," International Journal of Human-Computer Studies, vol. 52, no. 2, pp. 203-216, 2000, doi: 10.1006/ijhc.1999.0285. [71] J. C. de Winter and D. Dodou, "Why the Fitts list has persisted throughout the history of function allocation," Cognition, Technology & Work, vol. 16, no. 1, pp. 1-11, 2014, doi: 10.1007/s10111-011-0188-1. [72] M. M. Cummings, "Man versus machine or man + machine?," IEEE Intelligent Systems, vol. 29, no. 5, pp. 62-69, 2014, doi: 10.1109/MIS.2014.87. [73] F. Ranz, V. Hummel, and W. Sihn, "Capability-based task allocation in human-robot collaboration," in CLF 2017: 7th Conference on Learning Factories, Darmstadt, Germany, J. Metternich and R. Glass, Eds., 2017, vol. 9, pp. 182-189, doi: 10.1016/j.promfg.2017.04.011. [74] N. Nikolakis, N. Kousi, G. Michalos, and S. Makris, "Dynamic scheduling of shared human-robot manufacturing operations," in 51st CIRP Conference on Manufacturing Systems, Stockholm, Sweden, L. Wang, Ed., 2018, vol. 72, pp. 9-14, doi: 10.1016/j.procir.2018.04.007. [75] M. S. Malvankar-Mehta and S. S. Mehta, "Optimal task allocation in multi-human multi-robot interaction," Optimization Letters, vol. 9, no. 8, pp. 1787-1803, 2015, doi: 10.1007/s11590-015-0890-7.

[76] S. M. Rahman and Y. Wang, "Mutual trust-based subtask allocation for human–robot collaboration in flexible lightweight assembly in manufacturing," Mechatronics, vol. 54, pp. 94-109, 2018, doi: 10.1016/j.mechatronics.2018.07.007. [77] R. Parasuraman, T. B. Sheridan, and C. D. Wickens, "Situation awareness, mental workload, and trust in automation: Viable, empirically supported cognitive engineering constructs," Journal of Cognitive Engineering and Decision Making, vol. 2, no. 2, pp. 140-160, 2008, doi: 10.1518/155534308X284417. [78] X. V. Wang, Z. Kemény, J. Váncza, and L. Wang, "Human–robot collaborative assembly in cyber-physical production: Classification framework and implementation," CIRP Annals, vol. 66, no. 1, pp. 5-8, 2017, doi: 10.1016/j.cirp.2017.04.101. [79] ISO 10218-1: Robots and robotic devices - Safety requirements for industrial robots - Part 1: Robots, ISO, Geneva, Switzerland, 2011. [80] ISO 10218-2: Robots and robotic devices - Safety requirements for industrial robots - Part 2: Robot systems and integration, ISO, Geneva, Switzerland, 2011. [81] J. E. Colgate, J. Edward, M. A. Peshkin, and W. Wannasuphoprasit, "Cobots: Robots for collaboration with human operators," in Proceedings of the 1996 ASME International Mechanical Engineering Congress and Exposition, Atlanta GA, USA, 1996, vol. 58, pp. 433-439. [82] M. Peshkin and J. E. Colgate, "Cobots," Industrial Robot, vol. 26, no. 5, pp. 335-341, 1999, doi: 10.1108/01439919910283722. [83] F. Vicentini, "Collaborative robotics: a survey," Journal of Mechanical Design, vol. 134, no. 4, p. 040802, 2020, doi: 10.1115/1.4046238. [84] F. D. Davis, R. P. Bagozzi, and P. R. Warshaw, "User acceptance of computer technology: a comparison of two theoretical models," Management Science, vol. 35, no. 8, pp. 982-1003, 1989, doi: 10.1287/mnsc.35.8.982. [85] J. P. Galan, M. Giraud, and L. 
Meyer-Waarden, "A Theoretical extension of the technology acceptance model to explain the adoption and the usage of new digital services," in 29ème Congrès International de I’Association Française de Marketing, La Rochelle, France, 2013. [86] V. Venkatesh and F. D. Davis, "A theoretical extension of the technology acceptance model: Four longitudinal field studies," Management Science, vol. 46, no. 2, pp. 186-204, 2000, doi: 10.1287/mnsc.46.2.186.11926. [87] V. Venkatesh, M. G. Morris, G. B. Davis, and F. D. Davis, "User acceptance of information technology: Toward a unified view," MIS Quarterly, vol. 27, no. 3, pp. 425-478, 2003, doi: 10.2307/30036540. [88] C. Bröhl, J. Nelles, C. Brandl, A. Mertens, and V. Nitsch, "Human–Robot Collaboration Acceptance Model: Development and Comparison for Germany, Japan, China and the USA," International Journal of Social Robotics, vol. 11, no. 5, pp. 709-726, 2019, doi: 10.1007/s12369-019-00593-0. [89] G. Charalambous, S. Fletcher, and P. Webb, "Development of a human factors roadmap for the successful implementation of industrial human-robot collaboration," in Advances in Ergonomics of Manufacturing: Managing the Enterprise of the Future - Proceedings of the AHFE 2016 International Conference on Human Aspects of Advanced Manufacturing, vol. 490, S. Trzcielinski and C. Schlick, Eds. (Advances in Intelligent Systems and Computing). Walt Disney World FL, USA: Springer, Cham, 2016, pp. 195-206. [90] A. Freedy, E. DeVisser, G. Weltman, and N. Coeyman, "Measurement of trust in human-robot collaboration," in 2007 International Symposium on Collaborative Technologies and Systems, Orlando FL, USA, 2007: IEEE Press, pp. 106-114, doi: 10.1109/CTS.2007.4621745.

[91] P. A. Hancock, D. R. Billings, K. E. Schaefer, J. Y. C. Chen, E. J. De Visser, and R. Parasuraman, "A meta-analysis of factors affecting trust in human-robot interaction," Human Factors, vol. 53, no. 5, pp. 517-527, 2011, doi: 10.1177/0018720811417254. [92] G. Charalambous, S. Fletcher, and P. Webb, "The development of a scale to evaluate trust in industrial human-robot collaboration," International Journal of Social Robotics, vol. 8, no. 2, pp. 193-209, 2016, doi: 10.1007/s12369-015-0333-8. [93] M. Mara and M. Appel, "Effects of lateral head tilt on user perceptions of humanoid and android robots," Computers in Human Behavior, vol. 44, pp. 326-334, 2015, doi: 10.1016/j.chb.2014.09.025. [94] M. Mori, "The uncanny valley," Energy, vol. 7, no. 4, pp. 33-35, 1970/2005. [95] S. Wang, S. O. Lilienfeld, and P. Rochat, "The uncanny valley: Existence and explanations," Review of General Psychology, vol. 19, no. 4, pp. 393-407, 2015, doi: 10.1037/gpr0000056. [96] S. Wang and P. Rochat, "Human perception of animacy in light of the uncanny valley phenomenon," Perception, vol. 46, no. 12, pp. 1386-1411, 2017, doi: 10.1177/0301006617722742. [97] S. Stadler, A. Weiss, N. Mirnig, and M. Tscheligi, "Anthropomorphism in the factory - a paradigm change?," in HRI '13: Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction, Tokyo, Japan, 2013: IEEE Press, pp. 231-232, doi: 10.1109/HRI.2013.6483586. [98] J. Goetz, S. Kiesler, and A. Powers, "Matching robot appearance and behavior to tasks to improve human-robot cooperation," in 2003 12th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Millbrae CA, USA, 2003: IEEE Press, pp. 55-60, doi: 10.1109/ROMAN.2003.1251796. [99] J. Złotowski, D. Proudfoot, K. Yogeeswaran, and C. Bartneck, "Anthropomorphism: opportunities and challenges in human–robot interaction," International Journal of Social Robotics, vol. 7, no. 3, pp. 347-360, 2015, doi: 10.1007/s12369-014-0267-6. 
[100] J. Fink, "Anthropomorphism and human likeness in the design of robots and human-robot interaction," in International Conference on Social Robotics - ICSR 2012: Social Robotics - 4th International Conference, Chengdu, China, S. S. Ge, O. Khatib, J.-J. Cabibihan, R. Simmons, and M.-A. Williams, Eds., 2012, vol. 7621: Springer, Cham, in Lecture Notes in Computer Science, pp. 199-208, doi: 10.1007/978-3-642-34103-8_20. [101] B. Busch et al., "Evaluation of an industrial robotic assistant in an ecological environment," in 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 2019: IEEE Press, pp. 1-8, doi: 10.1109/RO-MAN46459.2019.8956399. [102] A. Weiss, D. Wurhofer, M. Lankes, and M. Tscheligi, "Autonomous vs. tele-operated: How people perceive human-robot collaboration with HRP-2," in HRI '09: Proceedings of the 4th ACM/IEEE international conference on Human robot interaction, La Jolla CA, USA, 2009: IEEE Press, pp. 257-258, doi: 10.1145/1514095.1514164. [103] S. A. Elprama, C. I. Jewell, A. Jacobs, I. El Makrini, and B. Vanderborght, "Attitudes of factory workers towards industrial and collaborative robots," in HRI '17: Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 2017: Association for Computing Machinery, pp. 113-114, doi: 10.1145/3029798.3038309. [104] A. Kolbeinsson, E. Lagerstedt, and J. Lindblom, "Foundation for a classification of collaboration levels for human-robot cooperation in manufacturing," Production and Manufacturing Research, vol. 7, no. 1, pp. 448-471, 2019, doi: 10.1080/21693277.2019.1645628.

[105] G. Michalos, S. Makris, P. Tsarouchi, T. Guasch, D. Kontovrakis, and G. Chryssolouris, "Design considerations for safe human-robot collaborative workplaces," in CIRPe 2015 - Understanding the life cycle implications of manufacturing, Web Conference, J. Erkoyuncu, Ed., 2015, vol. 37, pp. 248-253, doi: 10.1016/j.procir.2015.08.014. [106] S. Thrun, "Toward a framework for human-robot interaction," Human-Computer Interaction, vol. 19, no. 1-2, pp. 9-24, 2004, doi: 10.1207/s15327051hci1901&2_2. [107] A. Sauppé and B. Mutlu, "The social impact of a robot co-worker in industrial settings," in CHI '15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Korea, 2015: Association for Computing Machinery, pp. 3613-3622, doi: 10.1145/2702123.2702181. [108] B. Elprama, I. El Makrini, and A. Jacobs, "Acceptance of collaborative robots by factory workers: a pilot study on the importance of social cues of anthropomorphic robots," in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York NY, USA, 2016: IEEE Press, pp. 919-919. [109] A. M. Zanchettin, L. Bascetta, and P. Rocco, "Acceptability of robotic manipulators in shared working environments through human-like redundancy resolution," Applied Ergonomics, vol. 44, no. 6, pp. 982-989, 2013, doi: 10.1016/j.apergo.2013.03.028. [110] M. Si and J. D. McDaniel, "Using facial expression and body language to express attitude for non-humanoid robot," in AAMAS '16: Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, Singapore, 2016: International Foundation for Autonomous Agents and Multiagent Systems, pp. 1457-1458. [111] V. Weistroffer, A. Paljic, L. Callebert, and P. 
Fuchs, "A methodology to assess the acceptability of human-robot collaboration using virtual reality," in VRST '13: Proceedings of the 19th ACM Symposium on Virtual Reality Software and Technology, Singapore, Singapore, 2013: Association for Computing Machinery, pp. 39-48, doi: 10.1145/2503713.2503726. [112] N. Savela, T. Turja, and A. Oksanen, "Social acceptance of robots in different occupational fields: A systematic literature review," International Journal of Social Robotics, vol. 10, no. 4, pp. 493-502, 2018, doi: 10.1007/s12369-017-0452-5. [113] C. F. DiSalvo, F. Gemperle, J. Forlizzi, and S. Kiesler, "All robots are not created equal: the design and perception of humanoid robot heads," in DIS '02: Proceedings of the 4th conference on Designing interactive systems: processes, practices, methods, and techniques, London, UK, 2002: Association for Computing Machinery, pp. 321-326, doi: 10.1145/778712.778756. [114] A. Steinfeld et al., "Common metrics for human-robot interaction," in HRI '06: Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction, Salt Lake City UT, USA, 2006: Association for Computing Machinery, pp. 33-40, doi: 10.1145/1121241.1121249. [115] J. A. Marvel, S. Bagchi, M. Zimmerman, and B. Antonishek, "Towards Effective Interface Designs for Collaborative HRI in Manufacturing: Metrics and Measures," ACM Trans. Hum.-Robot Interact., vol. 9, no. 4, p. Article 25, 2020, doi: 10.1145/3385009. [116] P. Dias, "The Engineer’s Identity Crisis: Homo Faber or Homo Sapiens?," in Philosophy and Engineering: Reflections on Practice, Principles and Process, vol. 15, M. D., M. N., and G. D., Eds. (Philosophy of Engineering and Technology). Dordrecht, The Netherlands: Springer, Dordrecht, 2013, pp. 139-150. [117] M. Arntz, T. Gregory, and U. Zierahn, "Digitization and the Future of Work: Macroeconomic Consequences," in Handbook of Labor, Human Resources and Population Economics, K. F. Zimmermann, Ed. 
Cham, Switzerland: Springer International Publishing, 2019, pp. 1-29.

[118] P. Fleming, "Robots and organization studies: Why robots might not want to steal your job," Organization Studies, vol. 40, no. 1, pp. 23-38, 2019, doi: 10.1177/0170840618765568. [119] C. B. Frey and M. A. Osborne, "The future of employment: How susceptible are jobs to computerisation?," Technological forecasting and social change, vol. 114, pp. 254-280, 2017, doi: 10.1016/j.techfore.2016.08.019. [120] A. López Peláez, "From the Digital Divide to the Robotics Divide? Reflections on Technology, Power, and Social Change," in The Robotics Divide - A New Frontier in the 21st Century?, A. López Peláez Ed. London, UK: Springer, London, 2014, pp. 5-24. [121] A. López Peláez and S. S. Sánchez-Cabezudo, "From “Singularity” to Inequality: Perspectives on the Emerging Robotics Divide," in The Robotics Divide - A New Frontier in the 21st Century?, A. López Peláez Ed. London, UK: Springer, London, 2014, pp. 195-217. [122] H. Courtney, J. Kirkland, and P. Viguerie, "Strategy under uncertainty," Harvard Business Review, vol. 75, no. 6, pp. 67-79, 1997. [Online]. Available: https://hbr.org/1997/11/strategy-under-uncertainty. [123] R. Alizadeh and L. Soltanisehat, "Stay competitive in 2035: A scenario-based method to foresight in the design and manufacturing industry," Foresight, vol. 22, no. 3, pp. 309-330, 2020, doi: 10.1108/FS-06-2019-0048. [124] A. López Peláez and D. Kyriakou, "Robots, genes and bytes: technology development and social changes towards the year 2020," Technological Forecasting and Social Change, vol. 75, no. 8, pp. 1176-1201, 2008, doi: 10.1016/j.techfore.2008.01.002. [125] H. Kosow and R. Gaßner, Methods of future and scenario analysis: overview, assessment, and selection criteria. Bonn, Germany: German Development Institute (DIE), 2008. [126] M. Amer, T. U. Daim, and A. Jetter, "A review of scenario planning," Futures, vol. 46, pp. 23-40, 2013, doi: 10.1016/j.futures.2012.10.003. [127] G. Burt, G. Wright, R. Bradfield, G. Cairns, and K. 
Van Der Heijden, "The Role of Scenario Planning in Exploring the Environment in View of the Limitations of PEST and Its Derivatives," International Studies of Management & Organization, vol. 36, no. 3, pp. 50-76, 2006, doi: 10.2753/IMO0020-8825360303. [128] D. Idier, "Science fiction and technology scenarios: comparing Asimov's robots and Gibson's cyberspace," Technology in Society, vol. 22, no. 2, pp. 255-272, 2000, doi: 10.1016/S0160-791X(00)00004-X. [129] M. Hussain, E. Tapinos, and L. Knight, "Scenario-driven roadmapping for technology foresight," Technological Forecasting and Social Change, vol. 124, pp. 160-177, 2017, doi: 10.1016/j.techfore.2017.05.005. [130] J. Rifkin, "The third industrial revolution: How the internet, green electricity, and 3-d printing are ushering in a sustainable era of distributed capitalism," World Financial Review, vol. 1, no. 1, pp. 4052-4057, 2012. [131] J. Bjerklie, "The end of work—The decline of the global labor force and the dawn of the post-market era," Voluntas, vol. 7, no. 3, pp. 318-323, 1996. [132] E. More, S. Evans, P. McCaffrey, D. Probert, and R. Phaal, "UK Manufacturing Foresight: Future Drivers of Change," in 2nd Annual EPSRC Manufacturing the Future Conference, Bedford, England, 2013. [133] S. Mazumdaru. "Is angst about China behind Germany's stricter foreign investment rules?" Deutsche Welle. https://p.dw.com/p/3AKSx (accessed Jul. 20, 2020). [134] "How to safeguard national security without scaring off investment." The Economist. https://www.economist.com/leaders/2018/08/11/how-to-safeguard-national-security-without-scaring-off-investment (accessed Jul. 20, 2020).

[135] "America and the EU are both toughening up on foreign capital." The Economist. https://www.economist.com/finance-and-economics/2018/07/26/america-and-the-eu-are-both-toughening-up-on-foreign-capital (accessed Jul. 20, 2020). [136] "Tariffs may well bring some high-tech manufacturing back to America." The Economist. https://www.economist.com/finance-and-economics/2018/09/13/tariffs-may-well-bring-some-high-tech-manufacturing-back-to-america (accessed Jul. 20, 2020). [137] "Brexit triggers a round of reshoring." The Economist. https://www.economist.com/britain/2017/10/19/brexit-triggers-a-round-of-reshoring (accessed Jul. 20, 2020). [138] "The fight with Huawei means America can’t shape tech rules." The Economist. https://www.economist.com/united-states/2020/04/23/the-fight-with-huawei-means-america-cant-shape-tech-rules (accessed Jul. 20, 2020). [139] "Coming home." The Economist. https://www.economist.com/special-report/2013/01/17/coming-home (accessed Jul. 21, 2020). [140] "Adidas’s high-tech factory brings production back to Germany." The Economist. https://www.economist.com/business/2017/01/14/adidass-high-tech-factory-brings-production-back-to-germany (accessed Jul. 21, 2020). [141] C. Cottrell. "Manufacturers move back to US from China." Deutsche Welle. https://p.dw.com/p/1DaCM (accessed Jul. 21, 2020). [142] R. C. Feenstra, "Integration of trade and disintegration of production in the global economy," Journal of Economic Perspectives, vol. 12, no. 4, pp. 31-50, 1998, doi: 10.1257/jep.12.4.31. [143] J. J. Yun, D. Won, E. Jeong, K. Park, J. Yang, and J. Park, "The relationship between technology, business model, and market in autonomous car and intelligent robot industries," Technological Forecasting and Social Change, vol. 103, pp. 142-155, 2016, doi: 10.1016/j.techfore.2015.11.016. [144] E. F. 
Villaronga, "Robots, standards and the law: Rivalries between private standards and public policymaking for robot governance," Computer Law & Security Review, vol. 35, no. 2, pp. 129-144, 2019, doi: 10.1016/j.clsr.2018.12.009. [145] M. Dalle Mura and G. Dini, "Designing assembly lines with humans and collaborative robots: A genetic approach," CIRP Annals, vol. 68, no. 1, pp. 1-4, 2019, doi: 10.1016/j.cirp.2019.04.006. [146] I. P. McCarthy, "Special issue editorial: the what, why and how of mass customization," Production Planning & Control, vol. 15, no. 4, pp. 347-351, 2004, doi: 10.1080/0953728042000238854. [147] D. J. Webb, L. A. Mohr, and K. E. Harris, "A re-examination of socially responsible consumption and its measurement," Journal of Business Research, vol. 61, no. 2, pp. 91-98, 2008, doi: 10.1016/j.jbusres.2007.05.007. [148] N. García-de-Frutos, J. M. Ortega-Egea, and J. Martínez-del-Río, "Anti-consumption for environmental sustainability: conceptualization, review, and multilevel research directions," Journal of Business Ethics, vol. 148, no. 2, pp. 411-435, 2018, doi: 10.1007/s10551-016-3023-z. [149] R. Belk, "You are what you can access: Sharing and collaborative consumption online," Journal of Business Research, vol. 67, no. 8, pp. 1595-1600, 2014, doi: 10.1016/j.jbusres.2013.10.001. [150] M. Attaran, "The rise of 3-D printing: The advantages of additive manufacturing over traditional manufacturing," Business Horizons, vol. 60, no. 5, pp. 677-688, 2017, doi: 10.1016/j.bushor.2017.05.011. [151] D. Méda, "The future of work: The meaning and value of work in Europe," Working paper 2016. [Online]. Available: https://hal.archives-ouvertes.fr/hal-01616579

[152] P. Nolan, "Shaping the future: the political economy of work and employment," Industrial Relations Journal, vol. 35, no. 5, pp. 378-387, 2004, doi: 10.1111/j.1468-2338.2004.00321.x. [153] OECD, OECD Economic Outlook, Volume 2019 Issue 1. 2019. [154] "Work-Life Balance and the Economics of Workplace Flexibility," Executive Office of the President Council of Economic Advisers, 2014. [155] F. Hecklau, M. Galeitzke, S. Flachs, and H. Kohl, "Holistic approach for human resource management in Industry 4.0," in 6th CIRP Conference on Learning Factories, Gjovik, Norway, K. Martinsen, Ed., 2016, vol. 54, pp. 1-6, doi: 10.1016/j.procir.2016.05.102. [156] "Karriere und Weiterbildung." Bundesagentur für Arbeit. https://www.arbeitsagentur.de/karriere-und-weiterbildung (accessed Jul. 21, 2020). [157] I. Kofler, E. Innerhofer, A. Marcher, M. Gruber, and H. Pechlaner, "Global Trends Shaping the World of Work," in The Future of High-Skilled Workers. Cham, Switzerland: Palgrave Pivot, Cham, 2020, pp. 13-28. [158] J. Mokyr, C. Vickers, and N. L. Ziebarth, "The history of technological anxiety and the future of economic growth: Is this time different?," Journal of Economic Perspectives, vol. 29, no. 3, pp. 31-50, 2015, doi: 10.1257/jep.29.3.31. [159] Y. Cohen, S. Shoval, and M. Faccio, "Strategic View on Cobot Deployment in Assembly 4.0 Systems," IFAC-PapersOnLine, vol. 52, no. 13, pp. 1519-1524, 2019, doi: 10.1016/j.ifacol.2019.11.415. [160] D. Autor, D. Dorn, L. F. Katz, C. Patterson, and J. Van Reenen, "The fall of the labor share and the rise of superstar firms," The Quarterly Journal of Economics, vol. 135, no. 2, pp. 645-709, 2020, doi: 10.1093/qje/qjaa004. [161] R. R. Nelson, "The market economy, and the scientific commons," Research policy, vol. 33, no. 3, pp. 455-471, 2004. [162] B. Van Looy, J. Callaert, and K. Debackere, "Publication and patent behavior of academic researchers: Conflicting, reinforcing or merely co-existing?," Research Policy, vol. 35, no. 4, pp. 
596-608, 2006, doi: 10.1016/j.respol.2006.02.003. [163] N. C. Thompson, A. A. Ziedonis, and D. C. Mowery, "University licensing and the flow of scientific knowledge," Research Policy, vol. 47, no. 6, pp. 1060-1069, 2018, doi: 10.1016/j.respol.2018.03.008. [164] A. Arora, S. Belenzon, and A. Patacconi, "The decline of science in corporate R&D," Strategic Management Journal, vol. 39, no. 1, pp. 3-32, 2018, doi: 10.1002/smj.2693. [165] O. Gretsch, F. Tietze, and A. Kock, "Firms' intellectual property ownership aggressiveness in university–industry collaboration projects: Choosing the right governance mode," Creativity and Innovation Management, vol. 29, no. 2, pp. 359-370, 2020, doi: 10.1111/caim.12354. [166] J. Henkel, S. Schöberl, and O. Alexy, "The emergence of openness: How and why firms adopt selective revealing in open innovation," Research Policy, vol. 43, no. 5, pp. 879-890, 2014, doi: 10.1016/j.respol.2013.08.014. [167] A. Arora, S. Athreye, and C. Huang, "The paradox of openness revisited: Collaborative innovation and patenting by UK innovators," Research Policy, vol. 45, no. 7, pp. 1352-1361, 2016, doi: 10.1016/j.respol.2016.03.019. [168] OECD, Main Science and Technology Indicators, Volume 2019 Issue 2. 2020. [169] P. Næss and L. Price, Crisis System: A critical realist and environmental critique of economics and the economy. Routledge, 2016. [170] A. Marshall, Principles of economics: unabridged eighth edition, 8 ed. New York NY, USA: Cosimo, Inc., 2009. [171] C. Bassett, E. Steinmueller, and G. Voss, "Better made up: The mutual influence of science fiction and innovation," Nesta Working Paper, vol. 13, no. 7, 2013.

147 [172] "About Goodreads." Goodreads, Inc. https://www.goodreads.com/about/us (accessed 23.07., 2020). [173] "goodreads.com June 2020 Overview." SimilarWeb. https://www.similarweb.com/website/goodreads.com/ (accessed 23.07., 2020). [174] "Robots Books." Goodreads. https://www.goodreads.com/shelf/show/robots (accessed 24.07., 2020). [175] I. Asimov, I, Robot (Robot Series). New York City NY, USA: Gnome Press, 1950. [176] B. Czarniawska and B. Joerges, "Do robots and companies get along in science fiction?," Entreprises et histoire, vol. 96, no. 3, pp. 72-82, 2019, doi: 10.3917/eh.096.0072. [177] "Do Androids Dream of Electric Sheep? by Philip K. Dick." Goodreads, Inc. https://www.goodreads.com/book/show/36402034-do-androids-dream-of-electric- sheep?ac=1&from_search=true&qid=70AehNtiDz&rank=1 (accessed 23.07., 2020). [178] "I, Robot by Isaac Asimov." Goodreads, Inc. https://www.goodreads.com/book/show/41804.I_Robot?ac=1&from_search=true&qid =GKWzEh6J6f&rank=1 (accessed 23.07., 2020). [179] "Robopocalypse by Daniel H. Wilson." Goodreads, Inc. https://www.goodreads.com/book/show/9634967- robopocalypse?ac=1&from_search=true&qid=UVoBz2rmK5&rank=1 (accessed 23.07., 2020). [180] "Machines Like Me by Ian McEwan." Goodreads, Inc. https://www.goodreads.com/book/show/42086795-machines-like- me?ac=1&from_search=true&qid=ZA42H6S1Sl&rank=1 (accessed 23.07., 2020). [181] "Genesis by Bernard Beckett." Goodreads, Inc. https://www.goodreads.com/book/show/6171892- genesis?from_search=true&from_srp=true&qid=QbNN6UrX0z&rank=1 (accessed 23.07., 2020). [182] "R.U.R. by Karel Čapek, Paul Selver (Translator), Nigel Playfair (Translator)." Goodreads, Inc. https://www.goodreads.com/book/show/436562.R_U_R_?ac=1&from_search=true&q id=LkQPyA5JIC&rank=4 (accessed 23.07., 2020). [183] R. Murphy and D. D. Woods, "Beyond Asimov: The Three Laws of Responsible Robotics," IEEE Intelligent Systems, vol. 24, no. 4, pp. 14-20, 2009, doi: 10.1109/MIS.2009.69. [184] A. L. 
Patterson, "Cyborg Futures: An Examination of Asimov, Gibson, and Sawyer," State University of New York at Binghamton, 2017. [185] K. Kinyon, "The Phenomenology of Robots: Confrontations with Death in Karel Čapek's" RUR"," Science Fiction Studies, vol. 26, no. 3, pp. 379-400, 1999. [186] B. King, "The Testaments; Quichotte; Machines Like Me and People Like You," Journal of Postcolonial Writing, vol. 55, no. 6, pp. 866-871, 2019, doi: 10.1080/17449855.2019.1693787. [187] S. Giffney, "The Impossibilities of Fiction: Narrative Power in Beckett's' Genesis'," English in Aotearoa, no. 74, pp. 64-70, 2011. [188] A. Szugajew, "The Human in Posthuman: Man Prevailing in the Posthuman Context of Bernard Beckett’s Genesis," FOLIO. A Students’ Journal vol. 15, no. 2, pp. 45-51, 2016. [189] J. S. Page, "Cyborgs and Robots: A Logically Ordered Existence?," Inquiries Journal, vol. 2, no. 12, 2010. [Online]. Available: http://www.inquiriesjournal.com/articles/340/cyborgs-and-robots-a-logically-ordered- existence. [190] W. Senior, "Blade Runner and visions of humanity," Film Criticism, vol. 21, no. 1, pp. 1-12, 1996.

148 [191] D. Kellner, F. Leibowitz, and M. Ryan, "Blade Runner: a diagnostic critique," Jump Cut: A Review of Contemporary Media, vol. 29, pp. 6-8, 1984. [192] M. Grech, "Technological Appendages and Organic Prostheses: Robo-Human Appropriation and Cyborgian Becoming in Daniel H. Wilson’s Robopocalypse," Word and Text, A Journal of Literary Studies and Linguistics, vol. 3, no. 2, pp. 85-95, 2013. [193] E. Soofastaei, H. Kaur, and S. A. Mirenayat, "Technocentrism and Technological Dehumanization in Daniel H. Wilson’s Robopocalypse," Theory and Practice in Language Studies, vol. 6, no. 1, pp. 34-39, 2016, doi: 10.17507/tpls.0601.04. [194] B. Czarniawska and B. Joerges, "Robotization - Then and Now," Gothenburg Research Institute, Gothenburg, ISSN 1400-4801, 2018, vol. 1. [195] A. Dafoe, "On technological determinism: a typology, scope conditions, and a mechanism," Science, Technology, & Human Values, vol. 40, no. 6, pp. 1047-1076, 2015, doi: 10.1177/0162243915579283. [196] D. Denyer and D. Tranfield, "Producing a systematic review," in The Sage handbook of organizational research methods., D. A. Buchanan and A. Bryman Eds. Thousand Oaks CA, USA: Sage Publications Ltd, 2009, pp. 671-689. [197] J. Scholtz, "Theory and evaluation of human robot interactions," in HICSS '03: Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS'03) Big Island HI, USA, 2003: IEEE Computer Society, pp. 125-135, doi: 10.1109/HICSS.2003.1174284. [198] T. Moulières-Seban, D. Bitonneau, J.-M. Salotti, J.-F. Thibault, and B. Claverie, "Human Factors Issues for the Design of a Cobotic System," in Advances in Intelligent Systems and Computing - AHFE 2016 International Conference on Human Factors in Robots and Unmanned Systems, Walt Disney World FL, USA, P. Savage-Knepshield and J. Chen, Eds., 2017, vol. 499: Springer, Cham, pp. 375-385, doi: 10.1007/978-3- 319-41959-6_31. [199] K. W. Ong, G. Seet, and S. K. 
Sim, "An implementation of seamless human-robot interaction for telerobotics," International Journal of Advanced Robotic Systems, vol. 5, no. 2, pp. 167-176, 2008, doi: 10.5772/5647. [200] L. Onnasch and E. Roesler, "A Taxonomy to Structure and Analyze Human–Robot Interaction," International Journal of Social Robotics, 2020, doi: 10.1007/s12369-020- 00666-5. [201] J. M. Bradshaw, P. J. Feltovich, H. Jung, S. Kulkarni, W. Taysom, and A. Uszok, "Dimensions of Adjustable Autonomy and Mixed-Initiative Interaction," in International Workshop on Computational Autonomy - AUTONOMY 2003: Agents and Computational Autonomy - Potential, Risks, and Solutions, Melbourne, , M. Nickles, M. Rovatsos, and G. Weiss, Eds., 2004, vol. 2969: Springer Berlin Heidelberg, in Lecture Notes in Computer Science, pp. 17-39, doi: 10.1007/978-3-540-25928-2_3. [202] M. Smithson, "Trusted autonomy under uncertainty," in Studies in Systems, Decision and Control - Foundations of Trusted Autonomy, vol. 117, H. A. Abbass, J. Scholz, and D. J. Reid Eds. Cham, Switzerland: Springer, Cham, 2018, pp. 185-201. [203] P. A. Hancock, T. T. Kessler, A. D. Kaplan, J. C. Brill, and J. L. Szalma, "Evolving Trust in Robots: Specification Through Sequential and Comparative Meta-Analyses," Human Factors, In Press, doi: 10.1177/0018720820922080. [204] F. Fossa, ""I Don't Trust You, You Faker!" On Trust, Reliance, and Artificial Agency," Teoria, vol. 39, no. 1, pp. 63-80, 2019, doi: 10.4454/teoria.v39i1.57. [205] M. Lewis, K. Sycara, and P. Walker, "The Role of Trust in Human-Robot Interaction," in Studies in Systems, Decision and Control - Foundations of Trusted Autonomy, vol. 117, H. A. Abbass, J. Scholz, and D. J. Reid Eds. Cham, Switzerland: Springer, Cham, 2018, pp. 135-159.

149 [206] J. Kirkpatrick, E. N. Hahn, and A. J. Haufler, "Trust and human-robot interactions," in Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, P. Lin, K. Abney, and R. Jenkins Eds. New York NY, USA: Oxford University Press, 2017, pp. 142-156. [207] S. M. M. Rahman, "Cognitive Cyber-Physical System (C-CPS) for human-robot collaborative manufacturing," in 2019 14th Annual Conference System of Systems Engineering (SoSE), Anchorage AK, USA, 2019: IEEE Press, pp. 125-130, doi: 10.1109/SYSOSE.2019.8753835. [208] M. Chen, H. Soh, D. Hsu, S. Nikolaidis, and S. Srinivasa, "Trust-aware decision making for human-robot collaboration: Model learning and planning," ACM Trans. Hum.-Robot Interact., vol. 9, no. 2, p. Article 9, 2020, doi: 10.1145/3359616. [209] H. Saeidi and Y. Wang, "Trust and self-confidence based autonomy allocation for robotic systems," in 2015 IEEE 54th Annual Conference on Decision and Control (CDC), Osaka, Japan, 2015: IEEE Press, pp. 6052-6057, doi: 10.1109/CDC.2015.7403171. [210] M. Coeckelbergh, "Can we trust robots?," Ethics and Information Technology, vol. 14, pp. 53-60, 2012, doi: 10.1007/s10676-011-9279-1. [211] S. Ososky, D. Schuster, E. Phillips, and F. Jentsch, "Building appropriate trust in human-robot teams," in 2013 AAAI Spring Symposium Series, Stanford CA, USA, 2013, pp. 60-65. [212] J. Wu, E. Paeng, K. Linder, P. Valdesolo, and J. C. Boerkoel, Jr., "Trust and cooperation in human-robot decision making," in AAAI 2016 Fall Symposium Series, Arlington VA, USA, 2016: AAAI Press, pp. 110-116. [213] T. Sanders, A. Kaplan, R. Koch, M. Schwartz, and P. A. Hancock, "The Relationship Between Trust and Use Choice in Human-Robot Interaction," Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 61, no. 4, pp. 614-626, 2019, doi: 10.1177/0018720818816838. [214] K. E. Schaefer, S. G. Hill, and F. G. 
Jentsch, "Trust in human-autonomy teaming: A review of trust research from the US army research laboratory robotics collaborative technology alliance," in AHFE 2018: Advances in Human Factors in Robots and Unmanned Systems, Orlando FL, USA, J. Chen, Ed., 2019, vol. 784: Springer, Cham, pp. 102-114, doi: 10.1007/978-3-319-94346-6_10. [215] P. A. Hancock, D. R. Billings, and K. E. Schaefer, "Can you trust your robot?," Ergonomics in Design, vol. 19, no. 3, pp. 24-29, 2011, doi: 10.1177/1064804611415045. [216] Y. Xie, I. P. Bodala, D. C. Ong, D. Hsu, and H. Soh, "Robot Capability and Intention in Trust-Based Decisions Across Tasks," in 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Korea (South), 2019: IEEE Press, pp. 39-47, doi: 10.1109/HRI.2019.8673084. [217] G. Teo, R. Wohleber, J. C. Lin, and L. Reinerman-Jones, "The Relevance of Theory to Human-Robot Teaming Research and Development," in Advances in Human Factors in Robots and Unmanned Systems - Proceedings of the AHFE 2016 International Conference on Human Factors in Robots and Unmanned Systems, Walt Disney World FL, USA, P. Savage-Knepshield and J. Chen, Eds., 2016, vol. 499: Springer, Cham, in Advances in Intelligent Systems and Computing, pp. 175-185, doi: 10.1007/978-3-319- 41959-6_15. [218] T. Sanders, K. E. Oleson, D. R. Billings, J. Y. C. Chen, and P. A. Hancock, "A model of human-robot trust: Theoretical model development," Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 55, no. 1, pp. 1432-1436, 2011, doi: 10.1177/1071181311551298. [219] C. P. Janssen, S. F. Donker, D. P. Brumby, and A. L. Kun, "History and future of human- automation interaction," International Journal of Human Computer Studies, vol. 131, pp. 99-107, 2019, doi: 10.1016/j.ijhcs.2019.05.006.

150 [220] G. Matthews et al., "Evolution and revolution: Personality research for the coming world of robots, artificial intelligence, and autonomous systems," Personality and Individual Differences, In Press, doi: 10.1016/j.paid.2020.109969. [221] G. Matthews, J. Lin, A. R. Panganiban, and M. D. Long, "Individual Differences in Trust in Autonomous Robots: Implications for Transparency," IEEE Transactions on Human-Machine Systems, vol. 50, no. 3, pp. 234-244, 2020, doi: 10.1109/THMS.2019.2947592. [222] T. Kessler, K. Stowers, J. C. Brill, and P. A. Hancock, "Comparisons of human-human trust with other forms of human-technology trust," Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 61, no. 1, pp. 1303-1307, 2017, doi: 10.1177/1541931213601808. [223] S. Nyholm and J. Smids, "Can a Robot Be a Good Colleague?," Science and Engineering Ethics, vol. 26, no. 4, pp. 2169-2188, 2020, doi: 10.1007/s11948-019- 00172-6. [224] F. Alaieri and A. Vellino, "Ethical Decision Making in Robots: Autonomy, Trust and Responsibility," in International Conference on Social Robotics - ICSR 2016: Social Robotics - 8th International Conference, Kansas City MO, USA, A. Agah, J.-J. Cabibihan, A. M. Howard, M. A. Salichs, and H. He, Eds., 2016, vol. 9979: Springer, Cham, in Lecture Notes in Computer Science, pp. 159-168, doi: 10.1007/978-3-319- 47437-3_16. [225] Y. Razin and K. Feigh, "Toward interactional trust for humans and automation: Extending interdependence," in 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation, Leicester, UK, 2019: IEEE Press, pp. 1348-1355, doi: 10.1109/SmartWorld-UIC-ATC- SCALCOM-IOP-SCI.2019.00247. [226] M. Kraus, N. Wagner, and W. 
Minker, "Effects of Proactive Dialogue Strategies on Human-Computer Trust," in UMAP '20: Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization, Genoa, Italy, 2020: Association for Computing Machinery, pp. 107-116, doi: 10.1145/3340631.3394840. [227] R. Pak, J. J. Crumley-Branyon, E. J. de Visser, and E. Rovira, "Factors that affect younger and older adults' causal attributions of robot behaviour," Ergonomics, vol. 63, no. 4, pp. 421-439, 2020, doi: 10.1080/00140139.2020.1734242. [228] K. E. Schaefer, T. L. Sanders, R. E. Yordon, D. R. Billings, and P. A. Hancock, "Classification of robot form: Factors predicting perceived trustworthiness," Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 56, no. 1, pp. 1548-1552, 2012, doi: 10.1177/1071181312561308. [229] J. Wright, T. Sanders, and P. A. Hancock, "Identifying the role of attributions in human perceptions of robots," Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 57, no. 1, pp. 1288-1292, 2013, doi: 10.1177/1541931213571285. [230] W. G. Volante, T. Sanders, D. Dodge, V. A. Yerdon, and P. A. Hancock, "Specifying influences that mediate trust in human-robot interaction," Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 60, no. 1, pp. 1753-1757, 2016, doi: 10.1177/1541931213601402. [231] K. Liaw, S. Driver, and M. R. Fraune, "Robot Sociality in Human-Robot Team Interactions," in International Conference on Human-Computer Interaction - HCII 2019: HCI International 2019 – Late Breaking Posters, Orlando FL, USA, C. Stephanidis and M. Antona, Eds., 2019, vol. 1088: Springer, Cham, pp. 434-440, doi: 10.1007/978-3-030-30712-7_53.

151 [232] D. Zanatto, M. Patacchiola, A. Cangelosi, and J. Goslin, "Generalisation of Anthropomorphic Stereotype," International Journal of Social Robotics, vol. 12, pp. 163-172, 2020, doi: 10.1007/s12369-019-00549-4. [233] D. Bryant, J. Borenstein, and A. Howard, "Why should we gender? the effect of robot gendering and occupational stereotypes on human trust and perceived competency," in HRI '20: Proceedings of the 2020 ACM/IEEE International Conference on Human- Robot Interaction, Cambridge, UK, 2020: Association for Computing Machinery, pp. 13-21, doi: 10.1145/3319502.3374778. [234] S. You and L. P. Robert, "Human-Robot Similarity and Willingness to Work with a Robotic Co-worker," in HRI '18: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago IL, USA, 2018: Association for Computing Machinery, pp. 251-260, doi: 10.1145/3171221.3171281. [235] M. Natarajan and M. Gombolay, "Effects of anthropomorphism and accountability on trust in human robot interaction," in HRI '20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, United Kingdom, 2020: Association for Computing Machinery, pp. 33-42, doi: 10.1145/3319502.3374839. [236] E. Schniter, T. W. Shields, and D. Sznycer, "Trust in humans and robots: Economically similar but emotionally different," Journal of Economic Psychology, vol. 78, p. 102253, 2020, doi: 10.1016/j.joep.2020.102253. [237] J. Złotowski, K. Yogeeswaran, and C. Bartneck, "Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources," International Journal of Human-Computer Studies, vol. 100, pp. 48-54, 2017, doi: 10.1016/j.ijhcs.2016.12.008. [238] T. L. Sanders et al., "Trust and prior experience in human-robot interaction," Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 61, no. 1, pp. 1809-1813, 2017, doi: 10.1177/1541931213601934. [239] S. Ososky, T. Sanders, F. Jentsch, P. Hancock, and J. Y. C. 
Chen, "Determinants of system transparency and its influence on trust in and reliance on unmanned robotic systems," in Proceedings of SPIE 9084 Unmanned Systems Technology XVI, Baltimore MD, United States, 2014, vol. 9084, doi: 10.1117/12.2050622. [240] S. Nikolaidis, M. Kwon, J. Forlizzi, and S. Srinivasa, "Planning with Verbal Communication for Human-Robot Collaboration," ACM Trans. Hum.-Robot Interact., vol. 7, no. 3, 2018, doi: 10.1145/3203305. [241] T. L. Sanders, T. Wixon, K. E. Schafer, J. Y. C. Chen, and P. A. Hancock, "The influence of modality and transparency on trust in human-robot interaction," in 2014 IEEE International Inter-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), San Antonio TX, USA, 2014: IEEE Press, pp. 156-159, doi: 10.1109/CogSIMA.2014.6816556. [242] C. Stanton and C. J. Stevens, "Robot pressure: The impact of robot eye gaze and lifelike bodily movements upon decision-making and trust," in International Conference on Social Robotics - ICSR 2014: Social Robotics - 6th International Conference, Sydney, Australia, M. Beetz, M. A. Williams, and B. Johnston, Eds., 2014, vol. 8755: Springer, Cham, in Lecture Notes in Computer Science, pp. 330-339, doi: 10.1007/978-3-319- 11973-1_34. [243] C. J. Stanton and C. J. Stevens, "Don't Stare at Me: The Impact of a Humanoid Robot's Gaze upon Trust During a Cooperative Human-Robot Visual Task," International Journal of Social Robotics, vol. 9, no. 5, pp. 745-753, 2017, doi: 10.1007/s12369-017- 0422-y. [244] M. Bergman, E. De Joode, M. De Geus, and J. Sturm, "Human-cobot teams: Exploring design principles and behaviour models to facilitate the understanding of non-verbal communication from cobots," in 3rd International Conference on Computer-Human

152 Interaction Research and Application (CHIRA), Vienna, Austria, H. P. Silva, A. J. Ramirez, A. Holzinger, M. Helfert, and L. Constantine, Eds., 2019, pp. 191-198, doi: 10.5220/0008363201910198. [245] S. D. Ciocirlan, R. Agrigoroaie, and A. Tapus, "Human-Robot Team: Effects of Communication in Analyzing Trust," in 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 2019: IEEE Press, doi: 10.1109/RO-MAN46459.2019.8956345. [246] L. Fan, M. Scheutz, M. Lohani, M. McCoy, and C. Stokes, "Do we need emotionally intelligent artificial agents? First results of human perceptions of emotional intelligence in humans compared to robots," presented at the International Conference on Intelligent Virtual Agents, Stockholm, Sweden, 2017. [247] L. Beton, P. Hughes, S. Barker, M. Pilling, L. Fuente, and N. T. Crook, "Leader- follower strategies for robot-human collaboration," in A World with Robots - International Conference on Robot Ethics: ICRE 2015, vol. 84, M. Aldinhas-Ferreira, J. S. Sequeira, M. E. Tokhi, E. Kadar, and G. Virk Eds., (Intelligent Systems, Control and Automation: Science and Engineering Cham, Switzerland: Springer, Cham, 2017, pp. 145-158. [248] S. Stadler, A. Weiss, and M. Tscheligi, "I Trained this robot: The impact of pre- experience and execution behavior on robot teachers," in 2014 23th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Edinburgh, UK, 2014: IEEE Press, pp. 1030-1036, doi: 10.1109/ROMAN.2014.6926388. [249] S. Nikolaidis, Y. X. Zhu, D. Hsu, and S. Srinivasa, "Human-Robot Mutual Adaptation in Shared Autonomy," in HRI '17: Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 2017: Association for Computing Machinery, pp. 294-302, doi: 10.1145/2909824.3020252. [250] S. Nikolaidis, D. Hsu, and S. 
Srinivasa, "Human-robot mutual adaptation in collaborative tasks: Models and experiments," International Journal of Robotics Research, vol. 36, no. 5-7, pp. 618-634, 2017, doi: 10.1177/0278364917690593. [251] H. Saeidi, J. R. Wagner, and Y. Wang, "A mixed-initiative haptic teleoperation strategy for mobile robotic systems based on bidirectional computational trust analysis," IEEE Transactions on Robotics, vol. 33, no. 6, pp. 1500-1507, 2017, doi: 10.1109/TRO.2017.2718549. [252] B. Sadrfaridpour, H. Saeidi, Y. Wang, and J. Burke, "Modeling and control of trust in human and robot collaborative manufacturing," in Robust Intelligence and Trust in Autonomous Systems R. Mittu, D. Sofge, A. Wagner, and W. F. Lawless Eds. Boston MA, USA: Springer, Boston, MA, 2016, pp. 64-70. [253] H. Kaindl, "Human-machine interaction," in International Conference on Human Interaction and Emerging Technologies - IHIET 2019: Human Interaction and Emerging Technologies, Nice, France, T. Ahram, R. Taiar, S. Colson, and A. Choplin, Eds., 2019, vol. 1018: Springer, Cham, pp. 428-433, doi: 10.1007/978-3-030-25629- 6_66. [254] A. S. Yuschenko, "Control and Ergonomic Problems of Collaborative Robotics," in Robotics: Industry 4.0 Issues & New Intelligent Control Paradigms, vol. 272, A. G. Kravets Ed., (Studies in Systems, Decision and Control Cham, Switzerland: Springer, Cham, 2020, pp. 43-53. [255] A. Bannat, J. Gast, T. Rehrl, W. Rösel, G. Rigoll, and F. Wallhoff, "A multimodal human-robot-interaction scenario: Working together with an industrial robot," in International Conference on Human-Computer Interaction - HCI 2009: Human- Computer Interaction. Novel Interaction Methods and Techniques, Pt. II - 13th International Conference, San Diego CA, USA, J. A. Jacko, Ed., 2009, vol. 5611:

153 Springer-Verlag Berlin Heidelberg, in Lecture Notes in Computer Science, pp. 303-311, doi: 10.1007/978-3-642-02577-8_33. [256] R. A. Knepper, C. I. Mavrogiannis, J. Proft, and C. Liang, "Implicit Communication in a Joint Action," in HRI '17: Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 2017: Association for Computing Machinery, pp. 283-292, doi: 10.1145/2909824.3020226. [257] D. Kontogiorgos, "Multimodal language grounding for improved human-robot collaboration : Exploring spatial semantic representations in the shared space of attention," in ICMI '17: Proceedings of the 19th ACM International Conference on Multimodal Interaction, Glasgow, UK, 2017: Association for Computing Machinery, pp. 660-664, doi: 10.1145/3136755.3137038. [258] S. Lakhmani, J. I. Abich, D. Barber, and J. Chen, "A Proposed Approach for Determining the Influence of Multimodal Robot-of-Human Transparency Information on Human-Agent Teams," in International Conference on Augmented Cognition - AC 2016: Foundations of Augmented Cognition: Neuroergonomics and Operational Neuroscience, Pt. II, Toronto ON, Canada, D. D. Schmorrow and C. M. Fidopiastis, Eds., 2016, vol. 9744: Springer, Cham, in Lecture Notes in Computer Science, pp. 296- 307, doi: 10.1007/978-3-319-39952-2_29. [259] N. Mavridis, "A review of verbal and non-verbal human–robot interactive communication," Robotics and Autonomous Systems, vol. 63, pp. 22-35, 2015, doi: 10.1016/j.robot.2014.09.031. [260] G. Rigoll, "Multimodal interaction: Methods and applications for joint cooperation between humans and cognitive systems," in Proceedings of the 2011 conference on Neural Nets WIRN10: Proceedings of the 20th Italian Workshop on Neural Nets, Vietri sul Mare, Italy, B. Apolloni, S. Bassis, A. M. Esposito, and C. F. Morabito, Eds., 2011, vol. 226: IOS Press, in Frontiers in Artificial Intelligence and Applications, pp. 273-283, doi: 10.3233/978-1-60750-692-8-273. [261] D. B. Kaber, M. 
C. Wright, and M. A. Sheik-Nainar, "Investigation of multi-modal interface features for adaptive automation of a human-robot system," International Journal of Human Computer Studies, vol. 64, no. 6, pp. 527-540, 2006, doi: 10.1016/j.ijhcs.2005.11.003. [262] M. Aubert, H. Bader, and K. Hauser, "Designing Multimodal Intent Communication Strategies for Conflict Avoidance in Industrial Human-Robot Teams," in 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO- MAN), Nanjing, China, J. J. Cabibihan, F. Mastrogiovanni, A. K. Pandey, S. Rossi, and M. Staffa, Eds., 2018: IEEE Press, pp. 1018-1025, doi: 10.1109/ROMAN.2018.8525557. [263] B. Gonsior et al., "Impacts of multimodal feedback on efficiency of proactive information retrieval from task-related HRI," Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 16, no. 2, pp. 313-326, 2012, doi: 10.20965/jaciii.2012.p0313. [264] K. Jokinen and G. Wilcock, "Modelling user experience in human-robot interactions," in International Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction MA3HMI 2014: Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction, Singapore, Singapore, R. Böck, F. Bonin, N. Campbell, and R. Poppe, Eds., 2014, vol. 8757: Springer, Cham, in Lecture Notes in Computer Science, pp. 45-56, doi: 10.1007/978-3-319-15557-9_5. [265] H. M. Gross, J. Richarz, S. Mueller, A. Scheidig, and C. Martin, "Probabilistic multi- modal people tracker and monocular pointing pose estimator for visual instruction of mobile robot assistants," in 2006 IEEE International Joint Conference on Neural

154 Network Proceedings (IJCNN), Vancouver BC, Canada, 2006: IEEE Press, pp. 4209- 4217, doi: 10.1109/ijcnn.2006.246971. [266] M. Giuliani and A. Knoll, "Using Embodied Multimodal Fusion to Perform Supportive and Instructive Robot Roles in Human-Robot Interaction," International Journal of Social Robotics, vol. 5, no. 3, pp. 345-356, 2013, doi: 10.1007/s12369-013-0194-y. [267] D. Perzanowski et al., "Toward multimodal human-robot cooperation and collaboration," in AIAA 1st Intelligent Systems Technical Conference, Chicago IL, USA, 2004, vol. 2, pp. 702-710, doi: 10.2514/6.2004-6366. [268] Z. M. Hanafiah, C. Yamazaki, A. Nakamura, and Y. Kuno, "Human-robot speech interface understanding inexplicit utterances using vision," in CHI EA '04: CHI '04 Extended Abstracts on Human Factors in Computing Systems, Vienna, Austria, 2004: Association for Computing Machinery, pp. 1321-1324, doi: 10.1145/985921.986054. [269] M. E. Foster, T. By, M. Rickert, and A. Knoll, "Human-Robot dialogue for joint construction tasks," in ICMI '06: Proceedings of the 8th international conference on Multimodal interfaces, Banff AB, Canada, 2006: Association for Computing Machinery, pp. 68-71, doi: 10.1145/1180995.1181009. [270] S. Hüwel, B. Wrede, and G. Sagerer, "Robust speech understanding for multi-modal human-robot communication," in 2006 15th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Hatfield, UK, 2006: IEEE Press, pp. 45-50, doi: 10.1109/ROMAN.2006.314393. [271] S. Li, B. Wrede, and G. Sagerer, "A computational model of multi-modal grounding for human robot interaction," in SigDIAL '06: Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue, Sydney, Australia, J. Alexandersson and A. Knott, Eds., 2006: Association for Computational Linguistics, pp. 153-160, doi: 10.3115/1654595.1654626. [272] S. Li and B. 
Wrede, "Why and how to model multi-modal interaction for a mobile robot companion," in AAAI spring symposium: interaction challenges for intelligent assistants, Stanford University CA, USA, 2007, pp. 72-79. [273] J. T. C. Tan, F. Duan, Y. Zhang, K. Watanabe, R. Kato, and T. Arai, "Human-robot collaboration in cellular manufacturing: Design and development," in 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis MO, USA, 2009: IEEE Press, pp. 29-34, doi: 10.1109/IROS.2009.5354155. [274] C. Chao, "Timing multimodal turn-taking for human-robot cooperation," in ICMI '12: Proceedings of the 14th ACM international conference on Multimodal interaction, Santa Monica CA, USA, 2012: Association for Computing Machinery, pp. 309-312, doi: 10.1145/2388676.2388744. [275] A. Zaatri and B. Bouchemal, "Some interactive control modes for planar cable-driven robots," World J. Eng., vol. 10, no. 5, pp. 485-489, 2013, doi: 10.1260/1708- 5284.10.5.485. [276] A. Cherubini, R. Passama, P. Fraisse, and A. Crosnier, "A unified multimodal control framework for human-robot interaction," Robotics and Autonomous Systems, vol. 70, pp. 106-115, 2015, doi: 10.1016/j.robot.2015.03.002. [277] Y. Hagiwara, "Cloud based VR system with immersive interfaces to collect multimodal data in human-robot interaction," in 2015 IEEE 4th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, 2015: IEEE Press, pp. 256-259, doi: 10.1109/GCCE.2015.7398709. [278] M. G. Jacob and J. P. Wachs, "Optimal Modality Selection for Cooperative Human- Robot Task Completion," IEEE Transactions on Cybernetics, vol. 46, no. 12, pp. 3388- 3400, 2016, doi: 10.1109/tcyb.2015.2506985. [279] J. Höcherl, T. Schlegl, T. Berlehner, H. Kuhn, and B. Wrede, "Smart workbench: Toward adaptive and transparent user assistance in industrial human-robot


Appendix

Appendix A Presentation after semi-structured Interviews

Presentation of Human-Machine/Robot Collaboration
Thomas Bohné & Benedikt Krieger, 05 June 2020

Slide 1: Delineation of the Topic
- Field of Robotics: Service Robots vs. Industrial Robots
- Interaction Setting: 1:1, 1:n, m:1, m:n
- Modes of Interaction: Cell, Co-existence, Synchronization, Co-operation, Collaboration

Slide 2: How are the Modes of Interaction differentiated?
A matrix of checkmarks indicates which of the criteria (open workspace, shared workspace, direct contact, shared working task, shared resource, simultaneous process, sequential process) apply to each mode (Cell, Co-existence, Synchronization/Cooperation, Collaboration).
"Collaboration is generally defined as the interaction between humans and robots to achieve a mutual goal by sharing respective resources and intrinsic skills through joint action."

Slide 3: Interest in Human-Robot Collaboration
- Fitts List: what humans are good at vs. what machines are good at (from competing to complementary skills)
- Un-Fitts List: what humans build machines to do vs. what machines need humans for (from complementary skills to interdependence)
- Human-Robot Teams: Coordination – Communication – Collaboration
- Necessity of "natural and effective modes of interaction"

Slide 4: Autonomy vs. Collaboration
- Autonomy (sense, plan, act; control ranging from human involvement to robot involvement): "The extent to which a robot can sense its environment, plan based on that environment, and act upon that environment with the intent of reaching some task-specific goal (either given to or created by the robot) without external control."
- Autonomy: reaching an individual goal individually vs. Collaboration: reaching a shared goal together (joint action)
- Autonomy is needed to realize collaboration, but should not be an end in itself when both human and robot skills are to be leveraged as complementary.

Slide 5: Design of Collaboration
Design aspects: autonomy, safety, communication, appearance, coordination, anthropomorphism, proximity, task (type and complexity), adaptability. These characteristics influence technology acceptance (Technology Acceptance Model).

Slide 6: Trust as Precursor of Acceptance (only a selection of relevant aspects is pictured)
- Robot themes: behaviour, dependability & predictability, performance, abilities, reliability, attributes, personality, adaptability
- Human themes: attention capacity, expertise & competency, operator workload, demographics, personality traits, attitudes towards robots
- External themes: in-group membership, culture, team collaboration, communication, task type, task complexity, physical environment

Slide 7: Evaluation of Interaction
Task-related metrics:
- Navigation: global navigation, local navigation, obstacle encounter
- Perception: passive perception, active perception
- Management: fan out, intervention response time, level of autonomy discrepancies
- Manipulation: degree of mental computation, contact errors
- Social: interaction characteristics, persuasiveness, trust, engagement, compliance, human role
Biasing effects:
- Communication: delay, jitter, bandwidth
- Robot response: system lag, update rate
- User: performance shaping factors (operational, equipment, task, personnel & environmental factors)
Performance metrics:
- System performance: quantitative performance, subjective ratings, appropriate utilization of mixed-initiative
- Operator performance: situation awareness, workload, accuracy of mental models of device operation
- Robot performance: self-awareness, human awareness, autonomy

Appendix B Cross-wise Comparison Results for the Spread of HRI

Columns: A1 = Author 1st Evaluation; A2 = Author 2nd Evaluation; E1 = 1st external Evaluation; E2 = 2nd external Evaluation; C = Compiled Evaluation.

Technological Protectionism               A1   A2   E1   E2      C
Foreign direct investment resistance       2    2    2    3    2.3
Reshoring production                       4    4    6    3    4.3
Technological standards and regulation     6    6    4    6    5.3

Consumption Pattern                       A1   A2   E1   E2      C
Mass-customization consumption             9    8    9    9    8.8
Dematerializing consumption                7    8    6    7    6.8
Conscious consumption                      5    3    3    4    3.7
Decentralizing production                  3    5    6    4    4.7

Workforce Emancipation                    A1   A2   E1   E2      C
Organization of labor                      8    9    6    7    7.2
Rising real salaries/wages                 3    6    8    5    5.8
Growing demand for work life balance       5    5    3    3    3.7
(Re-)Skilling offers                       8    4    7    9    7.3

Economics of Replacement                  A1   A2   E1   E2      C
Rising real salaries/wages                20   20   11   13   14.7
Demographic change                        15   16   18   15   16.2
Intensifying working conditions            7    8   11   13   10.5
Falling cost of technology                17   20   15   20   17.8
Maturing technology                       20   15   14   20   17.2
Centralizing of (economic) power          11    8   13   12   11.5
Reshoring production                      12   14   19   11   14.3
Privatization of research                 10   11   11    8    9.8

Driving Forces                            A1   A2   E1   E2      C
Technological Protectionism                3    3    3    3    3.0
Consumption Pattern                        5    5    8    5    6.0
Workforce Emancipation                     7    8    7    7    7.2
Economics of Replacement                   9    8    6    9    7.8
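The compiled values in the tables above are consistent with a two-step average: the author's two evaluations are first averaged into a single score, which is then averaged with the two external evaluations (one vote per evaluator) and rounded to one decimal. A minimal sketch of this aggregation (the function name is my own):

```python
def compiled_score(author_1st, author_2nd, external_1st, external_2nd):
    """Average the author's two evaluations into one score, then average it
    with the two external evaluations (equal weight per evaluator)."""
    author_avg = (author_1st + author_2nd) / 2
    return round((author_avg + external_1st + external_2nd) / 3, 1)

# Reproduces, e.g., the Technological Protectionism rows above:
print(compiled_score(2, 2, 2, 3))   # Foreign direct investment resistance -> 2.3
print(compiled_score(6, 6, 4, 6))   # Technological standards and regulation -> 5.3
```

The same computation reproduces the compiled column of all tables in Appendices B and C, e.g. compiled_score(20, 20, 11, 13) = 14.7 for rising real salaries/wages under Economics of Replacement.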

Appendix C Cross-wise Comparison Results for the Design of HRI

Columns: A1 = Author 1st Evaluation; A2 = Author 2nd Evaluation; E1 = 1st external Evaluation; E2 = 2nd external Evaluation; C = Compiled Evaluation.

Technological Protectionism               A1   A2   E1   E2      C
Foreign direct investment resistance       2    2    2    3    2.3
Reshoring production                       4    4    4    3    3.7
Technological standards and regulation     6    6    6    6    6.0

Consumption Pattern                       A1   A2   E1   E2      C
Mass-customization consumption             7    6    6    9    7.2
Dematerializing consumption                3    3    6    5    4.7
Conscious consumption                      7    9    6    5    6.3
Decentralizing production                  7    6    6    5    5.8

Workforce Emancipation                    A1   A2   E1   E2      C
Organization of labor                      8    8    6    9    7.7
Rising real salaries/wages                 3    4    6    5    4.8
Growing demand for work life balance       5    4    5    5    4.8
(Re-)Skilling offers                       8    8    7    5    6.7

Economics of Replacement                  A1   A2   E1   E2      C
Rising real salaries/wages                 8    8   14   11   11.0
Demographic change                        15   15   14   14   14.3
Intensifying working conditions            9   11   14   14   12.7
Falling cost of technology                10    8   14   20   14.3
Maturing technology                       20   19   14   20   17.8
Centralizing of (economic) power          17   15   14    9   13.0
Reshoring production                      16   19   14    9   13.5
Privatization of research                 17   17   14   15   15.3

Driving Forces                            A1   A2   E1   E2      C
Technological Protectionism                3    3    6    4    4.3
Consumption Pattern                        6    7    6    7    6.5
Workforce Emancipation                     6    9    6    4    5.8
Economics of Replacement                   9    5    6    9    7.3

Appendix D Science Fiction Robot Works: Goodreads Analysis

Each entry lists: Title (Author, Year of Publication); # Ratings; # of times tagged "Robots"; Time Frame; SciFi?; Relevant for Industrial Context (Reason); Summary.

- The Hitchhiker's Guide to the Galaxy (Douglas Adams, 1979); 1,397,276 ratings; tagged 31x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). Quest to answer the ultimate question, to which the answer is 42.
- Ready Player One (Ernest Cline, 2011); 831,993 ratings; tagged 11x; time frame: 33 years; SciFi: Yes; industrial: No (video game context). A race to inherit the fortune of a dead man; people have to win a video game set in the 1980s.
- Cinder (Marissa Meyer, 2012); 658,950 ratings; tagged 110x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). During a pandemic, a species from another planet waits to attack; a cyborg rises as the heroine.
- Clockwork Angel (Cassandra Clare, 2010); 644,659 ratings; tagged 10x; time frame: 1878; SciFi: Yes; industrial: No (vampires and romance context). A girl wants to find her missing brother and finds that her only allies are demons.
- Do Androids Dream of Electric Sheep? (Blade Runner) (Philip K. Dick, 1968); 336,565 ratings; tagged 117x; time frame: 53 years; SciFi: Yes; industrial: Yes (helps to inform us on how humans are treating robots that were made to help them). A person is hired to find androids gone rogue; a relationship between an android and the hunter develops.
- Scarlet (Marissa Meyer, 2013); 300,739 ratings; tagged 56x; time frame: -; SciFi: Yes; industrial: No (fairytale set in the future). A cyborg mechanic tries to break out of prison; a fugitive story, fleeing from a "vicious queen".
- I, Robot (Isaac Asimov, 1950); 269,251 ratings; tagged 254x; time frame: near future; SciFi: Yes; industrial: Yes (universal character of the three laws of robotics that can be applied to an industrial context as well). Short stories illustrating the shortcomings and potential conflicts arising from the three laws.
- Cress (Marissa Meyer, 2014); 264,436 ratings; tagged 36x; time frame: -; SciFi: Yes; industrial: No (fairytale set in the future). A cyborg and a captain try to overthrow a queen and her army.
- Saga, Vol. 1 (Brian K. Vaughan, 2012); 240,976 ratings; tagged 18x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). Love and adventure story between two creatures from different planets in the midst of those two planets going to war against each other.
- The Restaurant at the End of the Universe (Douglas Adams, 1980); 227,494 ratings; tagged 16x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). Fleeing from the "warlike Vogons" in search of somewhere to eat while travelling through space.
- Life, the Universe and Everything (Douglas Adams, 1982); 192,200 ratings; tagged 14x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). Saving life in the face of annihilation by robots.
- Winter (Marissa Meyer, 2015); 183,717 ratings; tagged 40x; time frame: -; SciFi: Yes; industrial: No (fairytale set in the future). A princess and a cyborg form an alliance to end a war.
- The Invention of Hugo Cabret (Brian Selznick, 2007); 158,106 ratings; tagged 12x; time frame: -; SciFi: Yes; industrial: No (personal tragedy context). After the death of his father, Hugo lives in the train station with a booklet describing an automaton; the story evolves around this booklet.
- Illuminae (Amie Kaufman & Jay Kristoff, 2015); 108,000 ratings; tagged 15x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). After a personal love-story break-up, the planet is invaded and more problems surface; the ex is the only help.
- Saga, Vol. 2 (Brian K. Vaughan, 2013); 87,861 ratings; tagged 13x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). Fleeing from an interplanetary conflict, meeting more characters, among them a "Prince Robot IV".
- The Caves of Steel (Isaac Asimov, 1954); 77,696 ratings; tagged 142x; time frame: 1000+ years; SciFi: Yes; industrial: No (interplanetary travel/life context). A detective has to uncover a murder on earth of a "spacer" together with a robot which looks like the victim.
- Ancillary Justice (Ann Leckie, 2013); 77,634 ratings; tagged 24x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). A human seeks vengeance against AI starships.
- Saga, Vol. 3 (Brian K. Vaughan, 2014); 75,610 ratings; tagged 11x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). Continuation of other Saga works.
- The Long Way to a Small, Angry Planet (Becky Chambers, 2014); 69,229 ratings; tagged 11x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). Adventure built around space travel and the travel through wormholes.
- Sleeping Giants (Sylvain Neuvel, 2016); 67,631 ratings; tagged 75x; time frame: -; SciFi: Yes; industrial: No (while analysing interesting ethical dilemmas, those have no relation to the industrial context because the robots in the book are not created by humans). A girl falls in a hole and lands on a robotic hand; her life mission from then on is to find the rest of the robot.
- Prelude to Foundation (Isaac Asimov, 1988); 65,151 ratings; tagged 16x; time frame: 1000+ years; SciFi: Yes; industrial: No (interplanetary travel/life context). Search for a human able to predict the future in order to protect one's own power (in the hands of robots).
- The Windup Girl (Paolo Bacigalupi, 2009); 63,229 ratings; tagged 18x; time frame: -; SciFi: Yes; industrial: No (robot not used as an entity to reflect philosophical questions, rather a mere side-kick). A human wants to discover food thought to be lost; the story explores post-human dilemmas; the role of the robot is not clear.
- The Stonekeeper (Kazu Kibuishi, 2008); 61,614 ratings; tagged 19x; time frame: -; SciFi: Yes; industrial: No (kids' life context). Two kids lose their father, move to the house of their grandfather and embark on a mystery with a robot rabbit against demons.
- All Systems Red (Martha Wells, 2017); 61,438 ratings; tagged 127x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). An android gone rogue has to collaborate with humans on a space mission.
- Stars Above (Marissa Meyer, 2016); 59,629 ratings; tagged 11x; time frame: -; SciFi: Yes; industrial: No (fairytale set in the future). Continuation of other Marissa Meyer works.
- Foundation and Earth (Isaac Asimov, 1986); 53,995 ratings; tagged 12x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). Every being can speak for all and feel for all; it is a realm in which privacy is not only undesirable, it is incomprehensible. Is this right? The answer lies back …
- Lock In (John Scalzi, 2014); 53,515 ratings; tagged 27x; time frame: near future; SciFi: Yes; industrial: No (health story). A virus that locks people's bodies physically spirals into a story on human culture.
- Partials (Dan Wells, 2012); 52,162 ratings; tagged 12x; time frame: -; SciFi: Yes; industrial: No (war between species). A kid set in a world of only a few humans and mostly other beings wants to answer questions about the origin of the war before it comes back to life.
- We are Legion (We are Bob) (Dennis E. Taylor, 2016); 50,676 ratings; tagged 11x; time frame: 100+ years; SciFi: Yes; industrial: No (interplanetary travel/life context). A dead human's mind is uploaded into a robot for extraterrestrial life.
- Etiquette & Espionage (Gail Carriger, 2013); 42,745 ratings; tagged 17x; time frame: -; SciFi: Yes; industrial: No (kids' life context). A young girl is sent to a boarding school to learn manners; it turns out not only etiquette is taught but also espionage.
- The Naked Sun (Isaac Asimov, 1956); 41,942 ratings; tagged 110x; time frame: 1000+ years; SciFi: Yes; industrial: No (interplanetary travel/life context). A murder story playing with Asimov's three laws of robotics.
- Chobits, Vol. 1 (CLAMP, 2001); 38,223 ratings; tagged 11x; time frame: -; SciFi: Yes; industrial: No (domestic robot (android) context). Somebody takes a broken robot home and discovers the robot is actually more capable than initially thought.
- The Robots of Dawn (Isaac Asimov, 1983); 37,720 ratings; tagged 92x; time frame: 1000+ years; SciFi: Yes; industrial: No (interplanetary travel/life context). A murder story playing with Asimov's three laws of robotics.
- Robopocalypse (Daniel H. Wilson, 2011); 34,970 ratings; tagged 133x; time frame: near future; SciFi: Yes; industrial: Yes (implications for technology that surrounds us; the analogy could fit well in an industrial context). Robots unite for a war against humans; some individual humans recognize glitches leading up to the war.
- Artificial Condition (Martha Wells, 2018); 33,639 ratings; tagged 68x; time frame: -; SciFi: Yes; industrial: No (violence between humans and robots). The android wants to find out why it killed the humans, the discovery of which changes the way it thinks.
- Absolute …, Vol. 1 (2003); 33,611 ratings; tagged 10x; time frame: -; SciFi: Yes; industrial: No (romantic context). Love story between human and robot.
- Waking Gods (Sylvain Neuvel, 2017); 33,067 ratings; tagged 45x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). Continuation of Sleeping Giants; robots land on earth for war after being awakened.
- A Closed and Common Orbit (Becky Chambers, 2016); 33,023 ratings; tagged 22x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). Sequel to The Long Way to a Small, Angry Planet.
- Saga, Vol. 7 (Brian K. Vaughan, 2017); 31,834 ratings; tagged 12x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). Continuation of other Saga works.
- The Stepford Wives (Ira Levin & Peter Straub, 1972); 28,318 ratings; tagged 15x; time frame: -; SciFi: Yes; industrial: No (domestic robot (android) context). A society pursuing youth and beauty at all costs; women are robotized.
- Rogue Protocol (Martha Wells, 2018); 26,507 ratings; tagged 65x; time frame: -; SciFi: Yes; industrial: No (violence between humans and robots). Continuation of other Martha Wells works.

Appendix D Continuation

- The Wild Robot (Peter Brown, 2016); 25,532 ratings; tagged 105x; time frame: -; SciFi: Yes; industrial: No (robot adventure). A castaway story with a robot being cast away on an island with only animals.
- Robots and Empire (Isaac Asimov, 1985); 25,465 ratings; tagged 60x; time frame: -; SciFi: Yes; industrial: No (war between species). Robots plan to destroy earth.
- Exit Strategy (Martha Wells, 2018); 23,880 ratings; tagged 60x; time frame: -; SciFi: Yes; industrial: No (violence between humans and robots). Ending the Martha Wells work in that series; the android wants to return to humans, but will anybody befriend/protect him when they discover he killed …
- Variant (Robinson Wells, 2011); 21,422 ratings; tagged 17x; time frame: -; SciFi: Yes; industrial: No (kids' life context). Kids in an academy cannot escape, but there is a bigger secret.
- Glitches (Marissa Meyer, 201…); 20,618 ratings; tagged 11x; time frame: -; SciFi: Yes; industrial: No (violence between humans and robots). Continuation of other Martha Wells works.
- Only Human (Sylvain Neuvel, 2018); 20,519 ratings; tagged 30x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). See Waking Gods; the main human character returns to earth to resolve a dispute among former colleagues.
- Ungifted (Gordon Korman, 2012); 17,994 ratings; tagged 19x; time frame: -; SciFi: Yes; industrial: No (kids' life context). A kid is sent to a special academy and realizes there are more "talents" than intelligence equally important for life.
- Machines Like Me (Ian McEwan, 2019); 16,477 ratings; tagged 12x; time frame: 1980; SciFi: Yes; industrial: Yes (approaches philosophical questions arising from newly advancing technologies from a personal perspective). A guy in love buys one of the first robots and shapes its personality; hints of romance are included as well.
- Zita the Spacegirl (Ben Hatke, 2011); 15,573 ratings; tagged 14x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). A friend is abducted by an alien cult; the kid leaps with the abductors and finds herself in a world of robots and humanoids.
- Head On (John Scalzi, 2018); 15,384 ratings; tagged 10x; time frame: near future; SciFi: Yes; industrial: No (sports context). Robots are set to take the head off each other and carry it through goal posts as entertainment for humans.
- Descender, Vol. 1: Tin Stars (Jeff Lemire, 2015); 15,014 ratings; tagged 45x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). Androids are outlawed on all planets; androids try to survive regardless.
- The Complete Robot (Isaac Asimov, 1982); 13,812 ratings; tagged 40x; time frame: -; SciFi: Yes; industrial: No (as the three laws by Asimov are already included, it cannot be perceived how this work would add extra perspectives). Summary: -
- City (Clifford D. Simak, 1952); 13,187 ratings; tagged 13x; time frame: -; SciFi: Yes; industrial: No (set after the end of human civilization). A bunch of short stories surrounding dogs that can talk and robots as their servants.
- … and Other Stories (Isaac Asimov, 1976); 12,104 ratings; tagged 19x; time frame: -; SciFi: Yes; industrial: No (robot adventure in a world of humans). A domestic robot aspires for more and accomplishes fame and fortune.
- Autonomous (Annalee Newitz, 2017); 11,503 ratings; tagged 36x; time frame: -; SciFi: Yes; industrial: No (romantic context). An epidemic caused by a drug; in the middle, a love story between a sergeant and a military robot.
- MILA 2.0 (Debra Driza, 2013); 11,308 ratings; tagged 35x; time frame: -; SciFi: Yes; industrial: No (domestic robot (android) context). A small kid is actually a programmed robot; people start chasing her because of the technology and her knowledge about being programmed.
- Genesis (Bernard Beckett, 2006); 11,122 ratings; tagged 14x; time frame: -; SciFi: Yes; industrial: Yes (poses philosophical questions arising from newly advancing technologies). A person has to take a history test for an academy; through the test she discovers old questions about technology and philosophy still hold relevance.
- Network Effect (Martha Wells, 2020); 11,072 ratings; tagged 33x; time frame: -; SciFi: Yes; industrial: No (violence between humans and robots). Continuation of other Martha Wells works.
- The Cyberiad (Stanislaw Lem, 1965); 10,028 ratings; tagged 19x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). A bunch of short stories involving love stories and other societal topics on various planets.
- Robot Dreams (Isaac Asimov, 1986); 9,728 ratings; tagged 28x; time frame: -; SciFi: Yes; industrial: No (collective work of Asimov; as the three laws by Asimov are already included, it cannot be perceived how this work would add …). Summary: -
- Defy the Stars (Claudia Gray, 2017); 9,515 ratings; tagged 22x; time frame: -; SciFi: Yes; industrial: No (interplanetary conflict). Enemies in an interstellar war are forced to work together; the longer they work together, the more they question what they have been taught.
- Robot Dreams (Sara Varon, 2007); 9,078 ratings; tagged 25x; time frame: -; SciFi: Yes; industrial: No (robot adventure). The story follows a dog and a robot through their friendship.
- The Vision, Vol. 1: Little Worse Than A Man (Tom King, 2016); 9,034 ratings; tagged 14x; time frame: -; SciFi: Yes; industrial: No (a family of robots). The story evolves around a robot family that resides among us and whether that augurs something bad.
- Robot Visions (Isaac Asimov, 1990); 8,759 ratings; tagged 29x; time frame: -; SciFi: Yes; industrial: No (collective work of Asimov; as the three laws by Asimov are already included, it cannot be perceived how this work would add …). Summary: -
- The Iron Man (Ted Hughes, 1968); 8,736 ratings; tagged 11x; time frame: -; SciFi: Yes; industrial: No (robot adventure in a world of humans). A robot saves the world from outer space dangers.
- Alex + Ada, Vol. 1 (Jonathan Luna, 2013); 8,545 ratings; tagged 39x; time frame: near future; SciFi: Yes; industrial: No (romantic context). A robot enters the life of a human; even though the human initially does not like the robot, he soon discovers the robot is more than what it seems.
- Nothing Can Possibly Go Wrong (Prudence Shen, 2013); 8,426 ratings; tagged 23x; time frame: -; SciFi: Yes; industrial: No (kids' life context). On who becomes class representative depends whether a new cheerleading uniform or a robotics competition will be funded.
- R.U.R. (Karel Capek, 1920); 7,949 ratings; tagged 26x; time frame: -; SciFi: Yes; industrial: Yes (robots built to replace humans as labor). Robots are solely used for dull labor; at some point they emancipate and revolt but struggle to discover how to replicate themselves.
- LIFEL1K3 (Jay Kristoff, 2018); 7,679 ratings; tagged 26x; time frame: -; SciFi: Yes; industrial: No (robot adventure). A story on a bunch of different types of robots going on an adventure.
- Beta (Rachel Cohn, 2012); 7,428 ratings; tagged 10x; time frame: -; SciFi: Yes; industrial: No (robot as clone). An empty robot is created for a 16-year-old girl; suddenly she experiences emotions and has to hide them.
- Sea of Rust (C. Robert Cargill, 2017); 7,339 ratings; tagged 38x; time frame: -; SciFi: Yes; industrial: No (set after the end of human civilization). A robot is searching for meaning.
- The Wild Robot Escapes (Peter Brown, 2018); 7,331 ratings; tagged 30x; time frame: -; SciFi: Yes; industrial: No (kids' life context). Can the robot which adapted to the animals living on the island also find its way in the civilized world? Continuation of The Wild Robot.
- Saturn's Children (Charles Stross, 2008); 7,000 ratings; tagged 18x; time frame: 200+ years; SciFi: Yes; industrial: No (set after the end of human civilization). A sex robot, now without purpose, agrees to embark on an adventure of interplanetary travel.
- Descender, Vol. 2: Machine Moon (Jeff Lemire, 2016); 6,984 ratings; tagged 24x; time frame: -; SciFi: Yes; industrial: No (interplanetary travel/life context). Continuation of Descender, Vol. 1.
- Alex + Ada, Vol. 2 (Jonathan Luna, 2015); 6,740 ratings; tagged 26x; time frame: near future; SciFi: Yes; industrial: No (romantic context). An android is unlocked and they fall in love, but the world becomes hostile to androids; continuation of Alex + Ada, Vol. 1.
- PLUTO: Naoki Urasawa x Osamu Tezuka, Band 001 (Naoki Urasawa, 2004); 6,654 ratings; tagged 19x; time frame: distant future; SciFi: Yes; industrial: No (violence between humans and robots). Humanoid robots that can pass for humans; the seven great robots are targets of murder, and the investigator is himself a target.
- The Scorpion Rules (Erin Bow, 2015); 6,179 ratings; tagged 13x; time frame: -; SciFi: Yes; industrial: No (robot as watcher over children). World leaders have to give one of their children as hostage; when they go to war, the kid dies; machines watch the kids.
- The Mechanical (Ian Tregillis, 2015); 5,930 ratings; tagged 13x; time frame: -; SciFi: Yes; industrial: No (robots as fighters and servants). See Reason; at some point the robots realize they should be free though.
- The Positronic Man (Isaac Asimov, 1992); 5,695 ratings; tagged 30x; time frame: 10+ years; SciFi: Yes; industrial: No (domestic robot (android) context). A robot wants to become fully human.
- The Boy Who Crashed to Earth (Judd Winick, 2015); 5,631 ratings; tagged 15x; time frame: -; SciFi: Yes; industrial: No (kids' life context). Hilo fell from the sky; a friendship story with other kids involved.
- Robogenesis (Daniel H. Wilson, 2014); 5,386 ratings; tagged 34x; time frame: -; SciFi: Yes; industrial: No (war between species). The superpower AI has survived the war and is spread in bits and pieces; continuation of Robopocalypse, more dystopian.
- Boy + Bot (Ame Dyckman, 2012); 5,179 ratings; tagged 77x; time frame: -; SciFi: Yes; industrial: No (kid and robot friendship). They try to understand each other while they are "switched off".
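The screening above reduces each Goodreads work to a small record of metadata plus an industrial-relevance verdict with a reason. A minimal sketch of how such records could be encoded and filtered (field and variable names are my own; the three example rows are copied from the table):

```python
from dataclasses import dataclass

@dataclass
class SciFiWork:
    title: str
    year: int
    ratings: int       # number of Goodreads ratings
    robot_tags: int    # times tagged "Robots" on Goodreads
    industrial: bool   # judged relevant for an industrial context?
    reason: str

# Three rows copied from Appendix D:
works = [
    SciFiWork("I, Robot", 1950, 269_251, 254, True,
              "Three laws of robotics applicable to an industrial context"),
    SciFiWork("R.U.R.", 1920, 7_949, 26, True,
              "Robots built to replace humans as labor"),
    SciFiWork("Cinder", 2012, 658_950, 110, False,
              "Interplanetary travel/life context"),
]

# Keep only the works judged relevant for the industrial context:
relevant = [w.title for w in works if w.industrial]
print(relevant)  # ['I, Robot', 'R.U.R.']
```

The same filter applied to the full table yields exactly the works marked "Yes" in the industrial-context column (Do Androids Dream of Electric Sheep?, I, Robot, Robopocalypse, Machines Like Me, Genesis, R.U.R.).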

Appendix E Notes of Interview #1

A – Author
I – Interviewee

Introduction

A: Thanks! I aim to shed light on the currently observable trends shaping the future of HRI on a mid- to long-term perspective of approximately 20 years by conducting a scenario analysis. In order to gain a more detailed understanding of how different future scenarios might impact HRI specifically, relevant design aspects of HRI will be analyzed from the perspective of the respective future scenarios.
Driving Forces:
- Technology Protectionism
- Workforce Emancipation: organization of labor, (re-)skilling offers
- Consumption Pattern: mass customization, conscious consumption
- Economics of Replacement: falling costs of technology, rising wages, maturing technology
Eight questions, so pithy answers; two back-up questions if we finish early or if we have more time. Let's start.
I: With Terminator as the touchstone and the inception of dystopian stories on robots. But Terminator is not to be seen as stationary: across its several movies, an evolution of thought on AI in terms of religion as well as of the dystopian storyline occurs. SciFi changes through its own meta-narratives. Religion in SciFi: religion changes over time, and so does its relation to SciFi. If I want to get an idea of the impact of SciFi on reactions, I have to do a broad archeology over time and areas. Golden age: strong scientific approach, then diluted. Researcher at Cambridge; interview work on how and whether SciFi has a direct impact on AI research; I unpicked the influences: what do AI researchers read and what impact does it have on their work?

Technology Trajectories

A: Are we currently witnessing a change in our value system? What does it take for a change in values and ethics to be introduced?
I: Yes, but it is more intricate than superficially obvious. Artisanal movement as an example: a privileged class buying hand-made artifacts; goods and services that promote using non-algorithmic products. TV series "Human": a character calls another human that is actually a robot; when the human realizes, it raises the questions: could there be disclaimers? Do I interact with a real human? People want a more tailored experience, but the push-back is uneven, coming primarily from the privileged; those that are not privileged are in no situation to omit that technology. The UK government blames the algorithm, but transparency stops there. The level of transparency only goes so far: robotic systems are not visible once they become very commonplace; the artisanal experience is not valued enough (e.g., the washing machine). It is very much about where we can make sacrifices to efficiency and where people are privileged enough. So for some people or countries it just won't be possible. SciFi shapes the narrative: the assumption that there'll be humanoid robots makes roboticists go for the humanoid form. Movie: a human character arriving at King's Cross hands the ticket to a person instead of a ticket machine: why would we take that step back? Some themes are too unrealistic:
- R.U.R. deemed too unrealistic by my standards.
- War robots holding guns: human essentialist.
Some assumptions are shaped by the human form: what we think and what we would aspire to produce rather than what is actually logical.
A: SciFi creating images of human-looking robots shapes human fears about them.
I: Theme: if humans take the place of god as creator, we will have a problem. Frankenstein: if you create something as smart as us, we are in trouble. However, we forget that raising children is what we actually do. Parents have this moment when they realize: I have produced a separate human; that is echoed in our stories of robots, echoed in our fears about robots. Those visceral reactions will shape robotic forms: either fulfilling the android dystopian form or a cute form like "Mirror", a cross between dog and bunny. Idea of hacking our emotions and anthropomorphism.
A: Does the proliferation of the next level of technology (the one related to I4.0) manifest current power structures or bring about mass-scale disruption?
I: The colloquialism is rubbish in, rubbish out. When data is broadly construed and fed into an algorithm, you get back out what you fed in. There is a danger in that, in that it doesn't push beyond: Kuhn's paradigm shift; you have to step beyond what everyone is thinking to make steps forward. The problem with narrow AI and the robotic systems that implement it: it isn't possible to make those leaps. Story: AlphaGo beat Lee Sedol with a move that was "so so so unexpected" -> stories broke saying Lee Sedol was so shocked he left the room; but he actually didn't, a couple of moves later he went outside to take a smoke. The way I see it: the move was still in the data; it is unexpected for us, but it actually is in the data -> not a "divine" move -> story-telling. In this case it didn't supersede us. AI in the future is more likely to perform actions that we react to extraordinarily, but it still only replicates data in ways unexpected to us, so the problem still lies in the data. Therefore, it is actually not possible for the algorithm to shift our perceptions, because it is still grounded in the same system. Boundary: data. My greater concern: narrow, smaller versions, which maintain the status quo in smaller corporations. Example from a tech conference: they used visual processing to recognize when crops are failing. Question: are you worried about any other applications of this technology? No, there is no concern. Think outside your own patterns. I call it algorithmic thinking in humans: R&D, product, sell, R&D, product, sell, ... It depends on what you're working on: if you're working on general AI, data might be the boundary, but for many this is not the concern. Others aim for: how much data can I get hold of to make money?

Spread/Design

A: Are ideas on the design of AI/HRI/HRC converging globally? I: There is a lot of data supporting a dichotomous world, western vs. eastern. But it is much more complicated than that. In the talk: "techno-orientalism: what it comes down to is that Japanese culture, Shinto, believes in spirits in everything; therefore, obviously they have a more positive view on robots". BUT: there are also dystopian stories in Japan; the plans of the Japanese government and socio-political aspects are more dystopian. Example: interviews at robot hotels. Talked to the manager of a hotel: Japanese culture, they like robots: Why do you think the Japanese like robots? 1. Students at schools are encouraged to study STEM subjects more. 2. Kids like dinosaurs (the hotel is also located in proximity to Disneyland). There is a dinosaur robot in the foyer, the first one you meet entering the hotel, and it is actually controlled by a human in the back: an illusion of reaction. So the robot hotel is less about robots and more about dinosaurs -> indicative that why some cultures like robots more than others is more complicated. Techno-orientalism is also about the denial of the animistic past in Europe; here we have this narrative that we went through the Enlightenment and became rational, got rid of animistic stories; all other cultures didn't go through that, so we are free of this thinking. But it's nonsense. A: How confident are you that technology will reach human sophistication and on what time horizon? I: I am clued in, but not too clued in; the general public is even less clued in. There are claims that Boston Dynamics videos are not entirely accurate; rather an aspirational manifestation: wanting it to be true when it isn't (e.g., the Hanson robot). The ones that seem particularly advanced lead the conversation. Agnostic that we'll ever get the agility of humans: physical agility is currently inversely proportional to mental agility: can beat us in Go, but can't pick up a mug.

Conversation with a taxi driver: his understanding of AI: Artificial Insemination -> lack of public awareness of the topic. He can think about automated cars: they would never replace the taxi driver, because they would never have the local knowledge humans have. Example: best hotel; star ratings? The taxi driver negates that because humans know the "local culture". Disjoint in public discourse from what is possible now. Public discourse is driven by press stories of fringe successes rather than everyday successes.

Science Fiction

A: How is society evolving given the evolution of AI? I: Common response: technology transforms far quicker than we realize, which is certainly true; in general, we should be concerned with advances at the average insurance company, which are not visible. Project at the Royal Society of Arts: citizen jury; like focus groups, almost. Questions for the jury: If you went to a doctor and the decision was made by the system, would you want to know? -> Investigates the break-down of transparency. When those people were given an introduction to AI and robots without SciFi elements, they had good points to make about transparency, but it was in a setting of specific, directed informing, whereas the general public is not getting that experience. Easy to say "education is the solution to everything". Talk to 7-year-olds who had a class on robots. Two questions for them: What robots do you know? They knew exclusively SciFi robots. What will robots be able to do? Their ideas were very specific, but at the center was servitude: tidy their rooms, explore black holes. Shows: literacy around SciFi forms of robots is formed quite early on. -> If we are not educating people as well about the invisible killer robots, they are not going to have awareness. Goes back to artisanal crafting: when you can't tell the difference, you don't have a choice. A: How are parameters such as trust, confidence, communication in and with technology evolving with the evolution of AI? I: Eroding trust: that is one aspect: I am looking at dystopic stories; like "an algorithm beat the world's best Go player -> we're doomed". There are a lot of tweets saying we're doomed; but there are also a lot of tweets from people who trust robots too much, reaching faith in some instances (like the feeling of being blessed by the algorithm; e.g., the gig economy, a particular day with lots of rides as an Uber driver). Note: I am doing qualitative, not quantitative, research.
No dichotomy between one extreme of trust and the other; it's a spectrum, with some representations of robots spanning the spectrum. Terminator (dystopian) vs. robot politicians (there is no such thing as a purely rational algorithm, because they are created by humans). Determinism vs. constructivism: how we achieve technological developments is muddled and confused, but there is the assumption that humanity has only one way to go (explore the world and universe, or are there alternatives?). The bipedal Boston Dynamics robot makes no sense energetically, but it fulfills the android aspirations of SciFi.

Appendix F Notes of Interview #2 A – Author I – Interviewee

Introduction

A: Thanks! I aim to shed light on the currently observable trends shaping the future of HRI on a mid- to long-term perspective of approximately 20 years by conducting a scenario analysis. In order to gain a more detailed understanding of how different future scenarios might impact HRI specifically, relevant design aspects of HRI will be analyzed from the perspective of the respective future scenarios. Driving forces: - Technology Protectionism - Workforce Emancipation: organization of labor, (re-)skilling offers - Consumption Pattern: mass customization, conscious consumption - Economics of Replacement: falling costs of technology, rising wages, maturing technology. Eight questions, so pithy answers are appreciated; two back-up questions if we finish early or have more time. Let's start.

Technology Trajectories

A: Are we currently witnessing a change in our value system? What does it take for a change to be introduced into society? Could the ecological change we are currently witnessing lead to a societal change? I: I don't know; drawing on experience in teaching and instructional documentation: what the interviewer is asking is, do people have the ability to accurately self-reflect? I have seen few cases. To be able to accurately look inward and understand what you want, what you need, what you are, and how to get where you want to be takes a lot of energy, takes the skills from before. Part of my job is to interview: the operator usually thinks they are smarter than they are. Engineers think they are regular people and that what they're saying is understandable. -> If you cannot look at who you are and what you are in society, you cannot think about what has to change. The interviewer is missing the media: even if you can self-reflect, if you don't get accurate information about society and technology's impacts, you can't make good decisions.

Spread/Design of HRI

A: Are public entities, like state universities, losing/having control over technology trajectories from a scientific, not regulatory, perspective? Is there a catch-up game going on between private and public entities? I: Don't really know; experience is: wherever the money is. The corporate world finances projects at universities, which in turn influences the research agenda (private finances public). A: What are the educational values that should be embedded in the curriculum as we face this technological transformation? I: - Empathy - Ability for abstract/creative/philosophical thinking ➔ bringing the humanities back. You can train people to build something, but for educating someone, the humanities are needed, as they induce reflection on what it means to be human, how people will react to technology, and how technology shapes society. The tie back into SciFi themes: lead characters, protagonists, are usually the ones questioning society: Why are we doing this? What does this mean? They see, for one reason or another, society being manipulated, brainwashed, etc.

Science Fiction

A: How are humans reacting towards being increasingly molded with technology? I: Different reactions: - I, Robot: hesitation and fear; especially the short story on Robbie, who was a nurse and babysitter; people were going on the street, "I will kill the monster" -> fear. - Cyberpunk/Neuromancer: stories are more dystopian; it is just how life is, everything is cybernetic and digital. There is a moment of realization: who is funding it and to what end? Why does somebody want to steal all of these high-end digital parts and pay for these massive biological and technological upgrades? A lot of questioning around: once the machines cross the line to consciousness, do you start questioning all of it? This inevitable conflict is also seen in Asimov: why are the machines doing this? - Mindscan: starts in a society much like ours now; the android version moving back into society is faced with a lot of hesitation: all of the biological friends shun the android version of their biological friend. Anxiety in the android: this mechanical being knows it is mechanical; when it touches something, it is not feeling but sensing it. All previous relationships are broken: it has a whole new life and a new place in society; it is not duplicating the human; broken mind-body connection. A: Did the android try to integrate itself into society, and how did it try to do so? I: Once the android realizes it is mechanical and the human realizes it is mechanical, there is no interaction anymore. Technically the story continues; the android goes on to a new life. The paths split. -> The ending does not address what the interviewer is looking for. A: I, Robot: What are the associations from that book to our society and the way we handle robotics in industry? I: Three laws of robotics: a good ethical standard for robots; as we move from robots to AI, and from rule-based coding to machines creating their own rules, having something hard-programmed into the robot is good for safety; a very long way away.
Current industrial robots: they don't have intention; they are really dumb, just executing program code. There is a problem with the education system: for all of this high-tech industry, we really need to think about ethics: intended and unintended consequences. But everyone is so focused on their specialty: robotics engineers don't take ethics or philosophy courses. So there should be guidelines and rules. A: How are parameters such as trust, confidence, communication evolving with the evolution of cyborgs? I: People are becoming more and more accepting of technology - if you work with tech, you don't have the fear - if you don't work with it, you have the fear. As much as I work with technology, the idea of trusting a robot is far away: the only robot that is there: Sophia. I do not trust robots; it is not fear, but rather that I have seen industrial robots fail for reasons that were not discovered for months. At some point there will be teamwork with robots, but that is a long way away; robots are still tools, and even 20 years in the future they are going to be tools. The question of trust and whether it is a team member: we are not there yet. A: Research on trust and acceptance might lead the way to approaching philosophical issues associated with robots? I: Absolutely good for raising awareness; however, knowing the people I know, that is not going to mean anything to them. 20 years is a short trajectory in terms of changing people's minds and changing society. Until you get another generation that grows up with robots, you will find people that fear robots. A: What are the incentives to create cyborgs? I: Safety; mostly in Asimov, in space exploration. Different societal trends can also affect the technology we are researching. Example: huge population boom: bigger and better AI needed to process data. GMO (genetically modified organism) food: tame, alter, curb, manipulate nature. Not taught to think about it critically, not allowed to think about it critically. Think through the ethical implications of this. A: We need a crisis to realize something is wrong. I: Yep, and we are seeing it now. If the private sector is still driven by profit, why would the attitude change? It is easier to externalize -> disposable economy, disposable ecology. The author is too optimistic about people's self-reflection. Constant manipulation through advertisements. If people could think through that and there wasn't the capitalism and corruption, we would not have had to save the whales, we would not have people denying climate change -> you need a global societal shift to get the change needed. What is changing on that trajectory? I would like to say education, but that seems far-fetched as well. All signs are pointing in a dystopian direction -> Questions I see: what is the point of no return? And when is it? Core question: what does the trajectory look like? If we don't think of the unintended consequences, we can only react. And I do not know if we are successful in reacting. There is always going to be some evil in humanity. A: How is society itself going to change? I: See examples in SciFi: usually ruinous; I think that's not entirely true, because society now is neither dystopian nor utopian. Also, we can't predict how technology is shaping society as a whole. Example: loss of jobs; there are so many other factors that influence joblessness. Pandemic: what is happening to the population could vastly change society.
A lot of children currently see their parents in home office trying hard and sometimes failing to combine work and family -> how does that shape their thinking? Demanding less work, or maybe rejecting having kids? Work intensity is increasing now that people are being laid off and the work is distributed among those remaining -> we are hitting a point of living at work; what is going to be the kick-back from that? Nobody wants to work anymore?

Appendix G Notes of Interview #3 A – Author I – Interviewee

Technology Trajectories

A: Are we currently witnessing a widespread change of values? I: From a production perspective: can HRC be linked to sustainability? In a production that generates waste, introducing approaches such as Lean Production can lead to resource efficiency, and perhaps HRC is also a means of making production lean. Example cars: perhaps HRC can help produce different drivetrain technologies on one line. More generally, for AI -> ethical AI as an example of a change in values. Awareness of which values have to be programmed in; with AI one has to pay attention to this, also with AI-driven robots. The accuracy values that are needed. A trade-off between efficiency and other design considerations has to be observed. Example: machinery directives; the toolpath is to be optimized, but after some time the machine will eventually function differently from how it was originally certified -> uncertainty arises because questions emerge about the scope of the original certifications. Thus the benefits of possible leeway in certifications have to be weighed against prohibiting such developments. Standards and directives play a major role here. Central question: How autonomous do you let a system become? Which auditing systems have to be deployed? A: What influence do standard-setting bodies have? I: From a company's perspective, boards etc. have two functions: - Through cooperation you can control the competition, so you can make sure nobody gets away from you technologically. - You want to be proactive and show that you have the awareness and implement standards and guidelines on your own. In this way you try to take away public institutions' impetus to intervene with hard regulation. Example: self-certification: an in-house, TÜV-like certificate for AI; this is voluntary, but at EU level there are ideas to make something like this mandatory. The question is how much documentation effort is demanded: documentation slows things down but creates assurance; on the other side, you can develop new products faster. -> The time horizon in the scenarios also depends on this. There are still no dedicated regulations for AI: 1-2 years ago it was actually believed that existing regulations would suffice. In the new EU Commission, a dedicated directive is being considered. The question for HRC is then: Will there be a dedicated directive, or will a general AI directive have to be applied to HRC? -> That is an exogenous factor in the scenarios that influences the trajectory.

A: When it comes to AI and robots: Where do you see similarities, where do you see differences? I: In principle, it is the physical: robots are mechanized components that are actuated. AI is one form of actuation. AI can be so many things and also not be so many things. That is why every form of use-case planning becomes incredibly complex. An entire factory controlled by AI -> possible in the process industry, vs. having a humanoid robot perform individual work steps. Beyond that: with AI as the control and drive variant of a robot, a certain autonomy factor will always be involved. A robot can also be controlled 1:1. An AI can never be stripped of its autonomy. That is the problem why diffusion and application are so difficult: the AI is a black box -> lack of standardization. The EU has therefore launched an initiative: data pools and standard algorithms are provided so that a certain synergy effect arises. The question is, how big is the advantage of participating and how big is the effort? If the data is obfuscated to the point that little remains recognizable, is it still worthwhile for others to do something with it? If the data is prepared, a lot can possibly be inferred about the company, and you still have the preparation effort. In my master's thesis interviews: very split opinions -> some were in favor of sharing everything except data related to the competitive advantage, while others say nothing at all should be shared. A: Motivation for this initiative? I: On the one hand, they want to gain insights into how a data transfer can be regulated and how a basis can be created for experiments towards standardized AIs. -> That is also a form of suggested value change: changing management's attitude towards a data economy by generating trust towards the open sharing of data.

Spread

A: What is the motivation to introduce AI at your company? I: Two relevant points: - Certainly the attempt to increase efficiency, reduce susceptibility to failure, or lower costs. - Rapid prototyping: many AI processes are slow, especially at the beginning. Example: before an AI project starts, the pay-off at the end is often uncertain. The final result is unknown, so it is questionable whether a business case emerges. This is the motivation for the AI lab: you build prototypes and look at whether it still makes sense and whether there are first tangible results. Up to that point, not many resources have been used, so development can continue as long as it makes sense. The work happens in a workshop setting: someone from the business unit and someone from the development department, the AI lab, and then you let them develop together. To get AI productive in industry, both technology expertise and domain expertise are needed. The lab has existed for two years; of course there is a ramp-up phase where you have to advertise it. By now it is known and the business units approach the lab. What does not exist yet are collaborations with external partners without the involvement of a business unit; it is, so to speak, a staff unit. A: Taking a bigger picture, is there a societal role in the spread of AI/HRI/HRC in companies? If yes, what is it? I: In the business world a shift has taken and is taking place: from a shareholder to a stakeholder approach; that is, responsibility not only towards the economy but also towards society. This was intrinsically motivated. "Each of our employees is also a human being." But the shift is also about prestige, and for some products about legal obligations, which can end in financial obligations if rules are not complied with. At the same time, it is an economic trade-off question: how much effort do you really invest to make the thing foolproof, and how much residual risk do you accept in the end? Quality control has to be in place; depending on where it is deployed, it has to be decided how risky that setting is. Is it a risk to infrastructure, or a risk to the human user? On the question of how to distinguish such risks, there are currently approaches specifically for AI, e.g., classifying algorithms according to risk; then the algorithm may have to be disclosed and made auditable. For HRI, a different risk assessment has to be made depending on how the interaction is designed.
A: Are there national differences here? I: There are large national differences: for one, it depends on the production structure in a country: a country that is advanced will be further along, because such technologies are further along there. That is why the EU Commission also instructs the individual countries to work out their own policies. Depending on the country's status: to kick off the topic or to make the topic production-ready. How such rules can slow down innovation and how they can spur it on is tied to values. Compared to the USA or China, which both try to advance as quickly as possible, the focus in Europe is first to create a sense of security, within which development is driven forward sustainably. -> "Human-centred" in the EU; at some point, others copy the EU model. Over time, convergence takes place. The EU set up ethical principles and a checklist, and at some point Beijing came along and thought up something similar. At the beginning this may have been superficial, but it is being elaborated.

Design

A: Where is the knowledge on AI/HRI/HRC coming from to introduce AI/HRI/HRC at your company? I: The biggest difference is between the public sphere and where research is done: university research and private research are very interlinked. In city XY there is an initiative in which companies and universities research together; they have a lobby perspective; here, knowledge exchange between industries and within industries is cultivated. Apart from that, there are many research collaborations. Example: individual researchers financed by companies, but also chairs financed by the public purse. Example: his company tries to get basic research co-financed by the state so that the risk does not lie with my company alone, because in the end there has to be a business case. Prerequisite: involvement of other partners; my company will always try to bring in a start-up, a university, or the Fraunhofer. In the end: the scientists know each other and know who is working on what. A: Are there conflicts of interest? I: Funds are made available for a concrete project. One could cheat and co-finance side projects, but there is no real conflict. A: Follow-up on conflicts over IP. I: I am not deep enough into IP. View from the outside: no idea how it happens, cannot estimate how fast it goes. A: Are ideas about the design of AI converging globally? I: There are the levels of development and use; those are different worlds. - Use will not change that quickly. - Products, however, converge somewhere globally. The mentality of the population is still different, so there will be differences in acceptance. However: if there is a need for a technology, acceptance creates itself when the technology solves the problem. Example: if 10 % of the vehicles out there are autonomous and cause significantly fewer accidents, people will switch. This can be applied to any technology: the traditional technology adoption curve. On ethical standards etc.: people will have to meet in the middle; from a European perspective, you will not be able to slow everything down. At the same time, you do not want your AI product to come from China, uncertified and uncontrolled. So you have to open up or find certification mechanisms. -> Balance between safety and speed. The current antitrust problems at Amazon and Google show how problematic something is when it was done too fast. AI has enormous potential, or at least it is attributed to it. So there is an awareness that this is the area where we have to get it right. -> You want to keep pace with development. The question will be: Do I have to do everything I can do? So far it has been done that way, but the question will be whether it has to go the same way with AI, because AI is the technology where this consideration can make a difference. -> Downsides that can affect all areas of life, and upsides that can affect all areas of life. ML supervised and unsupervised learning: at some point, unsupervised should become stronger, but then it gets even more difficult, because responsibilities can no longer be assigned.

Appendix H Notes of Interview #4 A – Author I – Interviewee

Introduction

A: Thanks! There are 9 questions, so pithy but relevant answers are appreciated; alternatively, a written response to questions on which more has to be said. Term clarification: HRI – Human-Robot Interaction / HRC – Human-Robot Collaboration. A: What is I40? I: Enriching the manufacturing environment with digital solutions, connected assets, leveraging data to improve assets in smart ways. Autonomous vehicles, robots, and HRI are big elements in the whole digitalization effort.

Technology Trajectories

A: Are we witnessing a reorientation of our value system? I: Strong yes, we are; at the same time, the change is not as disruptive and quick as it reads in the articles. People don't change that quickly. People, in particular with respect to sustainability, enjoy expressing their moral stance on issues, but when it comes down to action and consumption choices, they opt for the typical articles. There is a delay between moral consciousness and actual change in behavior. We know about sustainability, but that does not mean we behave accordingly. The business side is facilitating by making options accessible to us. When we have the option, if we can choose between sustainable and non-sustainable, most people go for sustainable; companies see it as a differentiator to gain market share. A: What are the mechanisms shaping the future of robots? I: 1st: Efficiency; it is the factor that drives major efficiency; example of projects at clients: you can come up with fancy stuff, but you need a business case -> especially in times of Covid. 2nd: Agility; being able to act much quicker on changes; one, in customer demand, but also in your supply chain; part of risk management. Being quicker than the competition. 3rd: Innovation; robots will enable us to work even more precisely, with materials that are more difficult to handle; it will unlock new opportunities. A: How confident are you that technology will reach human sophistication and on what time horizon? I: I foresee there is a change, but it is not as easy and quick as you think. Need to differentiate between greenfield vs. brownfield companies. Example: Tesla; Musk was able to drive innovation because he started from scratch. Problems arise with legacy assets, legacy systems, also legacy capabilities; a complete culture that holds back innovation, which makes it much harder to shift paradigms in your company. Differentiation between transformation and disruption: once old players are too slow to adapt, and technology is readily available, new players can rapidly start from scratch. Again example: automotive: traditional players had to act because of the sheer pressure put on by Tesla. Legacy companies saw the consumer traction that Tesla garnered; they couldn't ignore it.
Having said that, it is a huge challenge. Talking about HRI: the automotive sector is quite well developed, the chemical industry is not well developed at all. It is very asset-intensive, an industry that is hard to disrupt because of legacy process knowledge; they invite us to pitch: how can we drive innovation. When you go to the ground, they are so far away from even thinking of I40; there is a huge disparity between the vision and the system in place. Change the whole culture, HR (Human Resources), equip the people with the right capabilities. A lot of projects stay within pilot and MVP (Minimum Viable Product); hard to scale.

Design of HRI

A: Do you perceive a societal role in the development of HRI/HRC use-cases in companies? If yes, what is it – e.g., labor unions, consumers, NGOs?

I: I am a capitalist, a strong believer in the market and the forces within the market; life is not always fun, things happen. The alternative is not a better one; it sucked (socialism). Example: taxi vs. Uber; taxi drivers are screwed, have to accept lower margins; then you come with politics, and they said: Uber, you can play your game, but within the rules. In the Netherlands and Germany, the social security net is quite strong; in the end, this is a contributing factor to having a sustainable business environment and driving innovation. Having said that, at the same time: in the US the social system, you could argue, is much weaker, and still all the big innovations come from the US.

A: Sweden as an example: leave of absence to start a company, with the right to go back into the old job if the start-up fails. I: Difference between the Swedish model and the US model: US model -> innovation with a stick, Swedish model: carrot. Resolution: the societal role should be to promote risk-taking, incentivizing entrepreneurial activities, while mediating risk-averse social security like strong labor unions trying to maintain the status quo. The question is not: How strong should social security policies be? Rather: Who should they target? How can they incentivize risk-taking? Humans look for the road of least resistance; that's why I do not believe in socialism. It does not work; you need to incentivize people to work hard, and always work. As soon as you get too comfortable, creative power dies off. A: Who are the initiators of (re-)skilling offers and who calls for (re-)skilling offers? I: The initiators are quite often the consultants and the C-suite in the company. Labor unions are rather blocking, as they see the change of which they are afraid. Reskilling, upskilling, yes; BUT in the end it is about transformation: some people will lose, some will gain. This is hard to swallow for labor unions; if you keep everyone on board, the ones having a future are held back. Best practice: consultants engaged with unions from the start; accepting them as a fact of life, taking them through the journey, and making them part of the solution instead of part of the problem. Having said that: it is not easy. Challenge as a consultant: essential rightness; if the story makes sense, you should be able to convince anyone, even your fiercest opponents. Initiators: company leadership with trusted advisors, because they are aware of the need for change but don't know where and how to begin. Some leaders are afraid to act, so they go into an iterative changing mode, which is a self-fulfilling prophecy of failing, because you need a comprehensive transformation plan.

Spread of HRI

A: Where is the knowledge on HRI/HRC coming from to introduce HRI/HRC? I: First off: not sufficient collaboration in general; 80 % of the knowledge comes from best practices: cross-industry learnings, benchmarking against the competition. BUT: that way you will never get past the leader in an industry. The remaining 20 %: innovation-driven workshops with people from the company and sector experts; that is where cool ideas come from, but companies have to take it step by step. BUT: for some companies, step by step is not enough; they need to make a big leap, and they are capable of doing so, having the financial means and the urgency. But often: fixing the basics is not sexy, so they take too long and cannot take the next step in time.

A: Trickle-down in the media: from scientific media to public media to projects, because science is too focused and not as easily transferable. I: A big reason for that is the lack of vision from leadership; too many leaders are just managers who grew up into the role; they are too far away from the actual innovation. So when you confront them with innovation, they see it as too long a journey, don't have the patience, or don't have the luxury because shareholders pressure them for short-term returns. There is a lack of leadership to be bold and to drive innovation more closely with academia.

Spread and Design of HRI

A: Disparity between academia and industry in the use cases for robots: automation vs. collaboration. Do you see a future for collaboration? I: Industry now: full automation of production lines. But there is a huge potential for HRC: projects within environments that are not as easily accessible as car manufacturing. Example: process industry, turn-around of a refinery: closed down for weeks for maintenance, which is largely done by humans as it is different every time. As robots develop (Boston Dynamics), there is huge potential in maintenance. -> More efficient, more effective, also safer

Appendix I Notes of Interview #5 A – Author I – Interviewee

Introduction

A: Thanks! There are 9 questions, so pithy but relevant answers are appreciated. Term Clarification: HRI – Human-Robot Interaction/ HRC – Human-Robot Collaboration

Technology Trajectories

A: Are we witnessing a reorientation of our value system? I: Yes, in consumption: air travel, products, electric vehicles, but also through the incentivization of hybrids, for example, we are experiencing a shift. Example: the green supply chain, influential in Europe and becoming more influential in Asia and America as well. It influences which supply chains deliver and which products are bought.

A: So there is not only a shift in the end-consumer sphere but also throughout the industrial sphere? I: In part, these requirements already exist today (for example, in medical technology), but especially in the consumer goods sector there are initiatives on traceability in the supply chain. A: Will that rather be a niche and a differentiator, or rather a phenomenon the manufacturers cannot get around? I: There is something to that, but it is difficult to attribute it to customer segments. Rather, there is a strong pull for supply-chain traceability from the manufacturers themselves, who can benefit from traceability through several trends: 1. Supply chains are no longer stable; rather, there is a greater variabilization of the supply chain. Hence it can happen that I have to involve previously unknown suppliers. 2. Not sure that it is a differentiator; it may be. A: What are the mechanisms shaping the future of robots? I: That depends very strongly on the industry. For example, parts of consumer goods manufacturing are only weakly automated due to technical challenges, yet have high volumes. With or without collaboration, the potential comes with a process-reliable application. If, in addition, efficiency is increased or quality improved, then robotic solutions are preferred. Deployment becomes more likely with cheaper robots; collaborative robots are getting cheaper too. Example: primary packaging is currently done manually, for instance legs of meat or other limp parts, or the laying of window seals. Challenge today: either collaborating cells or fixed robots, but the development is heading toward several collaborative robots in one cycle. Collaborative robots are limited by restrictive factors such as payload and reach. There are a few systems that can go beyond the previous limits, but there are physical boundaries. Stability and precision of the application have high priority; if that is brought under control with better mechanics, then there is significant potential. A: But is not only efficiency a driver, but also the shortage of workers and labor costs? I: Absolutely. Example: assisting robots; at Liebherr, the welding of complex and inaccessible structures was automated even though volumes are small, because no workers could be found.

Design of HRI

A: Do you perceive a societal role in the development of HRI/HRC use-cases in companies? If yes, what is it – i.e., labor unions, consumers, NGOs? I: Beyond efficiency: 1. Safety and ergonomics, protection from hazardous environments; here, above all, the collaborative approach finds opportunities. When collaboration is considered: first for work with higher health risks. 2. In jobs where a residual risk remains: example: performing inspections on an oil rig; in warfare, the topic of risk is very present, but it only becomes industrially relevant once a certain potential for scaling is reached. A: Who are the initiators of (re-)skilling offers and who calls for (re-)skilling offers? I: There are large initiatives that have been dealing with Industry 4.0 for a while now. Some, but not all, have requalification strategies. In part, the labor unions are on board as well. Why? The transformation cannot be stopped and ought to be supported. The corresponding skills need to be developed: not only programming; rather track and trace and the mass data analyses or regression analyses arising from the problem of constant movement: we have solved many problems, we are strong, but everything moves, the application is not 100 % stable, a lot has to be optimized -> big data handling, recognizing patterns. And this not with data scientists but with people who understand the process. That has to grow together. -> analytical thinking. Certain job profiles lose relevance; in line with a long-term strategy, these are replaced. Who initiates? Mostly initiated by the customer; the management is aware of the necessary changes.

Spread of HRI

A: Where is the knowledge on HRI/HRC coming from to introduce HRI/HRC? I: Use cases and best practices are usually not a trigger; they are rather an aid for argumentation. To be a trigger or a template, the applications would have to be more standardized; they are too specific. Therefore, an outside-in approach to bring the customer to the knowledge frontier is not possible. Use cases are pin-points when the customer does not find a solution or an approach on their own initiative. Then work happens in small ecosystems: so-called ideation, proofs of concept, supplier competitions, hackathons. In these formats, learning by doing takes place for the customer. Interest, however, is triggered via benchmarking at the top management level. Not directly with the people in the processes, though; there it is more promising to develop very specific solutions together. For this, established mechanisms exist by now. Example: scouting, innovation funnel. What goes into the funnel? Inputs from all kinds of directions: trade fair visits, academia, ... But always, from very early on, with an eye on scaling. It is important to be able to demonstrate very early on that what I am piloting in a use case is also a potential solution for many further applications.

Backup

A: Do you see any differences in industrial HRI/HRC use cases between cultures? I: Especially in industrial automation there are many commonalities. Asian cultures: much more open to assistance robots and service robots; societal acceptance is higher there. During the time at Fraunhofer, first experiments on societal acceptance were conducted in Germany: older people were afraid and thought of physical dangers, so the shapes became ever softer, the colors rather white than green, and the robot was given a face. Still, willingness and trust are less pronounced in Europe; in Japan, for example, there is a higher willingness to experiment, and people have fewer reservations about contact. In industry, however, the same problems are widespread. In China, though, the situation is different again and is changing very fast. Example: Foxconn is increasingly developing into an OEM; in a WEF initiative, Foxconn was highlighted for its lights-out factory initiative. Haier, too, is investing heavily in automation. A: Do you see a spill-over from consumer robots into industrial robotics? I: The spill-over rather happens in the opposite direction: service robotics does not make money yet, even at the big manufacturers; funds and R&D come from industrial robotics and in certain cases find their use in service robotics. Industrial robotics defines the standards for service robotics. In industry: I have to automate, otherwise I am out of the game. Example: wire harness assembly is extremely manual; laying the harness in the car is extremely manual. If one of the 5-15 suppliers manages it: winner takes all, because the competitive advantage is so large. Whoever gets closest to the tipping point of automating the laying end-to-end, or decisively at key points, immediately gains dominance in the market. Hence there is a race: the labor unions play along here too, because they are aware that jobs will be lost, but of course many more are lost if the race is lost.

TRITA-ITM-EX 2021:6

www.kth.se