DEGREE PROJECT IN MECHANICAL ENGINEERING, SECOND CYCLE, 30 CREDITS STOCKHOLM, SWEDEN 2019

Safety system design in human-robot collaboration
Implementation for a demonstrator case in compliance with ISO/TS 15066

CAROLIN SCHAFFERT

KTH ROYAL INSTITUTE OF TECHNOLOGY SCHOOL OF INDUSTRIAL ENGINEERING AND MANAGEMENT

Safety system design in human-robot collaboration

Implementation for a demonstrator case in compliance with ISO/TS 15066

Carolin Schaffert

Master of Science Thesis TPRMM 2019
KTH Industrial Engineering and Management
Production Engineering
SE-100 44 Stockholm

Abstract

A close collaboration between humans and robots is one approach to achieve flexible production flows and a high degree of automation at the same time. In human-robot collaboration, both entities work alongside each other in a fenceless, shared environment. These workstations combine human flexibility, tactile sense and intelligence with robotic speed, endurance, and accuracy. This leads to improved ergonomic working conditions for the operator, better quality and higher efficiency. However, the widespread adoption of human-robot collaboration is limited by the current safety legislation. Robots are powerful machines, and without spatial separation from the operator the risks increase drastically. The technical specification ISO/TS 15066 serves as a guideline for collaborative operations and supplements the international standard ISO 10218 for industrial robots. Because ISO/TS 15066 represents the first draft of a coming standard, companies have to gain knowledge in applying ISO/TS 15066. Currently, the guideline prohibits a collision with the head in transient contact. In this thesis work, a safety system is designed which is in compliance with ISO/TS 15066 and in which certified safety technologies are used. Four theoretical safety system designs with a laser scanner as a presence-sensing device and a collaborative robot, the KUKA lbr iiwa, are proposed. The system either stops the robot motion, reduces the robot's speed and then triggers a stop, or only activates a stop after a collision between the robot and the human has occurred. In system 3, the size of the stop zone is decreased by combining the speed and separation monitoring principle with the power- and force-limiting safeguarding mode. The safety zones are static and are calculated according to the protective separation distance in ISO/TS 15066. A risk assessment is performed to reduce all risks to an acceptable level and leads to the final safety system design after three iterations. As a proof of concept, the final safety system design is implemented for a demonstrator in a laboratory environment at Scania. With a feasibility study, the implementation differences between theory and practice for the four proposed designs are identified and a feasible safety system behavior is developed. The robot reaction is realized through the safety configuration of the robot, in which three ESM states are defined to use the internal safety functions of the robot and to integrate the laser scanner signal. The laser scanner is connected as a digital input to the discrete safety interface of the robot controller. To sum up, this thesis work describes the safety system design with all implementation details.


Sammanfattning

A close collaboration between humans and robots is one way to achieve flexible production flows and a high degree of automation at the same time. In human-robot collaboration, both entities work together in a shared environment without safety fences. These workstations combine human flexibility, tactile sense and intelligence with robotic speed, endurance and accuracy. This leads to improved ergonomic working conditions for the operator, better quality and higher efficiency. However, the widespread adoption of human-robot collaboration is limited by the current safety legislation. Robots are powerful machines, and without spatial separation from the operator the risks increase drastically. The technical specification ISO/TS 15066 serves as a guideline for collaborative operations and supplements the international standard ISO 10218 for industrial robots. Because ISO/TS 15066 represents the first draft of a coming standard, companies have to gain knowledge in applying ISO/TS 15066. Currently, the guideline prohibits a collision with the head in transient contact. In this thesis work, a safety system is designed which complies with ISO/TS 15066 and in which certified safety technology is used. Four theoretical safety system designs with a laser scanner as a presence-sensing device and a collaborative robot, the KUKA lbr iiwa, are proposed. The system either stops the robot motion, reduces the robot's speed and then triggers a stop, or only activates a stop after a collision between the robot and the human has occurred. In system 3, the size of the stop zone is decreased by combining the speed and separation monitoring principle with the power- and force-limiting safeguarding mode. The safety zones are static and are calculated according to the protective separation distance in ISO/TS 15066. A risk assessment is performed to reduce all risks to an acceptable level and leads to the final safety system design after three iterations. As a proof of concept, the final safety system design is implemented for a demonstrator in a laboratory environment at Scania. Through a feasibility study, the implementation differences between theory and practice for the four proposed designs are identified and a feasible safety system behavior is developed. The robot reaction is realized through the safety configuration of the robot, where three ESM states are defined to use the robot's internal safety functions and to integrate the laser scanner signal. The laser scanner is connected as a digital input to the discrete safety interface of the robot controller. In summary, this thesis work describes the safety system design with all implementation details.


Acknowledgments

This degree project is part of the master's degree in Production Engineering and Management at KTH Royal Institute of Technology in Stockholm, Sweden. It is equivalent to 30 credits out of a total of 120 credits and was carried out over 20 weeks from January to June 2019. The thesis was conducted in the Smart Factory at Scania AB in Södertälje, Sweden.

I am grateful for all the support and advice I have received during this challenging but definitely educational time. First of all, I would like to thank my company supervisor Juan Luis Jiménez Sánchez. Without his help, the results in this thesis work would not have been achieved. He solved many robot programming issues with me and was always willing to answer all my questions. Additionally, I would like to thank Fredrik Ore for his overall support and in particular for his knowledge in the risk assessment. Further, I would like to express my appreciation to my academic supervisor from KTH Royal Institute of Technology, Xi Vincent Wang, for his counsel. His valuable input and criticism helped me to identify the improvement areas of my system and definitely led to a better result. Also, I would like to thank my colleagues in the Smart Factory at Scania. It was a lot of fun working with you and being part of such a young and innovative team.

Finally, I would like to thank Fredrik Lilkaer for his encouragement and enthusiasm for my thesis throughout the last months. Lastly, a big thank you goes to my parents for supporting me throughout my entire bachelor and master program, no matter where I decided to move.


Table of Contents

1 Introduction
  1.1 Background
  1.2 Problem description
  1.3 Research questions
  1.4 Research approach
  1.5 Delimitations
  1.6 Outline
2 Literature review
  2.1 Human-robot collaboration
    2.1.1 Levels of interaction
    2.1.2 Industrial applications
  2.2 Safety in human-robot collaboration
    2.2.1 General safety aspects
    2.2.2 Risk reduction process
    2.2.3 Safety standards
    2.2.4 Discussion of safety standards
    2.2.5 Assignment of safety technologies to safeguarding modes
  2.3 Safety systems in human-robot collaboration
    2.3.1 Safety technologies in research
    2.3.2 Certified safety technologies
    2.3.3 Safety system design in research
3 Proposed safety system design
4 Methodology
  4.1 Case scenario description
  4.2 Risk assessment process
  4.3 Overall system structure
  4.4 Specification of hardware and software
    4.4.1 Industrial path solutions
    4.4.2 KUKA lbr iiwa
    4.4.3 Laser scanner
5 Implementation and analysis of the safety system
  5.1 Classification of use case
  5.2 Implementation differences between theory and practice
  5.3 Feasible system behavior of the proposed safety system designs
  5.4 Risk assessment
  5.5 Calculation of safety zones
  5.6 Safety system configuration of the demonstrator case


    5.6.1 Safety configuration of ESM states
    5.6.2 Circuit diagram
    5.6.3 Configuration of the laser scanner
    5.6.4 Integration in Java
  5.7 Analysis of the safety system
6 Conclusion
  6.1 Answers to research questions
  6.2 Limitations and challenges
  6.3 Recommendations and future work
7 References
Appendix A – Risk assessment and layout
Appendix B – Stopping time and stopping distance of KUKA lbr 14
Appendix C – Java code of the robot application


Table of figures

Figure 1: Method of this thesis work
Figure 2: Levels of interaction between the robot and the human [4]
Figure 3: Flowchart to determine the levels of interaction based on three characteristics [14]
Figure 4: Steps of the risk reduction process according to ISO 12100
Figure 5: Safeguarding modes combined with the levels of interaction [14]
Figure 6: Graphical representation of the protective separation distance
Figure 7: Proposed safety system designs
Figure 8: Pedal car assembly in the Smart Factory
Figure 9: Generic station layout of the demonstrator case
Figure 10: Task allocation of the human and the robot
Figure 11: System structure of the HRC cell
Figure 12: System behavior of the proposed safety system designs
Figure 13: Safety system behavior in combination with the demonstrator case
Figure 14: Final safety design layout of the HRC cell
Figure 15: Safety system behavior of the demonstrator case
Figure 16: Visual indication on the media flange
Figure 17: Configuration of the ESM states and their activation in the process sequence
Figure 18: Configuration of ESM state 1
Figure 19: Configuration of ESM state 3
Figure 20: Configuration of ESM state 2
Figure 21: Limiting of the hand-guiding motion with safety spheres and protected space monitoring
Figure 22: Circuit diagram for the laser scanner connection
Figure 23: Cabling of the devices
Figure 24: Configuration of the laser scanner

Table of tables

Table 1: Relevant standards
Table 2: Example of risk assessment 1
Table 3: Calculation of the protective separation distance
Table 4: Description of the implemented risk reduction measures and their safety function

Table of equations

Equation 1: Protective separation distance based on ISO/TS 15066
Equation 2: Maximum permissible speed limit for transient contact
Equation 3: Constant protective separation distance


Abbreviations

AGV Automatic guided vehicle

AMF Atomic monitoring function

EDM External device monitoring

EN European norm

ESM Event-driven safety monitoring

HDDM High definition distance measurement

HRC Human-robot collaboration

IMU Inertial measurement unit

ISO International Organization for Standardization

I/O Input/output

IPS Industrial Path Solutions

LBR Lightweight robot

OSSD Output signal switching device

PFL Power- and force-limiting

PL Performance level

PLC Programmable logic controller

PSM Permanent safety monitoring

SIL Safety integrity level

SSM Speed and separation monitoring

TCP Tool center point

TS Technical specification

2D Two-dimensional

3D Three-dimensional


1 Introduction

This introduction gives a brief background of the research area as well as the reasons for the research, the resulting objective, and the research questions.

1.1 Background

The increased demand for customized products results in a wider variety of product variants. Along with shorter product life cycles, smaller lot sizes, and fluctuating demands, production systems need to be flexible and quickly adaptable to changing conditions [1, 2]. Both flexibility in production flows and a high degree of automation can be realized by a close linkage between robots and human operators [1, 3], also referred to as "human-centered automation" [2]. In contrast to conventional robotic cells, where the robot is surrounded by a fence to prevent the operator from interacting with the robot in any way, in human-robot collaboration (HRC) the two work together in a fenceless, shared environment. Humans are restricted in force and precision but can quickly adapt to new processes [3]. Hence, they perform complex tasks which require tactile sense whereas robots carry out difficult, monotonous and exhausting activities [3]. HRC workstations connect human flexibility, tactile sense and intelligence with robotic speed, strength, endurance, accuracy, and repeatability [2, 3]. The combination of these characteristics leads to great potential for increased productivity and quality as well as for improved ergonomic conditions for the operators [2, 3].

With the introduction of affordable, fenceless lightweight robots (lbr) such as the KUKA lbr iiwa, more companies take advantage of the opportunities which robots create in assembly [4]. Nevertheless, according to a study conducted by Bender et al. for Fraunhofer in 2016, mostly human-robot coexistence can be found in industry, whereas human-robot collaboration is not yet widely implemented [4]. Only one out of 25 evaluated HRC applications represents a collaborative interaction, whereas 60% are coexistent and about 30% are synchronized or cooperative [4]. The difference between the interaction modes is explained in chapter 2.1.1.

Obstacles to the widespread adoption of HRC are loosely defined standards, uncertainties about increased operational efficiency and the effects on the jobs of blue-collar workers [4]. The main reason why human-robot coexistence is still dominant is the aspect of safety, i.e. the current safety legislation. For collaborative operations, only a technical specification, namely ISO/TS 15066, is defined. The close contact between the human and the robot, which possesses significantly more strength, implies a higher probability of serious accidents for the operator [5]. In addition to the risks presented by the robot, the unpredictability of human behavior, especially their movements, must be considered [6]. As formulated by science fiction writer Isaac Asimov as the first rule in the 'Laws of Robotics' in 1942, a robot should not injure a human [7]. A high safety level through proven safety systems [8] and an understanding of the risks for the operator is a prerequisite for any human-robot collaborative task [5]. Due to the specific safety needs of each HRC use case, an individual safety concept is required for every HRC application [6].

1.2 Problem description

Efficient and high-quality processes are crucial to remain competitive. Multiple processes in manual assembly still require the tactile sense, flexibility, and precision of the operator, e.g. inserting non-rigid parts or positioning washers and screws. To improve the ergonomic working conditions for the operator and to increase productivity, robots should work alongside humans. Despite the obvious benefits of HRC installations, the current safety legislation limits the implementation of HRC use cases in the production environment. Companies must create a working environment free of risks for the operator and need to develop HRC solutions where the personal safety of the operator is guaranteed. To increase safety, engineers must understand the specifications of the robot, its related risks and the function of safety applications. For multiple years, companies and research institutes have been working on developing industrially viable solutions.

Scania is a Swedish heavy vehicle manufacturer. To implement HRC cases on the production line at Scania in the future, there is an urgent need to increase the knowledge about safety. At the moment, a wide gap between research and industrial installations of HRC at Scania exists. The power- and force-limiting safety functions integrated into collaborative robots such as the KUKA lbr iiwa are not sufficient for Scania's internal level of safety. The reason is that the technical specification of the International Organization for Standardization, namely ISO/TS 15066, prohibits head collision in transient contact, see chapter 2.2.3. Therefore, additional safety technologies, such as a laser scanner, must be installed to protect hazardous areas and force the robot to an emergency stop upon a safety violation. Most papers develop new safety methods and only very few discuss using commercially available technologies in an industrial use case while complying with ISO/TS 15066. There is a need to describe a safety system design in human-robot collaboration and its detailed implementation for an industrial use case in compliance with ISO/TS 15066.

1.3 Research questions

The objective of this thesis work is to design a safety system for an HRC demonstrator case, the seat assembly of a pedal car, in the Smart Factory at Scania. The following research questions are addressed:

Research question 1: What are the safety requirements for HRC defined by the standards?

Research question 2: What safety technologies are available and relevant to best improve safety in an HRC assembly installation?

Research question 3: How can a safety system be designed to comply with the existing standards and regulations and to guarantee a safe environment for the operator in a specific HRC installation such as in the Smart Factory at Scania?

Research question 4: How can this safety system be configured to integrate internal safety functions and external safety equipment?

This thesis work is conducted at the department Global Industrial Engineering Development at Scania in Södertälje, Sweden, which is responsible for developing and implementing new methods and systems within the area of digitalization. The demonstrator case is implemented at the pedal car line in the Smart Factory, a truck and bus production lab.


1.4 Research approach

The first task in the research process is to perform a literature review in the area of HRC focused on safety system design, along with a detailed study of the safety standards governing robotic systems. The following keywords are used: human-robot collaboration, HRC and safety, human-robot interaction and safety. The aim of the literature review is to get familiar with the topic, to understand the state of the art of HRC and to identify a research gap. At the same time, the available hardware and software are studied, i.e. the robot and the laser scanner, to identify missing hardware such as a safety relay and a power supply, and to understand the safety functionality of the devices. In the next step, a demonstrator case is chosen to have a proof of concept of the developed safety design. A first broad risk assessment is carried out, and in combination with the technical knowledge of the devices, four safety system designs are developed. Throughout the process, implementation difficulties caused by the limitations of the laser scanner become clear and first implementation alternatives are developed. In the detailed risk assessment, the ideas from the proposed safety system designs are considered as risk reduction measures and a final feasible safety system is designed. This safety concept is implemented using the created workaround solutions. In an iterative process, the risk assessment is re-evaluated based on the previous safety concept. The iterative process ends when the safety engineers at Scania are convinced of the safety level of the solution and their design recommendations are implemented.

Figure 1: Method of this thesis work


1.5 Delimitations

The focus of this thesis work is on the safety aspects of industrial robots used in manufacturing, meaning other robot types such as social robots are out of scope. The improvement of ergonomics and productivity is considered while designing the demonstrator case and the safety concept, but it is not analyzed. The safety system design and its configuration are developed for a selected use case in the Smart Factory at Scania and must comply with Scania's safety perspective. Nevertheless, it is possible to use the implementation details for other HRC cases. The lightweight robot used in this project is the KUKA lbr iiwa. International and European standards are applied. This thesis work is limited to the laser scanner MicroScan3 Core with inputs/outputs (I/O) from SICK, as a certified safety technology which meets at least performance level d, see chapter 2.2.2.

1.6 Outline

Chapter 1 presents the background of the research, the research problem as well as the research questions and the delimitations. Chapter 2, as the frame of reference, provides an overview of human-robot collaboration and explains the current guidelines for the safety of collaborative industrial robots. Different safety technologies and approaches to design a safety system are presented. In chapter 3, several safety system designs are proposed in theory. In chapter 4, the applied methodology is outlined. This includes a description of the industrial use case where the proof of concept is realized, and the available hardware and software. The process towards the final safety system design, its implementation details, and its analysis are presented in chapter 5. In chapter 6, this thesis work is concluded by answering the research questions and by giving recommendations for future work.


2 Literature review

This chapter, as the frame of reference, summarizes existing knowledge and provides a state-of-the-art study of HRC, safety, and safety technologies.

2.1 Human-robot collaboration

HRC combines the concepts of manual assembly and fully automated production. An industrial robot is defined as an "automatically controlled, reprogrammable multipurpose manipulator, programmable in three or more axes" [9]. A collaborative robot, on the other hand, is "designed for direct interaction with a human within a collaborative workspace", meaning a common workspace where the robot and a human work simultaneously [10]. In comparison to a traditional robot system, "operators can work in close proximity to the robot system while power to the robot's actuators is available, and physical contact can occur within a collaborative workspace" [11]. As a result, collaborative workstations bring together the best characteristics of robots and humans: human cognitive and sensorimotor skills, and the precision, speed and strength of the robot [3].

2.1.1 Levels of interaction

Multiple definitions for levels of interaction between humans and robots exist because the classification is not standardized. Current research papers are still redefining classifications. To ease the design and evaluation of HRC systems in general, and of safety systems in particular, an exactly defined terminology is important. In industry as well as in literature, it is common practice to refer to HRC for all levels of interaction where human and robot work alongside each other in a fenceless environment. In this thesis work, the term HRC refers to all levels of interaction and specific modes are particularly highlighted, e.g. HRC collaboration for the level of collaboration. This chapter outlines a selection of classifications to attain the one used in this thesis work, see chapter 5.1. The frameworks presented below partly coincide with, differ from or even contradict each other.

In one of the first definitions, Helms et al. identified four types of interaction between a human and an industrial robot [2]:
- Independent: The human and the robot work on different workpieces as in a traditional robotic cell.
- Synchronized: The human and the robot work consecutively on the same workpiece in separate areas.
- Simultaneous: The human and the robot work on the same workpieces at the same time without having physical contact.
- Supportive: The human and the robot work cooperatively together on the same workpiece.

In previous research, Krüger et al. introduced the term hybrid assembly systems, where robots perform simple tasks and humans work on complex, highly varying tasks [3]. They divide hybrid assembly systems into either workplace sharing systems or workplace and time-sharing systems. In comparison to workplace sharing systems, where human and robot perform separate tasks, i.e. one entity is performing a handling task and the other one an assembly task, they complete a task together in workplace and time-sharing systems [3].

Aaltonen et al. proposed a new framework based on a literature review and case study examples [12]. They suggested four levels of HRC which are further divided into sublevels through the factors workspace sharing, robot's activity when the human is present, type of joint effort, and physical contact. This framework includes a wider variety of possible forms of HRC but entails a high level of detail and complexity.

Wang et al. classified the relationship between human and robot based on five characteristics [13]:
1. Shared workspace: working in the same workspace without fences.
2. Direct contact: direct physical contact between both entities exists.
3. Shared work task: working on the same task but not simultaneously.
4. Simultaneous process: working at the same time on the same or a different task.
5. Sequential process: working one after another, meaning in sequential order.

Every type of human-robot relationship must fulfill certain characteristics. According to Wang et al., coexistence is achieved with feature 4 (different task) and interaction with features 1 and 3. Cooperation requires features 1 to 3 and 5, and for collaboration, characteristics 1 to 4 must be fulfilled. In contrast to cooperation, the human and the robot complete the same task simultaneously in collaboration [13].

Bender et al. distinguished between five levels of interaction [4], see Figure 2:
- Cell: As in a conventional robotic cell, the robot is surrounded by a fence and an interaction with the human does not exist.
- Human-robot coexistence: The human and the robot work next to each other in separated workspaces without fences.
- Human-robot synchronization: The human and the robot work on the same product in an overlapping workspace without being there at the same time.
- Human-robot cooperation: The human and the robot work in a shared workspace at the same time but do not perform tasks simultaneously on the same product.
- Human-robot collaboration: The human and the robot simultaneously perform a task on the same product in a shared workspace.

Figure 2: Levels of interaction between the robot and the human [4]


Behrens et al. developed a taxonomy consisting of three characteristics and four levels of interaction [14]. Their characteristics build upon each other with an increasing level of interaction and are as follows, see Figure 3: shared workspace as a spatial but not time-related condition, simultaneous co-work as working in the same shared workspace at the same time, and physical contact between the robot and the operator as working directly together. According to Behrens et al., both entities have their own workspace in coexistence. In sequential cooperation, the human and the robot work in a shared workspace on the same workpiece one after the other, meaning the robot stops when the human enters the shared workspace. In parallel cooperation, the robot and the human perform tasks simultaneously on the same product without physical contact. In collaboration, both are working directly together on the same product while having physical contact [14].

Figure 3: Flowchart to determine the levels of interaction based on three characteristics [14]
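
To make the flowchart logic concrete, the following minimal sketch evaluates the three characteristics of Behrens et al. in the order of Figure 3. The class, method and enum names are chosen for illustration only and are not part of the cited taxonomy.

// Minimal sketch of the Behrens et al. flowchart in Figure 3; all names are illustrative only.
public final class InteractionLevelClassifier {

    public enum Level { COEXISTENCE, SEQUENTIAL_COOPERATION, PARALLEL_COOPERATION, COLLABORATION }

    // The three characteristics build upon each other: a later one only matters
    // if all previous ones are fulfilled.
    public static Level classify(boolean sharedWorkspace,
                                 boolean simultaneousCoWork,
                                 boolean physicalContact) {
        if (!sharedWorkspace) {
            return Level.COEXISTENCE;            // separate workspaces
        }
        if (!simultaneousCoWork) {
            return Level.SEQUENTIAL_COOPERATION; // shared workspace, but one after the other
        }
        if (!physicalContact) {
            return Level.PARALLEL_COOPERATION;   // simultaneous work without contact
        }
        return Level.COLLABORATION;              // working directly together with physical contact
    }

    public static void main(String[] args) {
        // Example: shared workspace and simultaneous work, but no physical contact
        System.out.println(classify(true, true, false)); // PARALLEL_COOPERATION
    }
}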

The framework proposed by Behrens et al. is used in this thesis work because it is easily understandable, a combination of other definitions, and mapped to the safeguarding modes of ISO/TS 15066, see chapter 2.2.5. The taxonomies of Behrens et al. and Bender et al. are very similar and were merely developed at different institutes, i.e. Fraunhofer IFF and IAO. Bender et al. distinguish between cooperation and collaboration through performing a task simultaneously on the same workpiece, whereas Behrens et al. consider physical contact between the human and the robot. Both concepts are very similar, and it depends on the point of view whether physical contact is required to work simultaneously on the same workpiece or not. Behrens et al. conclude that physical contact is the requirement for collaboration but do not define how physical contact can be achieved. The remaining differences between both taxonomies are the name for the second level of interaction, where sequential cooperation equals human-robot synchronization, and that Bender et al. chose a visual representation whereas Behrens et al. chose a flowchart characterization.

In all classifications, the level of interaction increases towards the goal of collaboration. In fact, the more the interaction mode differs from full collaboration in time and space, the more the benefits resulting from HRC decrease. During coexistence in particular, the ergonomic benefit disappears entirely. Therefore, companies aim to find a balance between sufficient safety for the operator and a high level of interaction.


2.1.2 Industrial applications

To illustrate the wide variety of application areas, this subchapter outlines example cases of HRC implemented in production. Currently, industrial applications are mainly found in the automotive industry. The first HRC cooperation at BMW Group was installed in Spartanburg in the United States in the door assembly [15]. An operator attaches the adhesive bead on the door and a robot performs the force-intensive pressing of the sound and moisture insulation [15]. The low speed of the robot and sensors which stop the robot when a person is detected guarantee safety. More information about safety is not given. Another HRC cooperation is installed at Audi in Ingolstadt, Germany for mounting CFRP roofs [16]. The operator places the roof on a rotary table, activates the adhesive application process performed by a collaborative robot from Universal Robots and installs the roof with the aid of a handling device. The company ensures safety through integrated collision detection, an emergency stop button and an illuminated LED on the robot arm which signals the system state, i.e. red or green. The operator controls the process the entire time because a button has to be pressed and held to trigger the robot action [16]. A collaborative application was introduced at the BMW production site in Dingolfing, Germany [17]. The collaborative robot KUKA lbr iiwa installs the heavy housing for front axle gearboxes and the operator inserts washers and the housing cover. The following safety measures are taken: collision detection with the integrated joint-torque sensors on every axis, required task confirmation with a button for each completed step and a protective sheath to cover the gripper [17].

2.2 Safety in human-robot collaboration

2.2.1 General safety aspects

Robots are powerful machines with a large operating range which can perform unanticipated movements because of programming errors. In addition to the risks presented by the robot, the unpredictability of human behavior must be considered [6]. Compared to conventional robotic cells, where safeguards are installed to protect people in close vicinity of potential hazards, the risks for the operator drastically increase when humans and robots work in the same workspace without spatial separation. Therefore, a risk assessment and risk reduction measures such as safety devices are required to reduce risks to an acceptable level. According to ISO/TS 15066, "any collaborative robot system design requires protective measures to ensure the operator's safety at all times during collaborative robot operation" [11]. Due to the specific safety needs of each HRC use case, an individual safety concept for every application is required [4, 6]. The challenge in safety system design is to guarantee safety at any time for the operator while having only a few interruptions of the robot to achieve high operational efficiency [4]. Exceeding the safety requirements results in frequent robot stops and highly reduces operational efficiency, making HRC applications not cost-effective. Bender et al. recommend that companies should aim for an adequate level of safety [4]. In general, system integrators tend to install unnecessary safety measures to be "on the safe side" because they lack experience in identifying hazards [18]. This results in overly safe systems which reduce speed too much and therefore increase the duration of the robot task, presenting a negative effect on efficiency [18]. The process of developing a safety concept for HRC can be divided into several steps: use case selection, use case simulation, risk assessment, and selection of safety measures.


2.2.2 Risk reduction process

The Machinery Directive 2006/42/EC demands that personal safety be ensured in the manufacturing industries. Every HRC system must be certified in the form of a CE declaration of conformity in line with the Machinery Directive before it can be used by the operators. The Machinery Directive 2006/42/EC classifies robots as partly completed machinery. Each robotic cell poses unique risks; hence the tool, the manipulated workpiece, the production environment and the operations must be considered during the design of the safety system. For example, stricter safety measures are required for parts with sharp edges or for a heavy tool [19, 20]. Therefore, the risk assessment and the CE marking have to be done on the full installation, meaning in the context of the intended application, i.e. with the part, end effector, sensors, tasks and environment [21]. The certification is carried out by a system integrator, an internal safety engineer and a professional association [4]. The system integrator performs a system-related risk analysis whereas the company having the HRC installation performs an application-related risk assessment [4]. The safety engineer carries out the final risk assessment, mostly with the help of a professional association [4]. The system integrator is responsible for installing the robot, designing the safety system and implementing the safety functions as defined in the risk assessment [22]. Further, the system integrator issues the EC declaration of conformity and writes the operating instructions for the system [22].

ISO 12100 specifies the principles of the risk assessment and the risk reduction process to achieve safety of machinery [23]. The risk reduction process is shown in Figure 4. In the first step of the risk assessment, which is the risk analysis, the robot system is specified, the hazards associated with the collaborative robot system are identified and the risks for each hazard are estimated based on their severity of harm and probability of occurrence. In addition to the intended use, reasonably foreseeable misuse must be considered, e.g. the operator bending down to tie his shoelaces. Next, the risks are evaluated and a decision is made whether risk reduction measures are required. If so, protective measures are taken to reduce risks that are too high. In the first attempt, the risks should be reduced by adapting the design of the robot and the interaction to be inherently safe. Afterwards, the use of safeguarding and complementary protective devices is investigated. Considering these measures, an iterative process from hazard identification to risk reduction measures starts until all hazards are brought down to an acceptable level.

Figure 4: Steps of the risk reduction process according to ISO 12100


To support system integrators in the automatic hazard identification and the selection of safety measures, Awad et al. developed a new computer-aided design method using model-driven risk assessment and an Eclipse plug-in [18]. The system proposes safety measures with regard to the investment cost and the impact on time and flexibility [18]. Another approach was taken by Marvel et al., who developed a methodology for the risk assessment of collaborative robot systems which evaluates risks based on collaborative tasks instead of hazards associated with the environment [5]. The risk assessment occurs offline during the initial design stage. In detail, their methodology is as follows: process tasks are split into subtasks to identify hazards and to determine the potential risk severity of each step from level 1 to 4. With the help of a risk matrix, the highest-priority hazards of level 4 are identified and mitigated first. Mitigation actions minimize either the risk in terms of likelihood or the hazard impact, meaning the potential severity. According to Marvel et al., this can be achieved by two mechanisms: 1) modification of the exposure to the risk by e.g. an alternative collaboration type, a different tool, changed subtasks or additional safeguards, or 2) reduction of the impact by e.g. personal protective equipment, adapted forces and speeds, or improved workstation design through padding and smooth edges. Marvel et al. recommend that the risk levels for a collaborative task should be at most level 1.
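
As an illustration of how such a task-based risk assessment could be organized, the sketch below assigns a risk level to each subtask from an assumed severity/likelihood matrix and orders the subtasks so that the highest-priority hazards are mitigated first. The matrix values, class names and example subtasks are assumptions made for this sketch and are not taken from Marvel et al. or ISO/TS 15066.

import java.util.Comparator;
import java.util.List;

// Illustrative sketch of a task-based risk matrix; all values are placeholder assumptions.
public final class TaskRiskMatrix {

    /** A subtask with an estimated severity and likelihood, each rated from 1 (low) to 4 (high). */
    public record Subtask(String name, int severity, int likelihood) {}

    // Assumed risk matrix: rows = severity 1..4, columns = likelihood 1..4.
    private static final int[][] RISK_LEVEL = {
            {1, 1, 2, 2},
            {1, 2, 2, 3},
            {2, 2, 3, 4},
            {2, 3, 4, 4},
    };

    public static int riskLevel(Subtask task) {
        return RISK_LEVEL[task.severity() - 1][task.likelihood() - 1];
    }

    /** Orders subtasks so that the highest-priority hazards (level 4) come first. */
    public static List<Subtask> mitigationOrder(List<Subtask> tasks) {
        return tasks.stream()
                .sorted(Comparator.comparingInt(TaskRiskMatrix::riskLevel).reversed())
                .toList();
    }

    public static void main(String[] args) {
        List<Subtask> tasks = List.of(
                new Subtask("robot presses part while operator's hand is nearby", 4, 3),
                new Subtask("operator inserts washers next to stopped robot", 2, 2));
        mitigationOrder(tasks).forEach(t ->
                System.out.println(t.name() + " -> risk level " + riskLevel(t)));
    }
}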

2.2.3 Safety standards

This chapter addresses the current safety legislation by analyzing the relevant international and European standards. The International Organization for Standardization published ISO 10218 in 2011 as an international standard for the safety requirements of industrial robots and robotic devices. Part 1 covers the safety of the robot itself, i.e. hardware and software requirements for the robot and its controller, and part 2 focuses on the robot system and the integration requirements. The technical specification ISO/TS 15066 serves as a guideline for safe collaborative operations between an operator and a robot and supports ISO 10218. ISO/TS 15066 details four concepts for safety system design, which can be used singly or in combination. The source standard for calculating the protective separation distance for the methods safety-rated monitored stop and speed and separation monitoring is ISO 13855. Table 1 gives an overview of the ISO standards and EU directives used in this thesis work. The most important points of ISO 10218 and ISO/TS 15066 are outlined below.

2006/42/EC: Machinery Directive
ISO 12100: Safety of machinery – General principles for design – Risk assessment and risk reduction
ISO 10218-1: Robots and robotic devices – Safety requirements for industrial robots – Part 1: Robots
ISO 10218-2: Robots and robotic devices – Safety requirements for industrial robots – Part 2: Robot systems and integration
ISO/TS 15066: Robots and robotic devices – Collaborative robots
ISO 13855: Safety of machinery – Positioning of safeguards with respect to the approach speeds of parts of the human body
Table 1: Relevant standards

ISO 10218 requires the following stopping functions for every robot system, which must comply with the requirements defined in detail in IEC 60204-1 [9]: robots need a manually initiated and independent emergency stop of category 0 or 1 and at least one protective stop of category 0, 1 or 2. The stop categories are determined by the risk assessment. IEC 60204-1 defines them as follows: stop category 0 removes the power from the robot actuators immediately, and as a result the stopping path cannot be controlled; stop category 1 reduces the velocity in a controlled stop before activating the brakes; and stop category 2 stops the robot and then actively controls the motor in standstill position while drive power is active [9]. In case of an emergency, the operator presses the emergency stop button; as a result, the power is removed from the robot actuators and a manual reset is necessary. In contrast, the protective stop is automatically initiated by an external protective device when the protective separation distance is violated. It can either remove (category 0 or 1) or control (category 2) the power to the robot actuators and can be reset automatically. For collaborative operations, ISO 10218 requires a safety-rated monitored stop, which is the "condition where the robot is stopped with drive power active, while a monitoring system with a specified sufficient safety performance ensures that the robot does not move" [9]. ISO 10218 and ISO/TS 15066 do not define the difference between a protective stop and a safety-rated monitored stop. Based on their descriptions, it is concluded that a safety-rated monitored stop is the same as a protective stop. Additional safety functions upon which a protective stop is initiated are: a safety-rated monitored speed function, which monitors that the robot speed does not exceed the safety-rated reduced speed limit of 250 mm/s, and a safety-rated soft axis and space limiting function, where the robot is programmed to either not enter or not exit a defined space [9]. ISO 10218 requires a visual indication when the robot is in collaborative mode and a safe design of the changeover point between autonomous and collaborative operations [9].

Additionally, ISO 10218 states that all protective devices which perform safety functions, such as light curtains, laser scanners, and pressure-sensitive mats, have to comply with at least one of the following performance requirements [10]: PL (performance level) d and category 3 in accordance with EN ISO 13849-1 and/or SIL (safety integrity level) 2 in accordance with IEC 62061. PL is divided into five levels from a to e. Safety-related parts of control systems with PL e provide the highest contribution to risk reduction and meet the lowest probability of dangerous failure per hour [24]. SIL, ranging from level one to four, where level four represents the highest safety integrity, defines the reliability of devices for risk reduction based on the failure rate [25].

The four modes of collaborative operations were first introduced in ISO 10218 and have been further detailed in ISO/TS 15066. Collaborative operations should include one or more of the following four safety methods [11]: 1) Safety-rated monitored stop, 2) Hand-guiding, 3) Speed and separation monitoring (SSM) and 4) Power- and force-limiting (PFL). ISO/TS 15066 defines them in detail as follows [11]:

1) Safety-rated monitored stop: In this method, the robot is not allowed to enter the collaborative area as long as an operator is inside this area. When a safety-rated device such as a laser scanner detects the presence of a human in the collaborative workspace, the robot movement is immediately stopped by activation of the safety-rated monitored stop function. The robot can automatically resume its motion after the human has exited the collaborative area.

2) Hand-guiding: By activating an enabling device, a human can manually move the robotic end-effector up to a defined speed limit, enforced by a safety-rated monitored speed function. In all other situations where the human is present in the collaborative workspace and does not manually control the robot, i.e. before the human enters the collaborative area and after completing the hand-guiding task, a safety-rated monitored stop is active. The robot can automatically resume its motion after the human has exited the collaborative area. If the risk assessment requires a limit for the robot motion, safety-rated soft axis and space limiting functions have to be integrated.

3) Speed and separation monitoring: In this method, the human and the robot work simultaneously in a shared workspace but do not come into contact with each other. The robot constantly monitors a minimum separation distance to the human and adjusts its speed to predefined separation distance thresholds, enforced by a safety-rated monitored speed function. Instead of or in addition to a speed reduction, the robot can adapt its motion, e.g. by moving backwards, to increase the distance to the human again. When the protective separation distance is violated by the operator, the robot immediately stops. After a safety-rated monitored stop, the robot can automatically resume its motion when the protective separation distance is kept again. The robot speed is directly correlated with the protective separation distance, i.e. a slower speed results in a reduced protective separation distance. The distance threshold can be either static, meaning the distance is calculated once for the maximum programmed speed, or dynamic, meaning the protective separation distance is continuously recalculated based on the speed at the current point in time. The distance thresholds and the robot's response are programmed in the safety configuration. If the risk assessment requires a limit for robot motion, safety-rated soft axis and space limiting functions have to be integrated. The benefit of SSM is that a speed reduction delays the time point of a robot stop.
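
To illustrate how such a monitoring cycle could look in principle, the sketch below compares the measured separation distance against two static thresholds, a slow-down zone and a stop zone, in every cycle. The interfaces DistanceSensor and RobotCommand as well as all threshold values are hypothetical placeholders and do not represent the interfaces of the KUKA lbr iiwa or the laser scanner used later in this thesis.

// Illustrative sketch of one speed and separation monitoring cycle.
// DistanceSensor and RobotCommand are hypothetical interfaces; the thresholds are placeholders.
public final class SpeedAndSeparationMonitor {

    interface DistanceSensor { double separationDistanceMm(); }

    interface RobotCommand {
        void setSpeedOverride(double fraction); // 0.0 .. 1.0 of the programmed speed
        void protectiveStop();
        void resume();
    }

    private static final double STOP_ZONE_MM = 1200; // protective separation distance
    private static final double SLOW_ZONE_MM = 2000; // reduced speed delays the stop

    private final DistanceSensor sensor;
    private final RobotCommand robot;

    SpeedAndSeparationMonitor(DistanceSensor sensor, RobotCommand robot) {
        this.sensor = sensor;
        this.robot = robot;
    }

    /** One monitoring cycle: stop, slow down or run at full speed depending on the distance. */
    void monitorCycle() {
        double distance = sensor.separationDistanceMm();
        if (distance < STOP_ZONE_MM) {
            robot.protectiveStop();       // protective separation distance violated
        } else if (distance < SLOW_ZONE_MM) {
            robot.setSpeedOverride(0.3);  // reduced speed within the slow-down zone
            robot.resume();
        } else {
            robot.setSpeedOverride(1.0);  // full programmed speed
            robot.resume();
        }
    }

    public static void main(String[] args) {
        // Example wiring with dummy implementations for illustration.
        SpeedAndSeparationMonitor monitor = new SpeedAndSeparationMonitor(
                () -> 1500.0,
                new RobotCommand() {
                    public void setSpeedOverride(double fraction) { System.out.println("override " + fraction); }
                    public void protectiveStop() { System.out.println("protective stop"); }
                    public void resume() { System.out.println("resume"); }
                });
        monitor.monitorCycle(); // prints "override 0.3" and "resume" for a distance of 1500 mm
    }
}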

The protective separation distance $S_p$ at the current time $t_0$ is calculated with Equation 1 [11]:

$$S_p(t_0) = S_h + S_r + S_s + C + Z_d + Z_r = \int_{t_0}^{t_0+T_r+T_s} v_h(t)\,dt + \int_{t_0}^{t_0+T_r} v_r(t)\,dt + \int_{t_0+T_r}^{t_0+T_r+T_s} v_s(t)\,dt + C + Z_d + Z_r$$

Equation 1: Protective separation distance based on ISO/TS 15066

where:

$S_h$ = operator's change in location, meaning the distance the operator travels while the robot is stopping (mm)

$S_r$ = distance travelled by the robot during the reaction time of the robot system, before braking is initiated (mm)

$S_s$ = stopping distance of the robot while the robot is braking (mm)

$C$ = intrusion distance based on the expected reach of a body part into the safety area before the laser scanner detects the human's leg. ISO 13855 recommends 850 mm for an outstretched arm (mm)

$Z_d$ = position uncertainty of the operator, as the measurement tolerance of the sensing devices (mm)

$Z_r$ = position uncertainty of the robot (mm)

$T_r$ = reaction time of the robot system, including the required time for detecting the operator's position with e.g. a laser scanner, signal processing and stop activation, but excluding the time until the robot has stopped (s)

$T_s$ = stopping time of the robot, from the activation of the stop signal until the standstill of the robot. It is a function of robot speed, load and motion path (s)

$v_h$ = directed speed of the operator towards the robot, which can be either positive or negative depending on the direction of the movement (mm/s)

$v_r$ = directed speed of the robot towards the operator, which can be either positive or negative depending on the direction of the movement (mm/s)

$v_s$ = speed of the robot while stopping, i.e. from the stop signal until the robot has halted (mm/s)

The calculated protective separation distance mainly depends on the stopping values $S_s$, $T_r$, $T_s$ and $C$. Especially for faster and larger robots with longer stopping times, the safety distance increases [26].
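
For the static case, where the speeds are assumed to be constant, the integrals reduce to simple products. The following sketch evaluates this simplified form of Equation 1; the numeric values in the example are placeholders only and have to be replaced by the results of the risk assessment and the measured stopping behavior of the robot (the operator speed of 1600 mm/s is the walking speed commonly taken from ISO 13855).

// Minimal sketch of the static (constant-speed) form of Equation 1.
// All numeric values are placeholders, not the values used in the demonstrator case.
public final class ProtectiveSeparationDistance {

    /**
     * Static protective separation distance in mm, assuming constant speeds:
     * Sp = vH * (Tr + Ts) + vR * Tr + Ss + C + Zd + Zr
     *
     * @param vH operator speed towards the robot (mm/s), e.g. 1600 mm/s from ISO 13855
     * @param vR robot speed towards the operator (mm/s) before the stop is triggered
     * @param tR reaction time of the robot system incl. detection and signal processing (s)
     * @param tS stopping time of the robot (s)
     * @param sS stopping distance of the robot (mm)
     * @param c  intrusion distance (mm), e.g. 850 mm for an outstretched arm
     * @param zD position uncertainty of the operator, i.e. the sensor tolerance (mm)
     * @param zR position uncertainty of the robot (mm)
     */
    public static double staticSp(double vH, double vR, double tR, double tS,
                                  double sS, double c, double zD, double zR) {
        double sH = vH * (tR + tS); // operator movement while the system reacts and the robot stops
        double sR = vR * tR;        // robot movement during the reaction time
        return sH + sR + sS + c + zD + zR;
    }

    public static void main(String[] args) {
        // Placeholder values for illustration only.
        double sp = staticSp(1600, 250, 0.1, 0.4, 120, 850, 70, 10);
        System.out.printf("Protective separation distance: %.0f mm%n", sp);
    }
}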

Marvel et al. calculated the robot and sensor reaction times $T_r$ and $T_s$ based on the robot stop position and its velocity [27]. They calculate the stop position of the robot as the sum of the travel distances of the robot during sensor activation and robot stop. In several test runs, the robot stop positions at stop activation (through sensors) and in standstill are measured [27]. This formula was applied by Belingardi et al. to find values for $T_r$ and $S_s$ [28]. They determined the stopping positions of the robot for different velocities in a simulation environment of an HRC case. Additionally, the measurement uncertainty $Z_d$ of human presence sensing devices is not yet standardized. Therefore, Marvel et al. recommend using the measurement uncertainty values of active opto-electronic protective devices, as defined in IEC 61496-2 [27].

4) Power- and force-limiting: In this method, physical contact between the robot and the human in the collaborative area is allowed. The risk reduction is achieved by limiting the resulting impact in case of a collision to avoid any harm to the operator. Collaborative lightweight robots can limit forces, velocities and axis range to not exceed predefined thresholds through integrated power- and force-limiting functions.

In Annex A of ISO/TS 15066, biomechanical threshold limits for quasi-static and transient collisions are suggested which should prevent the occurrence of even minor injuries to the human. For each of the 12 defined body regions, e.g. hand/finger, lower arm/wrist joint and chest, the corresponding maximum pressure and force values are given. A distinction is made between transient contact, a dynamic impact where the body part can move freely under the robot's force, and quasi-static contact, where the robot crushes a human body part against a fixed object. Under transient impact, the force can be two times higher than under quasi-static impact. In transient contact, head collision, i.e. contact between the robot and the operator's face, skull and forehead, is prohibited under any circumstances. With these maximum pressure and force thresholds, the transfer energy in a quasi-static collision between the robot and the human can be calculated. The transfer energy highly depends on the size of the contact area during the collision. The transfer energy determines the speed limit for the robot in the collaborative workspace which keeps the impact at an acceptable level, causing at most minor injuries. The recommended robot speed during transient contact can be calculated with Equation 2 from ISO/TS 15066 for every contact scenario with the respective maximum permissible pressure value [11]:


$$v_{rel,max} = \frac{p_{max} \cdot A}{\sqrt{\left(\frac{1}{m_h} + \frac{1}{m_r}\right)^{-1} \cdot k}}$$

Equation 2: Maximum permissible speed limit for transient contact

where:

$v_{rel,max}$ = maximum permissible robot speed (mm/s)

$p_{max}$ = maximum permissible pressure value (N/cm²)

$A$ = smallest contact area between the robot and the body region during collision (cm²)

$m_h$ = effective mass of the colliding body region (kg)

$m_r$ = effective mass of the robot (kg)

$k$ = effective spring constant to consider the deformation of the colliding body area and its energy absorption (N/mm)
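
A small numeric sketch of Equation 2 is given below. All input values are placeholders chosen for illustration; for a real contact scenario, the pressure, mass and spring constant values have to be taken from ISO/TS 15066 Annex A and the effective robot mass from the robot data, and the resulting speed limit has to be verified by force measurements as described next.

// Minimal sketch of Equation 2; the numeric inputs are placeholders, not Annex A values.
public final class TransientContactSpeedLimit {

    /**
     * Maximum permissible robot speed for transient contact in mm/s.
     *
     * @param pMaxNPerCm2 maximum permissible pressure for the body region (N/cm^2)
     * @param areaCm2     smallest contact area between robot and body region (cm^2)
     * @param mHumanKg    effective mass of the colliding body region (kg)
     * @param mRobotKg    effective mass of the robot (kg)
     * @param kNPerMm     effective spring constant of the body region (N/mm)
     */
    public static double vRelMax(double pMaxNPerCm2, double areaCm2,
                                 double mHumanKg, double mRobotKg, double kNPerMm) {
        double maxForceN = pMaxNPerCm2 * areaCm2;                        // F = p_max * A
        double reducedMassKg = 1.0 / (1.0 / mHumanKg + 1.0 / mRobotKg);  // mu = (1/mH + 1/mR)^-1
        double kNPerM = kNPerMm * 1000.0;                                // convert N/mm to N/m
        double vMetersPerSecond = maxForceN / Math.sqrt(reducedMassKg * kNPerM);
        return vMetersPerSecond * 1000.0;                                // convert m/s to mm/s
    }

    public static void main(String[] args) {
        // Placeholder values for a hand contact scenario, for illustration only.
        System.out.printf("v_rel,max = %.0f mm/s%n", vRelMax(280.0, 2.0, 0.6, 15.0, 75.0));
    }
}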

Instead of, or rather in addition to, calculating the speed limit values, the actual collision forces for every possible hazard should be measured with force-torque sensors on the colliding body part to ensure that the biomechanical limits are not exceeded.

The biomechanical threshold limits are kept through active or passive risk reduction measures [11]. Active design measures focus on the control system of the robot, e.g. force and speed limiting functions, the safety-rated monitored stop and the safety-rated soft axis and space limiting function. Passive design measures, however, focus on the mechanical design of the robot system, such as lightweight structures to reduce the consequences of a collision and rounded edges to increase the contact surface area.

2.2.4 Discussion of safety standards

As already mentioned in chapter 1.1, industrial HRC installations are limited by the current safety legislation. The existing standards provide guidance rather than clear definitions [29] and do not clarify what sufficient safety for the operator in collaborative operations is [30]. They are loosely defined and complex [18]. A high interdependency exists [18], meaning the standards build on each other so that it takes longer to look up the related definitions from previous standards. Further, different use case scenarios are not specified, which makes the current standards too general and allows for multiple possible interpretations [30]. As a result, companies cannot follow defined guidelines and, depending on the company's internal safety level, they interpret the recommendations in ISO/TS 15066 differently.

The guideline is not yet a standard because further adjustments are required. For example, it is open to interpretation which human features should be detected by vision systems [29]. There is a lack of clarity on how compliance with the biomechanical limits for the PFL mode in ISO/TS 15066 can be verified [4]. In addition, the values only consider the intended use case, and values for unintended contact due to foreseeable misuse are missing [14]. Another problem is that ISO/TS 15066 forbids "objects with sharp, pointed, shearing or cutting edges, such as needles, shears, or knives, and parts which could cause injury [...]" in the contact area for the PFL method [11]. One major issue is that this guideline prohibits head collision in transient contact. As a result, only cooperation and coexistence, and not close collaboration, can be implemented safely, which contradicts the vision of HRC. To sum up, ISO/TS 15066 does not associate each level of interaction with required safety functions [29] and does not clearly define in what way compliance with the safeguarding modes can be achieved, e.g. by tailored sensor technologies for certain safeguarding modes [14].

An interesting experiment using ISO/TS 15066 was conducted by Rosenstrauch et al., who exemplified that the defined threshold limits are too high [30]. They applied the PFL principle to a use case and discussed the residual risks for the operator despite following the guideline. A pick-and-place operation is analyzed where a clamping risk for a hand between the table and the gripper exists, representing a hazardous situation with quasi-static contact. The allowed speed is calculated with the equations and the pressure and force limits given in ISO/TS 15066. To prove that serious injury can occur despite observance of these threshold limits, Rosenstrauch et al. simulated the scenario with pork belly skin and measured the actual forces with a force-torque sensor. Even though only 60% of the allowed clamping force was applied, severe damage to the skin could be seen. The results of this experiment contradict the objective of ISO/TS 15066 that limit adherence prevents soft tissue penetration such as bloody wounds. Reasons for this might be: threshold limits for the maximum permissible pressure during quasi-static contact that are too high and were derived from only one study, the exclusion of sharp objects, and unknown factors such as fast acceleration of human speed [30]. Rosenstrauch et al. conclude that a "gap between a feasible guideline for safety in HRC and the resulting real safety" exists.

Marvel et al. proposed a set of metrics to evaluate and compare the effectiveness of SSM algorithms of safety systems in shared workspaces [31]. These metrics are based on the robot's mass, velocity, separation distance, and sensor uncertainty. From their experiments, they conclude that even simple SSM algorithms improve safety but do not guarantee safety under all circumstances [31]. More recently, Marvel et al. analyzed and discussed every part of the SSM protective separation distance algorithm as defined in ISO/TS 15066 and provide guidance for implementation [27]. Because the directed speed towards the robot or operator, and not the actual travel direction, is considered for the velocity terms of the human and the robot, unnecessary stops can be activated, e.g. when the robot moves away from the operator with increasing speed [27]. They conclude that its mathematical computation is defined in detail, but that the implementation depends on the system integrator and on which values are chosen for the equations. They state that the current method lacks verification and validation methodologies to ensure reliable and functional fenceless robotic work cells. Researchers are focusing on new monitoring and sensing technologies, but these are not yet ready for commercialization. According to Marvel et al., an important property of new safety systems should be high reliability in unstructured manufacturing environments.

2.2.5 Assignment of safety technologies to safeguarding modes

Behrens et al. mapped the levels of interaction to the possible safeguarding modes, see Figure 5 [14]. According to Behrens et al. all safeguarding modes except hand-guiding can be applied for coexistence because the human does not need access to the area around the robot. In sequential cooperation the workspaces of the robot and the human overlap, which means that fences cannot be installed anymore. During parallel cooperation, a safety-rated monitored stop is not feasible because the robot should move while the human is in the shared workspace. In collaboration, physical contact between the robot and the human is required and therefore only the safeguarding modes PFL and hand-guiding are possible.

Figure 5: Safeguarding modes combined with the levels of interaction [14]

Bdiwi et al. developed another classification for the interaction levels and defined the respective required safety functions [29]. They specify the human features which should be detected by vision systems and the robot parameters which should be controlled. These are for example the detection of human readiness, upper body and hands with a near-field vision system and the monitoring of robot speed, position, and torque [29].

2.3 Safety systems in human-robot collaboration

The risks which could not be mitigated by an inherently safe design in the risk reduction process, see chapter 2.2.2, are brought down to an acceptable level by additional safety technologies, referred to as sensitive protective equipment. The system integrator designs the safety system with selected safety technologies. The safety system design depends on the respective HRC application [6].

The focus of safety system design is divided into two areas: collaborative lightweight robots and technologies that enable collaborative operations for conventional industrial robots with a high payload. Collaborative lightweight robots already possess the PFL safeguarding mode, whereas non-collaborative robots, i.e. large industrial robots, have to be equipped with additional sensing technologies [27]. Nevertheless, it depends on the risk assessment of the specific use case and on the company whether a collaborative lightweight robot alone meets the safety requirements.

Safety schemes in HRC can be defined as pre-collision and post-collision systems [3, 32]. These terms are directly related to the safety strategies collision avoidance and impact force limitation which are also used in some research papers.


- Pre-collision systems: Collisions are prevented by stopping the robot motion, altering its trajectory [3, 32], alarming the operator with an audio output or moving back from the operator [33]. Workspace monitoring systems detect the position of the human with external safety sensors [3, 32].
- Post-collision systems: In case of an unexpected or unavoidable collision the harm to the human is minimized [3, 32]. This can be achieved with joint torque sensors integrated into the robot’s internal safety control systems for collision detection, which initiate a robot stop, and with mechanically compliant systems such as lightweight robot structures and robot skins, which limit the maximum transmittable energy to the operator during an impact [3, 32].

The disadvantages of post-collision systems are 1) incorrectly defined limits might result in serious accidents, especially in combination with sharp tools or parts, and 2) frequent production stops when collisions occur [32]. The advantages of a pre-collision system are 1) visual or acoustic signals can warn the operator of a coming robot stop and 2) a speed reduction can delay the point in time of a motion stop. However, external sensors require more space around the robot because of the sensor detection time. A use case can have elements of both systems.

Another, less frequently used classification is made by Michalos et al., who distinguish between three different safety strategies [19]:
- Crash safety: Collisions are made controllable or 'safe' by power- and force-limiting functions.
- Active safety: Collisions are detected in time to stop the robot. Detecting technologies are e.g. proximity sensors, vision systems, and force/contact sensors.
- Adaptive safety: Collisions are avoided without interrupting the robot operations by adjusting the trajectory of the robot, referred to as corrective actions.

2.3.1 Safety technologies in research

In research, safety is discussed from different aspects and several technologies to ensure safety in HRC as part of pre-collision and post-collision systems have been developed. The first part of this chapter summarizes research approaches implementing the safeguarding mode SSM, mainly vision-based methods, and the second part outlines collaborative robots and robot skins for the PFL mode.

A projection- and camera-based sensor system which consists of one projector and one camera mounted on the ceiling was developed by Vogel et al. [34]. A visible area is projected around the robot to visualize the safety zones. Dynamic safety zones are generated by connecting the robot controller to the monitoring system to calculate the separation distance based on the current robot joint positions and the velocity of the robot. A camera constantly monitors the projected safety zones and detects zone violations by image processing; more precisely, the system compares the real pixel positions of the camera with the expected pixel positions of the projector. The system takes into account the requirements from ISO/TS 15066 and is only slightly dependent on light conditions [34]. Later this concept, especially the robustness of the system, was further improved by Vogel et al. [35]. A bandpass filter on the camera is added to filter out all light except the light from the projector. The system is extended with virtual buttons which enable interaction with the robot, e.g. pausing the robot motion or confirming a process. The projector is also used to visualize safety-, robot- or process-specific information by projection into the workspace [35].

An approach for workspace monitoring and active collision avoidance in an augmented environment is developed by Mohammed et al. [33]. Three-dimensional (3D) models of the robot are coupled with multiple depth images of the worker to calculate the minimum distance between both objects. The images of the worker are captured by two Microsoft Kinect cameras and represent the operator as a point cloud. Based on the location of the human with respect to the robot, different safety strategies are activated. The method was tested with a non-collaborative robot [33].
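As a rough illustration of the underlying distance computation, the following sketch calculates the minimum separation between a few sampled robot points and a small operator point cloud; the arrays and values are hypothetical, and the actual system of Mohammed et al. uses full 3D robot models and depth images from two Kinect cameras.

```python
import numpy as np

def minimum_separation(robot_points, human_cloud):
    """Brute-force minimum distance between sampled robot points and an
    operator point cloud, both given as Nx3 arrays in metres."""
    diff = robot_points[:, None, :] - human_cloud[None, :, :]
    return float(np.sqrt((diff ** 2).sum(axis=-1)).min())

robot_points = np.array([[0.0, 0.0, 0.5],
                         [0.2, 0.0, 0.9]])
human_cloud = np.array([[1.5, 0.2, 1.0],
                        [1.4, 0.1, 1.6]])

d_min = minimum_separation(robot_points, human_cloud)
# The safety strategy (slow down, stop, move back) is then selected by
# comparing d_min against the configured thresholds.
print(round(d_min, 3))
```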

Rosenstrauch et al. implemented the SSM principle using a Microsoft Kinect camera, the robot operating system ROS and a UR5 [36]. The distance between the human and the camera is obtained with the time-of-flight sensor of the Microsoft Kinect camera. Based on the travel time of the emitted and reflected signal of known speed, the human is tracked in real time, and with the joint angles from the robot controller the position of the robot is determined. The combination of both values gives the shortest separation distance [36]. Another approach with a Microsoft Kinect camera is taken by Du et al., who use an Unscented Kalman Filter to incorporate measurement errors and modify the robot motion with the artificial potential field method [37].

A safety system for a handing-over task with a heavy industrial robot is developed by Bdiwi et al. [29]. They equipped a robot with two stereo cameras for monitoring the workspace, one RGBD camera as a near-field camera to detect human gestures, face and postures, and a force sensor on the robot flange to sense the hand-over forces. Based on the infringed zone (four zones are defined in total) different actions are taken: robot stop, speed reduction and tracking of human readiness before starting the hand-over task [29]. A speed control algorithm is developed to dynamically adjust the robot’s speed based on the distance between the robot and the human. Through gesture control, i.e. with hands and arms, the operator can guide the robot to an ergonomic position. This gesture control system was shown at the Hannover fair in 2019 [38] and was even implemented at a reworking station for welding tasks on car underbodies at Volkswagen in Zwickau, Germany [39].

Safeea et al. tracked the human position with a laser scanner and five inertial measurement units (IMUs) which are attached to the upper body of the human, i.e. chest, upper arms and lower arms [40]. The laser scanner captures the position of the legs, from which the relative position of the torso is derived, while the IMUs capture the upper body configuration. The minimum distance between the human and the robot, both represented by capsules built from the gathered position data, is calculated using QR factorization. Collision avoidance strategies are realized with a customized potential field method that allows adjusting the pre-defined robot paths. Safeea et al. test their proposed methodology for a pick-and-place operation with a SICK TiM5xx laser scanner, IMUs from TECHNAID and a KUKA lbr iiwa. The robot automatically adjusts its pre-planned path when the minimum distance between the robot and the human is violated and resumes its task when the human has moved away. If a collision cannot be avoided the robot stops.

A neural network-based monitoring system is developed by Rajnathsing et al. to ensure a minimum protective safe distance between the human and the robot [41]. Their system consists of a vision system with two cameras and a convolutional neural network object detector to detect the human and the robot, an artificial neural network for analyzing and drawing a conclusion on the detected situation, and a speech recognizer to enable communication in unclear situations. The benefit of such a system is that it improves over time by learning from wrong identifications [41].

Nowadays, it is still difficult to obtain reliable and accurate sensor data on the position of the human and the robot in an unstructured environment and from multiple sensors in real time, which makes it challenging to develop collision avoidance algorithms [41]. Fusing data from multiple vision-based sensors can reduce the risk of occlusion, meaning the human is hidden by another object [42]. Vision cameras are sensitive to light conditions and dust, and the camera field of view has to be free of obstacles [42]. Halme et al. reviewed fifteen vision-based safety systems to compare their current technological readiness level for industrial application [42]. They conclude that despite the affordable price, flexible installation and tailoring possibilities, and even though huge research efforts have been put into vision-based safety systems, they have not been widely implemented in real HRC scenarios and are not yet standardized. The SafetyEYE from Pilz is one of the few commercialized vision-based safety sensors, see chapter 2.3.2.

Another system for SSM, consisting of a tactile floor mat monitoring the human's position combined with projectors displaying colored safety zones, is developed by Behrens et al. [14]. These zones are adjusted in correlation with the robot speed and position [14]. To not only ensure safety for the operator but also allow for high productivity, Shin et al. combined the SSM method with the PFL method. They suggest a methodology for controlling the allowable maximum robot velocity even when the protective separation distance is violated [43]. The allowable maximum velocity is calculated using a collision model that predicts the collision peak pressure and the peak force in the event of a collision, and is compared to the allowable pressure and force thresholds from ISO/TS 15066. As a result, the allowable maximum velocity of the robot is estimated as a function of the distance between the robot and the human. So far the method has not been demonstrated in an experiment [43].

To satisfy the fourth type of collaborative operation, robot manufacturers have developed special collaborative lightweight robots which have PFL functions integrated, i.e. low speed, limited payload, lack of sharp edges and detection of collisions with another entity, e.g. the KUKA lbr iiwa [22], the UR10 by Universal Robots and YuMi from ABB. First research approaches were taken by the German Aerospace Center, which developed the LWR technology that was later transferred to KUKA Roboter GmbH, resulting in the KUKA lbr iiwa [44]. The design characteristics are e.g. torque sensors in each joint for collision detection and active vibration damping to compensate for the elasticity of the robot and to increase motion accuracy [44]. A tactile robot skin consisting of tactile sensors with pressure-dependent transistors and a layer of energy-absorbent material was developed by Fritzsche et al. to create a sensitive robot which detects any unintended contact and cushions collisions [45]. Kim et al. developed a soft skin module with a built-in airtight cavity for pressure sensing [46]. An air pressure change signals a collision and the elasticity of the material reduces the impact forces in case of a collision [46]. Cirillo et al. present a skin of printed circuit boards which senses the position of the contact point and three contact force vectors [47]. Collaborative lightweight robots already possess enabling devices which comply with the safeguarding mode hand-guiding. For larger industrial robots Behrens et al. developed a wheel-shaped hand-guiding device with force-torque and tactile sensors [14].

2.3.2 Certified safety technologies

In several research approaches, devices which were originally developed for other industries, such as the Microsoft Kinect camera for gaming, are used to create new safety systems. Even though the camera is commercially available, it is not certified for use as protective equipment because it does not meet the requirement of ISO 10218 of at least PL D. This chapter presents selected certified safety technologies available on the market.

AIRSKIN by Blue Danube Robotics GmbH is a soft, pressure-sensitive safety skin which covers the robot and the gripper and has recently been certified with PL E [48]. In case of a collision, the soft pads dampen the impact force and the sensor pads monitor internal pressure changes, i.e. a deformed pad indicates a collision and triggers an emergency stop. Because the pads can be 3D printed, the skin can be installed on every tool. Currently, the robot skin is available e.g. for the UR10 from Universal Robots and the KR10 from KUKA. The construction of the pads results in free airflow and prevents overheating of the robot. The AIRSKIN safety flange is an add-on between the robot and the tool or end effector which detects collisions with the tool [48].

Mayser GmbH & Co. KG offers the ultrasonic sensor USi, which detects objects and measures their distance to the sensor [49]. The sensor surface of the transducer transmits ultrasonic waves and receives the resulting echoes. Up to two sensors can be connected to each evaluation unit. Every sensor has one protective field and one warning field, which can have a maximum size of 200 cm and 250 cm, respectively. The safety option of the USi transfers the signal through an output signal switching device (OSSD) pair and therefore complies with PL D [49]. Due to its small size, the sensor can be mounted on the tool and creates a 3D safe area around it. The main benefit of this sensor in comparison to a laser scanner is that the safety area can move with the tool.

The SafetyEYE from PILZ GmbH is a camera system for 3D zone monitoring which meets PL D [50]. The sensor unit has three cameras to detect objects in the configured warning and detection zones. The control device consists of an analysis unit for receiving and processing the image data from the sensor unit and a programmable safety system which is connected to the robot controller. The camera must be mounted high enough above the robot to ensure a wide field of vision, e.g. at a height of 4 m for a field of vision of 5.1 m x 3.8 m [50]. The main benefit is that the protective safety distance can be dynamically adjusted based on the robot speed and its position. The disadvantages of the SafetyEYE are its high sensitivity to changing light conditions, air pollution and reflections [51].

The laser scanners from SICK AG are two-dimensional (2D) area monitoring devices creating horizontal or vertical planes. They emit laser beams and calculate the distance to the detected object based on the time interval between signal emission and the reflected light pulse from the object [52]. More details are explained in chapter 4.4.2.


2.3.3 Safety system design in research

The following chapter presents selected safety system designs in research papers where certified safety technologies are applied.

A safety system with flexible safety zones which are monitored by a SafetyEYE is designed by Augustsson et al. [51]. A safety programmable logic controller (PLC) controls the robot behavior. Their use case employs a large industrial robot performing nailing operations on wall elements. The safety system has two sections, each with a warning and a hazard zone. Outside of these zones, the robot operates at full speed. As soon as the operator triggers the warning zone, the robot backs away with reduced speed towards another wall element in the second section to increase the distance to the operator and activates the warning and hazard zone of that section. If the operator triggers the second warning zone, the robot moves to its home position and waits until the operator has left the second warning zone. After several tests, Augustsson et al. conclude that another safety device should be added in case of a failure of the SafetyEYE [51]. The paper mentions that the safety zones are dimensioned in compliance with ISO 10218 and dummy tests. However, the distance calculations are not outlined, and an emergency stop for the situation where the operator moves faster than the robot retracts is not implemented. It has to be noted that the safety system is not dynamic because only two monitoring cases are defined. Nevertheless, robot stops are significantly delayed by the combination of speed reduction and retraction from the operator.

Michalos et al. implemented the SafetyEYE in two industrial pilot cases with different safety requirements. The first one is an automotive dashboard preassembly [19]: A dual-arm robot loads the traverse of the dashboard and installs the body computer. The robot and the human carry out the cable installation together, where the robot grasps the cables for the operator to install them. In robot mode, the SafetyEYE monitors a detection zone, whose violation results in an emergency stop, and a warning zone, in which the operator is informed about a coming robot stop. In collaborative mode, the warning zone is extended into the detection zone by a small path to allow the operator to reach the cables. In the warning zone, the robot slows down to 250 mm/s. For the second case, Michalos et al. chose a rear wheel assembly with a large industrial robot. The robot lifts the wheels and the operator guides the robot to align the wheel correctly against the holes. In manual mode, the robot controller restricts and monitors the robot's position within a confined virtual volume, and in automatic mode a SafetyEYE monitors the area and stops or slows down the robot.

An industrial security system for HRC coexistence with laser scanners is developed by Long et al. [32]. Their system consists of three laser scanners (TiM551 from SICK) which create a spatial envelope around a UR10 robot. Two laser scanners are installed on the robot's first axis, creating two vertical inclined planes and a dynamic zone. The third laser scanner is positioned slightly above the ground, resulting in a static horizontal monitoring plane, to detect an operator's legs. The robot can operate in three security modes: nominal (full speed), coexistence (reduced speed according to ISO/TS 15066) and gravity compensation (the robot compensates its own weight). To avoid acceleration discontinuities between speed transitions, Long et al. use a novel time scaling method. They compare their system (A) with a system consisting of one laser scanner with a horizontal plane (B). To guarantee the same security level, system B requires a larger zone than system A because the horizontal laser scanner does not detect outstretched limbs. Consequently, system B slows down the robot earlier, prolonging the total task operation time. Experiments with three different test cases showed that system A operates more than twice as long at full speed. The vertical sensors mainly activated the gravity compensation mode, whereas the horizontal laser scanner triggered the reduced mode [32]. In this paper, the separation distance is a fixed value and is not calculated according to ISO/TS 15066. The main advantages of this system are the integration of vertical laser scanners to reduce the safety zones and the installation of the laser scanners on a robot axis to create a dynamic zone moving with the robot. As a result, the robot mostly operates at full speed, which allows for high operational efficiency.

Gopinath et al. analyzed risk reduction measures for two laboratory HRC demonstrators and discuss the influence of human factors on the safety of collaborative operations [53]. They identify 'loss of situational awareness', i.e. a reduced perception of one's surroundings leading to unintentional mistakes, and 'mode error', i.e. an operator activating the wrong mode of operation, as mechanisms leading to hazards which can result in serious accidents or production delays. The implemented risk reduction measures are fences, pressure-sensitive mats, light curtains, laser scanners, several buttons, floor markings, and warning lamps. The aim of the measures is to help the operator understand the system state, i.e. automatic mode and collaborative mode. For example, a mode-change button is used to confirm task completion and to change to another safety mode [53].


3 Proposed safety system design

This chapter suggests four theoretical safety system designs which are developed based on the first risk assessment and the technical knowledge about the devices.

Even though inherently safe lightweight robots are installed in many HRC cases, additional safety measures have to be taken to reduce all risks [36]. As already mentioned in the problem description, the use of a collaborative robot with power- and force-limiting functions is not safe enough because the occurrence of a head collision cannot be fully excluded. Therefore, a protective device is required to stop the robot before a head collision can occur. In this thesis work, a laser scanner is installed because laser scanners are robust and reliable devices for the shop floor and the device had already been bought by Scania.

Based on the first risk assessment of the demonstrator case and the technical knowledge of the devices four safety system designs for the SSM method with a laser scanner are proposed, see Figure 7. The case scenario is described in detail in chapter 4.1. The laser scanner system monitors the safety zones surrounding the robot, and issues commands to the robot controller to slow down or stop the robot when humans infringe the safety zone(s). The protective separation distance, which creates the stop zone, is calculated based on the SSM principle as stated in ISO/TS 15066. If the distance between the robot and a detected operator is less than the value of the protective separation distance, the safety system initiates a safety-monitored stop. The robot resumes moving once the separation distance is greater than the protective separation distance.

Figure 6 visualizes the parameters of the protective separation distance. Because the laser scanner is mounted at lower leg height, an intrusion distance of 850 mm is needed to ensure that the robot stops before the outstretched arm of the operator reaches the moving robot. In designs 1, 2 and 4 the intrusion distance is the length of an outstretched arm, whereas in design 3 it is reduced to 300 mm by combining the SSM method with the PFL principle.

Figure 6: Graphical representation of the protective separation distance


In designs 2 and 3 the protective separation distance is calculated for a full speed value (resulting in the reduced speed zone) and a reduced speed value (resulting in the stop zone). When the human enters the reduced speed zone the robot slows down, and the new protective separation distance based on the reduced speed value applies. In general, the stop zone and the speed reduction zone become smaller when the reduced speed value and the full speed value, respectively, are lowered. The values for full speed and reduced speed remain constant within the proposed designs. To achieve a dynamic protective separation distance based on the actual speed, the distance would have to be recalculated and adjusted continuously. This behavior is not possible with a laser scanner. To increase the situational awareness of the operator, LED lights on the robot flange are implemented for all proposed safety system designs to communicate the safety state of the robot.

The following four safety system designs are proposed:

- Safety system design 1 – ‘Stop’: The robot stops as soon as the human enters the stop zone. The motion resumes automatically with the programmed full speed after the human has left the stop zone. The full intrusion distance is considered.
- Safety system design 2 – ‘Speed reduction with stop zone’: The robot reduces its speed as soon as the human enters the speed reduction zone and stops when the human infringes the stop zone. After a stop, the robot first continues with reduced speed and then accelerates to full speed once the human has left the reduced speed zone. As in system 1, the full intrusion distance is included, but the protective separation distance is calculated based on the reduced speed value. The reduced speed value can be freely chosen; depending on this value the size of the stop zone changes.
- Safety system design 3 – ‘Speed reduction with smaller stop zone’: The collision detection function of the power- and force-limited robot is used to minimize the stop zone by reducing the intrusion distance. Assuming that collisions up to a defined body part are allowed, the reduced speed value is determined from the biomechanical limits of the expected worst-case collision for these body parts. The length of the body parts for which collision is allowed is deducted from the intrusion distance. The idea is to allow collisions from the fingers up to the upper arm in order to lower the intrusion distance from 850 mm to 300 mm for a bent head (a minimal sketch of the resulting zone logic follows this list).
- Safety system design 4 – ‘Speed reduction without stop zone’: The stop zone is removed, and the robot only stops when a collision is detected. When a human enters the speed reduction zone, the robot slows down to the biomechanical speed limit for chest collision in transient contact. Chest collision is chosen because it results in the lowest speed value. The full intrusion distance has to be considered because the robot moves at full speed before. The concept can be extended with a second speed reduction zone, e.g. for upper arm collision.
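The following minimal sketch summarizes the zone-based reaction logic of the four designs; the speed values are examples taken from later chapters (343 mm/s for upper-arm contact, 109 mm/s for chest contact in transient contact), and the function itself is an illustration, not the implemented robot program.

```python
# Zone-based reaction logic of the four proposed designs (illustrative only).
FULL_SPEED = "full speed"
REDUCED_SPEED = {2: "freely chosen reduced speed",
                 3: "343 mm/s (upper-arm limit)",
                 4: "109 mm/s (chest limit, transient contact)"}

def robot_reaction(design, in_speed_reduction_zone, in_stop_zone, collision_detected):
    """Return the robot behaviour for a given design and zone state."""
    if design == 1:
        return "stop" if in_stop_zone else FULL_SPEED
    if design in (2, 3):            # design 3 only shrinks the stop zone
        if in_stop_zone:
            return "stop"
        return REDUCED_SPEED[design] if in_speed_reduction_zone else FULL_SPEED
    if design == 4:                 # no stop zone: stop only after contact
        if collision_detected:
            return "stop"
        return REDUCED_SPEED[4] if in_speed_reduction_zone else FULL_SPEED
    raise ValueError("unknown design")

print(robot_reaction(3, in_speed_reduction_zone=True, in_stop_zone=False,
                     collision_detected=False))
```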


Figure 7: Proposed safety system designs

The operational efficiency as well as the implementation complexity increase throughout the proposed solutions. With a speed reduction zone, also referred to as a warning field, operational efficiency can be improved because safety stops are avoided. The speed reduction allows the robot to move for a longer time because the stop can be initiated at a later point in time. The principle of system 3 is chosen for implementation because it is more efficient than system 1 or 2 but safer than system 4. In contrast to system 4, head collision is avoided in system 3 because the robot stops before a possible collision with a (bent) head occurs. The final implemented safety system design depends on the detailed risk assessment and on the implementation feasibility as described in chapter 5.

In addition to operational efficiency, factory layout constraints should be considered when deciding on safety solutions. Sensor-based safety devices such as laser scanners require larger safety areas to comply with the SSM method than, for example, fences do. It is advisable to use fences in areas to which the operator does not need frequent access. Because the protective separation distance depends on the robot speed, a higher robot speed requires a larger protective separation distance. Additionally, more powerful robots have a longer machine stopping time and thus increase the distance further, see chapter 5.5 for the distance calculation according to ISO 13855.


4 Methodology

This chapter describes the methodology of this thesis work which leads to the final safety system design.

4.1 Case scenario description

For the proof of concept, the existing pedal car seat assembly process in the smart factory at Scania is adapted with the objective to create a high level of collaboration. In the original case, the operator assembled the seat after the robot had inserted the base. The adapted case is as follows: The goal of the process is to assemble the seat onto the pedal car, see Figure 8. An automated guided vehicle (AGV) delivers the box with the base into the rack. The HRC process starts with the robot picking the part out of the box. The robot locates the part with the collision detection capability of the force-sensitive robot flange. After the robot has moved to the collaboration point, the human hand-guides the robot to an ergonomic assembly position by pressing the enabling button on the robot flange. In the next cycle, the robot will automatically move to the new collaboration point which was set by the current operator. The operator assembles the washer on the base shaft and confirms the successful task completion by pressing the user button on the robot flange. The confirmation with the user button was introduced in the first risk assessment to activate the base rotation in a safe way. While the robot rotates the base to the next assembly position (150 degrees clockwise), the operator picks the seat. The operator slides the seat onto the base and confirms the task with the user button. This activates the robot motion towards the pedal car. The robot calculates the position of the hole for the seat by touching the axle three times using the force detection in the robot flange. After the robot has released the seat into the hole, the operator fixes the seat on the base with a screw and the process is finished.

Figure 8: Pedal car assembly in the smart factory


The use case is simulated with Industrial Path Solutions (IPS). Figure 9 shows the simulation in IPS with the main motion points and the respective workspace. The collaborative workspace is the area where the human and the robot work together. The robot zone is the danger area and requires additional safety measures.

Figure 9: Generic station layout of the demonstrator case


The process sequence is divided into robot, human and collaborative tasks. Figure 10 shows the task allocation and simultaneous tasks such as the base rotation and the seat collection. In the second iteration of the risk assessment, the process sequence for the seat screwing was adapted to exclude the clamping risk over the pedal car, see chapter 5.4.

Figure 10: Task allocation of the human and the robot

4.2 Risk assessment process

The risk assessment for this demonstrator case is done based on the guideline for a general risk assessment at Scania [54]. Risk reduction measures by inherently safe design are chosen over the installation of safeguarding or protective devices. The risk assessment is iterated until all risks are brought down to an acceptable level. The next iteration starts from the proposed actions of the previous risk assessment. After every iteration, the risk assessment layout is adapted. Table 2 shows an example of assessed risks. The crushing risk for a body part around the box is eliminated by a fence which prevents the human from approaching the moving robot. The risk evaluation classifies a high risk which requires immediate action. The risk level depends on the factors: 1) severity of the incident, 2) probability and 3) possibility of avoiding injury.

Number | Task | Risk | What can happen? | Risk evaluation (low/medium/high) | Proposed action | Implementation decision
3 | Base collection | crushing | human approaches moving robot | high | fenced area restricting access to the rack | yes, fenced area OR laser scanner
4 | Move to collaboration point | collision to body parts | human approaches moving robot | high | stops robot motion when human is in stop area | laser scanner
5 | Washer collection | no risks | – | – | – | –

Table 2: Example of risk assessment 1

4.3 Overall system structure

The complete system consists of four subsystems: the robot, the scanner, the gripper, and the robot flange. The system structure is displayed in Figure 11. The robot controller monitors predefined parameters such as inputs, forces, and workspaces, which are configured in the safety configuration. If a violation occurs, the respective reaction is executed. The robot application, the actual program, is transferred to the robot controller and the robot is operated with a smartPAD. The controller has several interfaces to integrate external devices. The electric media flange touch is connected through the interface X66. The flange holds the gripper, enables visual indications on the LED strip, activates the hand-guiding function with the enabling switch and has a freely configurable user button. The gripper is connected to the robot controller through an I/O module from Beckhoff. The I/O module translates the EtherCAT signal from the robot controller to a digital output signal for the gripper, i.e. open and close. The local I/Os of the laser scanner are integrated into the robot controller with the discrete safety interface X11. A safety relay creates a redundant control architecture and is necessary to ensure the fail-safe signal transfer from the laser scanner to the robot controller. The selected safety relay is the SICK UE10-2FG2D0 and complies with EN ISO 13849-1. The communication of a warning field violation is implemented via the I/O module instead of X11 because of the limited functionality of the scanner, see chapters 4.4.2 and 5.2. To receive a signal, the scanner and the I/O module have to be connected to the same power supply.


Figure 11: System structure of the HRC cell

4.4 Specification of hardware and software

4.4.1 Industrial Path Solutions

IPS is “a math-based software tool for automatic verification of assembly feasibility, design of flexible components, motion planning and optimization of multi-robot stations, and simulation of key surface treatment processes” [55]. The software is developed by the research institutes Fraunhofer-Chalmers Centre and Fraunhofer ITWM, and distributed by IPS AB and fleXstructures GmbH [55]. In this thesis work, 3 out of 13 modules are used. With the module ‘rigid body path planner’, collision-free motion paths for assembly operations are planned efficiently. With the module ‘intelligently moving manikins’, biomechanical motions performed by the human are controlled and ergonomic aspects are evaluated. Robot programs and motion paths are automatically generated and optimized with the module ‘robot optimization’ [55].

The simulation of the use case in IPS supports the risk assessment process and the discussions with the safety engineers. The Java program with the simulated robot movements is imported into Sunrise Workbench, representing the first robot application. To create smooth movements, the points are adjusted and the best motion type, e.g. point-to-point, is chosen. Finally, a velocity value is assigned to every motion.


4.4.2 KUKA lbr iiwa

The KUKA lbr iiwa is a collaborative lightweight robot with the following specifications: 14 kg payload, 7 degrees of freedom, a payload-to-weight ratio of 0.47 and a maximum reach of 820 mm [56]. Every axis has integrated torque sensors which can be used for collision detection, gesture control and sensitive assembly operations [56]. With gravity compensation, the robot compensates the load so that it can be moved with minimum force during hand-guiding [56]. The robot is configured with the software Sunrise Workbench and Work Visual. Sunrise Workbench is the tool for programming robot applications and configuring safety settings [22]. In Work Visual, field buses are configured and connected, e.g. the inputs and outputs from the media flange are mapped to the respective I/O group in Sunrise Workbench [22].

The KUKA lbr iiwa possesses several safety-oriented functions which are defined in the safety configuration in Sunrise Workbench. The safety functions, called atomic monitoring functions (AMF), can either be always active as a permanent safety monitoring (PSM) mechanism or specifically activated as an event-driven safety monitoring (ESM) mechanism [22]. In contrast to an ESM, where only one AMF can be defined to trigger the specified reaction upon a violation, in the PSM mechanism up to three AMFs are logically linked in one row and all AMFs have to be violated to trigger the reaction [22]. ESM mechanisms are used to activate safety settings for a specific process or situation. PSM mechanisms are available as a predefined KUKA PSM, e.g. monitoring the emergency stop button on the smartPAD, and as a Customer PSM, where user-specific safety functions can be defined. The relevant AMFs, i.e. safety functions, in this thesis work are: collision detection (monitors the external torque on all axes and compares it to the configured limit), Cartesian protected space monitoring (monitors that the monitored structure, e.g. the robot and the tool, is outside of the configured protected space), Cartesian workspace monitoring (the monitored structure has to be within the configured workspace), Cartesian velocity monitoring (monitors the translational velocity of the center points of joints and tool) and input signal (a low safety input signal activates the defined reaction). The available reactions are: stop 0, stop 1 and stop 1 path-maintaining. Additionally, for a Customer PSM, the reactions brake and output are available. With the reaction brake, which is part of the Enhanced Velocity Controller package, the robot automatically slows down to a predefined speed when the full row is violated and accelerates when the row is no longer violated. The idea in this thesis work was to use this functionality to reduce the speed. For this, the Customer PSM row should contain a safe input from the scanner signaling a warning field infringement, a workspace monitoring function to control the activation of the speed reduction based on the position of the robot, and a velocity monitoring function which represents the reduced speed value.
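The following sketch is a simplified data model of this monitoring logic, using hypothetical class and function names (it is not the KUKA Sunrise API): an ESM state reacts to its single AMF, while a Customer PSM row only reacts when all of its AND-linked AMFs are violated at the same time.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AMF:
    """Atomic monitoring function with its current violation state."""
    name: str
    violated: bool = False

def esm_triggers(amf: AMF) -> bool:
    """ESM: the configured reaction fires as soon as its single AMF is violated."""
    return amf.violated

def psm_row_triggers(amfs: List[AMF]) -> bool:
    """Customer PSM row: up to three AMFs are AND-linked; all must be violated."""
    assert 1 <= len(amfs) <= 3
    return all(a.violated for a in amfs)

# Intended speed-reduction row: scanner warning field infringed, robot inside
# the monitored workspace, but Cartesian velocity already below the limit.
row = [AMF("input signal (warning field)", violated=True),
       AMF("Cartesian workspace monitoring", violated=True),
       AMF("Cartesian velocity monitoring", violated=False)]

print(psm_row_triggers(row))                    # False: no brake reaction yet
print(esm_triggers(AMF("input signal", True)))  # True: the ESM reaction fires
```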

The safety devices for safe inputs and outputs have to be connected through the safety interfaces, either via Ethernet for field buses (PROFINET and EtherCAT) or via the connector X11 (connected to the cabinet interface board CIB_SR). Depending on the installed software packages, some of the functions described above are not available. This case study is realized with the packages Enhanced Velocity Controller, Safe Operation and Human-Robot Collaboration.


4.4.3 Laser scanner

As an electro-sensitive protective device, the laser scanner MicroScan3 Core I/O creates two-dimensional protective fields by scanning the environment with infrared laser beams, either as horizontal or vertical monitoring planes. When an object is detected in a protective field, the safety outputs, the OSSD pairs, change their state. Using the principle of time-of-flight measurement, the laser scanner calculates the distance to the object based on the time interval between signal emission and the reflected light pulse from the object [52]. The MicroScan3 Core I/O has one OSSD pair and three universal I/Os. OSSDs only exist in pairs to achieve redundancy. The universal I/Os can be configured as an external device monitoring (EDM) function to monitor the functionality of the safety relay, as a signal for contamination of the optics cover, as a monitoring result to communicate the status of the warning field, or as a static control input to switch between monitoring cases, i.e. protective field sets. The static control input occupies two universal inputs and receives two signals from the robot controller about e.g. a successful speed reduction. These two inputs are switched inversely to change the status of the logical input condition which determines which monitoring case has to be activated.
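The inverse switching of the two channels can be illustrated with the following sketch; the mapping of valid channel combinations to monitoring cases is an assumption for illustration, not the configured behavior of the MicroScan3.

```python
def select_monitoring_case(channel_a: bool, channel_b: bool) -> str:
    """Evaluate the two inversely switched channels of the static control input."""
    if channel_a == channel_b:
        # Equal channel states indicate a fault; the scanner would fall back
        # to a safe state instead of switching protective field sets.
        return "invalid - safe state"
    return "monitoring case 1" if channel_a else "monitoring case 2"

print(select_monitoring_case(True, False))   # monitoring case 1
print(select_monitoring_case(False, True))   # monitoring case 2
print(select_monitoring_case(True, True))    # invalid - safe state
```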

The laser scanner has a scanning angle of 275°, a protective field range of up to 9 m and eight configurable fields in total, of which two can be monitored simultaneously [57]. The safe high definition distance measurement (HDDM) technology filters out disturbances such as ambient light, dust, and dirt and therefore makes this scanner a reliable solution in a production environment [57]. The software Safety Designer is used to configure the scanner and to draw the different protective areas. The device meets Performance Level D and category 3 in accordance with EN ISO 13849 and safety integrity level 2 in accordance with IEC 62061 [58].


5 Implementation and analysis of the safety system

This chapter presents how the final safety system design is created based on the risk assessment and the feasibility study of the theoretically proposed safety system designs. Further, the implementation details of the safety system for the demonstrator case are explained and the complete system is analyzed.

5.1 Classification of use case

The classification of HRC is influenced by ISO/TS 15066, industry and research institutes. Even though the aim of a standard is to standardize different definitions, multiple classifications of HRC exist, see chapter 2.1.1. It would be beneficial to have several use case examples for every level of interaction to ease the identification of an HRC scenario. The correct and standardized identification of an HRC case is important to propose the right safety strategies and technologies. Nevertheless, the individual risk assessment, considering also the environment and the part, leads to the final decision on the safety system design.

Based on the knowledge gained in chapter 2.1.1, this use case is assigned to a level of interaction. The presented frameworks partly contradict each other and highly depend on the definitions and the reader's point of view regarding certain characteristics such as collaborative workspace, physical contact, and a common goal. The case study in this thesis work is classified as HRC collaboration because all three characteristics from Behrens et al. are fulfilled: the human and the robot share the same workspace, they work simultaneously in the same workspace (the robot holds the base while the human assembles the washer) and physical contact occurs (the robot and the human work directly together during hand-guiding). Nevertheless, it is possible to argue that hand-guiding does not fulfill the requirement of physical contact because the human approaches a standing robot and controls the movement of the robot manually. It also depends on how the size of the shared workspace is defined, because for the requirement of physical co-work the human has to be present in the shared workspace while the robot is moving. The author concludes that every HRC case has elements of several interaction levels.

5.2 Implementation differences between theory and practice

The available technologies were outlined in chapter 4. This chapter explains the feasibility of the four proposed theoretical safety system designs and which workaround solutions are necessary to realize their implementation anyway. While studying the manuals of the laser scanner and the robot, technical constraints were identified which lead to the differences between the theoretical and the practical design. The behavior of the system is developed with the technical knowledge of the devices. The risk assessment and the system behavior influence each other: the risk assessment identifies the risks which should be mitigated by the safety system, and the proposed actions depend on the technical knowledge of the devices and the implementation feasibility. One example is the risk of a too early motion resume, which is reduced with a delay time.

The following implementation differences between theory and practice are identified:
- Speed reduction: The laser scanner MicroScan3 Core I/O has only one OSSD pair, representing one safe output, which is occupied by the safety stop. As a result, the speed reduction has to be achieved by a workaround solution instead of the brake reaction of the Enhanced Velocity Controller package. In the scanner configuration, the warning zone is connected to a digital output as a monitoring result and the signal is transferred to the robot controller through the I/O module EK1100. In the robot application, a condition is programmed which monitors the status of the signal and activates the speed reduction accordingly. In contrast to the brake reaction, where the speed can be reduced to a specified value, adjusting the speed in the robot program is only possible with relative values, e.g. 30% of the programmed speed. As a result, the speed is lowered more than necessary whenever the chosen relative value applied to the full speed of a certain motion yields less than the desired reduced speed (a small numerical sketch follows this list).
- Automatic motion resume: When a safe stop is triggered as a reaction in the KUKA safety configuration, the motion execution is paused. An automatic motion resume when the human has left the stop zone is not possible. KUKA has programmed the robot to only continue moving after the start button on the smartPAD is pressed. This ensures that the human has to leave the danger area to reach the smartPAD, which is located outside of the safety zone(s).
- Hand-guiding: To allow for hand-guiding of the robot, the laser scanner, i.e. the stop zone, must be deactivated. Otherwise, the human violates the stop zone when approaching the robot and activates a safety stop which in turn blocks the hand-guiding. As a result, ESM mechanisms instead of permanently active Customer PSM mechanisms are used. ESM mechanisms allow changing between ESM states, i.e. with and without the AMF ‘safe input’ of the laser scanner.
- Adjustment of safety zones: It is not possible to change to another safety system design during the process. The laser scanner is configured once with the respective safety zones. It would be possible to create the safety zones for each monitoring case and then use the static control input of the laser scanner to switch between monitoring cases. This requires changing the status of the two channels of the static control input inversely. This behavior is only realizable in a safe way with a safety PLC.
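The consequence of the relative speed values in the first bullet can be illustrated with a small numerical sketch; the programmed full speeds below are example values, not the speeds of the demonstrator program.

```python
REDUCED_SPEED_LIMIT = 343.0                 # mm/s, upper-arm limit from chapter 5.4
programmed_speeds = [2000.0, 800.0, 400.0]  # example full speeds of individual motions

# One global relative factor must keep the fastest motion at the limit ...
relative_factor = REDUCED_SPEED_LIMIT / max(programmed_speeds)

# ... so slower motions end up well below the speed the limit would allow.
for v_full in programmed_speeds:
    print(f"{v_full:6.0f} mm/s -> {v_full * relative_factor:6.1f} mm/s")
```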


5.3 Feasible system behavior of the proposed safety system designs

Based on the implementation feasibility a system behavior is developed for every proposed safety system design. Figure 12 shows the reaction of the KUKA lbr based on the behavior of the human for the four proposed safety system designs. The additional system design for hand-guiding is necessary to realize the use case scenario. The proposed safety designs 2 and 3 have the same system behavior.

Figure 12: System behavior of the proposed safety system designs


Figure 13 displays the feasible system behavior in combination with the demonstrator case for all proposed safety system designs. Every design has a scanner configuration with the respective safety zones, e.g. scanner configuration 1 has a stop zone. When a laser scanner configuration is chosen in the second decision block, this configuration cannot be changed during the process. The red line highlights the implemented safety system 3 which is the result of the risk assessment and is tailored to the needs of Scania, see chapter 5.4. The orange notes explain the consequences of technical constraints. The calculation of the speed limits is outlined in 5.5.

Figure 13: Safety system behavior in combination with the demonstrator case


5.4 Risk assessment

The detailed risk assessment leads to the final safety layout and design, see Figure 14, in three iterative steps. The main results are described below in the order of the process sequence, whereas the detailed risk assessments are attached in the appendix. All calculations for the safety zones and speeds are presented in chapter 5.5.

To prevent the human from entering the dangerous robot zone while the robot is moving, fences are installed around the pedal car and from the right side of the table to the rack. The laser scanner is activated at P2, and not when the robot leaves the robot zone at P21, to ensure that the robot stops in time even when a human is already standing inside the stop zone, e.g. in front of the fence close to P21. It has to be noted that this fence has a lower height over the length d to allow the robot to move out of the robot zone. Collisions up to the upper arm, which includes fingers, hands, wrist joints, lower arms and elbows, are allowed in order to achieve a smaller stop zone. To eliminate the risk of head collision a C value of 300 mm is required, because the operator could bend down his head before the laser scanner detects his legs. Accepting collisions up to the upper arm involves reducing the speed to 343 mm/s to comply with the biomechanical limits and not harm the operator in case of a collision.

There is a risk of crushing a hand between the table and the robot if the operator hand-guides the robot with only one hand and places the other hand on the table. We decided not to mitigate this risk because 1) collision detection is active, 2) the part yields in the gripper during a collision and 3) the human has to actively push the robot towards the table. Instead of the enabling button, the user button on the media flange is used for task confirmation because this lowers the risk of false activation during hand-guiding. Additionally, the user button is the input in the program that starts the rotation of the flange.

In the first iteration, the laser scanner was deactivated from P3 until the start of the movement towards P4. The risk of collision during the rotation would have been mitigated by 1) knowingly activating the rotation with the user button, 2) delaying the rotation by 4 seconds, 3) limiting the speed of the rotation to 109 mm/s to comply with the biomechanical limits for chest collision during transient contact and 4) an inherently safe design, because the next process step is the seat collection outside of the safety zones. During a discussion with the safety engineers at Scania, it was decided to activate the laser scanner before the rotation starts, i.e. 4 seconds after the confirmation button was pressed. If a human is still in the stop zone, a safety stop is activated. The delay time prevents frequent activation of stops. The reasons for not starting the base rotation without active safety zones are 1) the safety concept should be consistent throughout the station to not confuse the operator and 2) the risk of head collision should be eliminated, e.g. when the operator bends down to pick something up. To ensure that the base rotation is finished before the human approaches the robot with the seat, the laser scanner is deactivated after the rotation is completed.

A crushing risk for a body part exists when the robot transports the seat to the pedal car and the human has to screw the seat to the base in the next process step. To completely exclude the collision risk for any body part, the process sequence is changed and the area around the pedal car becomes inaccessible for the human while the robot is moving there. Therefore, the seat screwing is moved after the seat assembly, representing an inherently safe design measure. As with the base rotation, the laser scanner is activated as soon as the delay time has elapsed.
Additionally, the LEDs on the media flange are used to make the operator aware of the current state of the robot system, e.g. when a safety stop was initiated, see chapter 5.6 for details.


Figure 14: Final safety design layout of the HRC cell

An external emergency stop is not required because the smartPAD already has an emergency stop and is located outside of the safety zone(s) at a fixed location. In this thesis work, the protective separation distance is not dynamically adjusted based on the current speed and the position of the tool center point (TCP) of the part. The stop zone and the speed reduction zone have their origin at the outermost point of the hazard, which is the base shaft at P3. The origin of the stop zone for the clamping risk around the pedal car is at P4 because this point represents the start of the risk. The stop zone around the pedal car is larger than the stop zone around P3 because the C value is not reduced from 850 mm to 300 mm. If clamping up to the upper arm were accepted, the robot speed would have to be reduced to less than 20 mm/s to comply with the biomechanical limits for fingers in quasi-static contact. The safety zones are half circles instead of rectangles as in the proposed safety system designs. The worst-case risk exists at P3 and the protective separation distance is measured from that point. A rectangular stop zone would cover a too large area.

5.5 Calculation of safety zones

This chapter explains how the safety zones of the proposed safety system design 3 are calculated. In this thesis work, a constant value for Sp is calculated based on the maximum programmed speed in the robot application as:

Sp = Sh + Sr + Ss + C + Zd + Zr = vh · (Tr + Ts) + vr · Tr + Ss + C + Zd + Zr

Equation 3: Constant protective separation distance

The operator’s speed vh is estimated to be 1600 mm/s, see ISO 13855. The robot speed vr is the maximum programmed speed in the application. The variables for measuring and position uncertainty, Zd and Zr, are 5 mm and 0.15 mm respectively.


Tr consists of the response time of the laser scanner, i.e. the time between the field infringement and the OFF state of the OSSD pair, and the response time of the safety relay. The object resolution is set to 50 mm to reliably detect the leg of an operator. Because the scanner is installed stationary, horizontally and in a clean environment, multiple sampling is set to 2. This means that the scanner must detect an object only twice to initiate a field violation. To reduce the response time, the scan cycle time is set to 30 ms instead of 40 ms. This reduces the sensing field ranges, e.g. to 3 m for the protective field, but a wide range is not needed in this case anyway. Because only one scanning device is installed, the interference protection mode is set to 1, adding 0 ms to the total response time. With a scan cycle time of 30 ms, a multiple sampling of 2 and no interference protection against other scanners, the response time of the laser scanner is 70 ms, including the output-dependent processing time of the signal for OSSD pairs of 10 ms. The response time of the safety relay is 10 ms. As a result, Tr is 80 ms.
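A short worked example, using only the values stated above, reproduces this reaction time:

```python
scan_cycle_time_ms = 30   # configured scan cycle time
multiple_sampling  = 2    # object must be detected in two consecutive scans
ossd_processing_ms = 10   # output-dependent processing time for the OSSD pair
interference_ms    = 0    # interference protection mode 1 (single scanner)
safety_relay_ms    = 10   # response time of the safety relay

scanner_response_ms = (scan_cycle_time_ms * multiple_sampling
                       + ossd_processing_ms + interference_ms)
t_r_ms = scanner_response_ms + safety_relay_ms

print(scanner_response_ms, t_r_ms)   # 70 80
```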

The variables for the robot in the SSM formula are calculated based on equations provided by Fraunhofer as part of an internal Scania project. The stopping time Ts and the stopping distance Ss of the robot for a stop 1 of axis 1 to axis 4 are read from graphs in the instructions for the KUKA lbr iiwa media flange option [59], see Appendix B for an example graph. In the data sheet of the KUKA lbr iiwa [56], these values are only specified for the basic flange and not for the media flange touch electric. The stopping values vary based on payload, extension and override, i.e. the velocity of the robot, each in the range 33%, 66% or 100%. KUKA only provides the data for axes 1 to 4 because these have the greatest deflection. In this use case, the extension is 100% and the payload 33%, i.e. the 2 kg gripper and the 2 kg part relative to a maximum payload of 14 kg. The values are evaluated for overrides of 33%, 50%, and 66%. Values below an override of 33% are not specified. The detailed calculations of Ts and Ss are displayed in Appendix B. For Ts the worst-case stopping time value of axes 1 to 4 is taken.

The stopping distance Ss is the summed travel distance of the TCP during a stop for axis 1 to axis 4 and is calculated from the angle of rotation Phi of the corresponding axis and the distance L of that axis from the TCP.

Ss,axis1 = (Phi · π / 180) · L
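As an illustration of this per-axis arc-length calculation, the sketch below sums hypothetical contributions of axes 1 to 4; the angles and lever arms are invented example values, not the figures read from the KUKA graphs in Appendix B.

```python
import math

def axis_stopping_distance(phi_deg: float, lever_arm_mm: float) -> float:
    """TCP arc length for a rotation of phi_deg degrees at radius lever_arm_mm."""
    return phi_deg * math.pi / 180.0 * lever_arm_mm

# Hypothetical per-axis angles of rotation and lever arms during a stop 1.
per_axis = [(6.0, 820.0), (4.0, 600.0), (3.0, 400.0), (2.0, 200.0)]
s_s = sum(axis_stopping_distance(phi, length) for phi, length in per_axis)
print(round(s_s, 1))   # summed TCP travel distance in mm
```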

As the risk assessment prohibits head collision, the PFL method alone is not sufficient to mitigate this risk: without another safety measure, a collision with the head would be what triggers the collision detection and stops the robot. To lower the C value of 850 mm and thereby reduce the safety zones, the author combines the PFL with the SSM principle. The risk assessment accepts collisions with body parts such as the hand. As a tool to estimate the maximum robot speeds in compliance with the biomechanical limits from ISO/TS 15066, i.e. the reduced speed value, the app HRC guide from KUKA [60] is used. Based on the employed robot variant, the moving mass, the contact surface area, and the contact event, i.e. transient or quasi-static contact, KUKA suggests a robot speed for every body region except the head. The minimum contact area for an upper arm collision with the robot base is 0.5 cm2 because the rounded edges on the base represent the smallest contact area. The maximum moving mass, i.e. the gripper and the part, is 4 kg. Based on these input parameters the following speed limits are suggested: 480 mm/s for hands and fingers, 355 mm/s for lower arms and wrist joints and 343 mm/s for upper arms and elbows. The lowest value is the allowed speed in the speed reduction zone.


With the values explained above, five protective separation distances are calculated, see Table 3. Ts and Ss change with the override. Calculation 1 represents the stop zone around P3, which is calculated with the reduced speed value and the smaller intrusion distance. Calculation 2 is the stop zone around P4. Calculation 3 shows that even though the maximum robot speed is doubled in comparison to calculation 2, the protective separation distance only increases by about 200 mm. This demonstrates that the parameters Ss, Tr, Ts and C have a high influence on the protective separation distance. Calculation 3 represents the reduced speed zone around P3, which is calculated with the full speed value and the full intrusion distance. Calculations 4 and 5 show how the protective separation distance increases with a higher speed.

Parameter | Calculation 1 | Calculation 2 | Calculation 3 | Calculation 4 | Calculation 5
Override of robot stopping values | 33% | 33% | 50% | 66% | 100%
vh (human speed) in mm/s | 1600 | 1600 | 1600 | 1600 | 1600
Tr (reaction time of robot system) in s | 0.08 | 0.08 | 0.08 | 0.08 | 0.08
Ts (robot stopping time) in s | 0.3 | 0.3 | 0.35 | 0.4 | 0.6
Sh (operator's change in location) in mm | 608 | 608 | 688 | 768 | 1088
vr (maximum robot speed of application) in mm/s | 343 | 500 | 1000 | 1320 | 2000
Sr (robot travel during reaction time) in mm | 27.44 | 40 | 80 | 105.6 | 160
Ss (robot stopping distance) in mm | 153 | 153 | 239 | 382 | 856
Zd (position uncertainty of the operator) in mm | 5 | 5 | 5 | 5 | 5
Zr (position uncertainty of the robot) in mm | 0.15 | 0.15 | 0.15 | 0.15 | 0.15
C (intrusion distance) in mm | 300 | 850 | 850 | 850 | 850
Sp (protective separation distance) in mm | 1094 | 1656 | 1862 | 2110 | 2959

Table 3: Calculation of the protective separation distance
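As a cross-check of Table 3, the following minimal Java sketch reproduces calculation 1 with the simplified, constant-speed form of the SSM formula used above. The class and method names are illustrative and not part of the robot application.

    // Sketch: protective separation distance Sp = Sh + Sr + Ss + C + Zd + Zr
    // with Sh = vh * (Tr + Ts) and Sr = vr * Tr (distances in mm, times in s).
    public final class ProtectiveSeparationDistance {

        static double sp(double vh, double vr, double tr, double ts,
                         double ss, double c, double zd, double zr) {
            double sh = vh * (tr + ts); // operator's change in location
            double sr = vr * tr;        // robot travel during its reaction time
            return sh + sr + ss + c + zd + zr;
        }

        public static void main(String[] args) {
            // Calculation 1: reduced speed (343 mm/s) and smaller intrusion distance (300 mm)
            double spCalc1 = sp(1600, 343, 0.08, 0.3, 153, 300, 5, 0.15);
            System.out.printf("Sp = %.0f mm%n", spCalc1); // prints Sp = 1094 mm
        }
    }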

The laser scanner is situated on the floor. SICK recommends adjusting the C value to the mounting height of the laser scanner: for devices on the ground, the C value is 1200 mm to account for overstepping of the scanning plane with one foot, and at an installation height of 875 mm the supplement is 850 mm. For laser scanners mounted higher up, it must be ensured that the operator cannot crawl under the scanning plane. This recommendation is not followed in this thesis work because all calculations are already conservative and, from the author's point of view, overstepping is already sufficiently covered by an intrusion distance of 850 mm.


5.6 Safety system configuration of the demonstrator case

This chapter outlines the implementation details of the safety system that was chosen in the risk assessment. The running system is a proof of concept of the developed safety system design. The system behavior of the implemented safety system is displayed in Figure 15.

Figure 15: Safety system behavior of the demonstrator case

The LED strip on the media flange is used as a visual indication for the operator. The colors have the following meaning, see Figure 16:

LED color | Meaning | Use
Green | Application running as planned | When the robot is moving as programmed
Yellow | Warning field infringed | To warn the operator of an active speed reduction and coming robot stop
Red | Stop zone infringed | When safety stop is active and to notify the operator of needed motion resume input on smartPAD
Blue | Waiting for operator input | Confirmation with user button required, hand-guiding

Figure 16: Visual indication on the media flange


5.6.1 Safety configuration of ESM states

A different ESM state is activated depending on the process step. Figure 17 shows the activated ESM state in the process sequence. During the seat positioning, the collision detection must be deactivated because it would otherwise be triggered within the normal process sequence: the robot touches the axle three times with a force condition to calculate the hole position.

Figure 17: Configuration of the ESM states and their activation in the process sequence


The ESM states in the KUKA safety configuration are set to the following parameters, see Figure 18, Figure 19 and Figure 20:

Figure 18: Configuration of ESM state 1

Figure 19: Configuration of ESM state 3

Figure 20: Configuration of ESM state 2

The enabling button on the media flange has three positions: not pressed, slightly pressed and fully pressed. To satisfy the requirements for an enabling device, as defined in ISO/TS 15066, hand-guiding is defined in the KUKA PSM. The fully pressed position represents a panic state and activates a safety stop; in the slightly pressed position the robot can be moved.

The risk assessment decided that the crushing risk of the hand between the table and the robot during hand-guiding does not have to be mitigated. Nevertheless, the implementation details are elaborated below, because the safety-oriented tool had already been developed for the second risk assessment and is valuable for other HRC cases. The gripper with the attached part ‘GripperLoad’ is configured as a safety-oriented tool. It has two safety spheres which are monitored against the AMF ‘cartesian protected space’ around the table. As soon as one of the safety spheres of the tool enters the protected space, a safety stop is initiated. The implementation is as follows: in addition to the TCP frame of the tool (the tip of the gripper), two safety-oriented frames for the seat pad and the seat back are created. For every frame a safety sphere is defined, see the orange spheres in Figure 21. One frame is set for the back of the seat (frame origin: in the middle of the back part edge) with a safety sphere of 40 mm, and one for the seating pad (frame origin: in the hole for the mounting screw) with a safety sphere of 80 mm. The protected space is a cube situated on the table with a height of 10 mm. The AMF is configured in the Customer PSM. It has to be noted that after a violation the robot has to be moved out of the protected space in KRF mode (controlled robot driving).

Figure 21: Limiting of the hand-guiding motion with safety spheres and protected space monitoring

5.6.2 Circuit diagram

To connect the laser scanner to the safety relay and the X11 adapter on the robot controller, the electrical circuit in Figure 22 is developed. When the protective field is infringed, the OSSD pair switches to the OFF state and voltage is no longer applied at B1/A2 and B2/A2. Consequently, the current path 13/14 for OSSD 1.A opens and, respectively, the current path 23/24 for OSSD 1.B [52]. As a result, safe input 4 becomes low, the AMF ‘input signal’ is violated, and a safety stop is activated. To ensure the functionality of the safety relay, e.g. that the contacts are not welded, input 3 is configured as an EDM. As soon as the current paths 13/14 and 23/24 open, the feedback path Y1/Y2 closes [61] and input 3 becomes high. For every OSSD change the laser scanner checks whether input 3 changed accordingly. Otherwise, the safety relay has a malfunction and the laser scanner switches into the locking state.


Figure 22: Circuit diagram for the laser scanner connection

The current paths 13/14 and 23/24 in the safety relay together with the OSSD pair represent the dual connection for the safety-relevant stop function. Instead of connecting output 2 as a monitoring result to the Beckhoff I/O module, the speed reduction should be achieved with a dual connection via X11 to achieve PL D. Figure 23 visualizes the cabling of the safety relay, the power supply, and the I/O module.

Figure 23: Cabling of the devices

5.6.3 Configuration of the laser scanner

The laser scanner is configured with the software Safety Designer. The parameters and the size of the stop zones and warning zones are set as calculated in chapter 5.5.

Monitoring plane | Horizontal
Object resolution | Leg (50 mm)
Multiple sampling | 2
Scan cycle time | 30 ms
Additional interference protection | Mode 1

Figure 24: Configuration of the laser scanner


The option ‘immediate restart without restart interlock’ is selected, which means that the OSSD pair is switched back on immediately when the operator has left the stop zone. This behavior is needed to keep the delay time before the motion start of the base rotation and the seat positioning as short as possible.

5.6.4 Integration in Java

This chapter explains the safety-relevant parts of the robot program. The full code is displayed in Appendix C.

The maximum and reduced speed values of the robot, i.e. of the moving part, must be transferred to the robot application, where the maximum speed is programmed from one point to another. Depending on the motion type, the velocity is set as a relative joint velocity for point-to-point motions or as a cartesian velocity for linear motions. The absolute value of the cartesian velocity is the maximum allowed value for that motion. Whether the value is reached during execution depends on motion planning restrictions such as the speed limit of every axis (measured in degrees per second). To set the velocities as close as possible to the maximum speed, the value is increased until the velocity monitoring function of 1000 mm/s in the safety configuration is triggered. For some motions, the maximum velocity of 1000 mm/s cannot be achieved even though the relative velocity is set to 1, because the combination of joint movements restricts a higher speed.
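As an illustration of the two ways of setting the velocity, a minimal sketch is given below; the frame path ‘/Table/P4’ and the numeric values are only examples and not taken from the application.

    // Relative joint velocity for a PTP motion: a fraction of the maximum axis speeds
    AbstractFrame target = getApplicationData().getFrame("/Table/P4"); // illustrative frame
    miGripper.move(ptp(target).setJointVelocityRel(0.5));              // 50 % of the maximum joint velocity

    // Absolute cartesian velocity for a LIN motion, capped by the 1000 mm/s velocity monitoring
    miGripper.move(lin(target).setCartVelocity(1000));                 // TCP speed in mm/s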

The workaround solution for the speed reduction is realized as follows: in WorkVisual a Boolean variable ‘WarningField’ is created as a Sunrise I/O group and mapped to channel 1 of the fieldbus terminal EL1800, which receives the digital signal from the laser scanner. To check the Boolean digital input ‘WarningField’ of the I/O module in the robot application, an I/O-related condition ‘WarningFieldViolation’ is created. With a listener, the state of the condition is constantly monitored while the application is running. If a condition change occurs, the listener is notified and executes the method of the handling routine, which contains the programmed speed reduction. The listener type ‘IAnyEdgeListener’ is chosen so that the listener is notified of every condition change. This allows the speed to be both reduced and reset. To activate the speed reduction only in ESM states 1 and 3, a variable ‘ESMstate’ is created. The listener is only notified when the condition changes. If a human enters the stop zone before the scanner is activated and the variable ‘ESMstate’ is set to 1 or 3, the method for the speed reduction is therefore not triggered. For this reason, an if-statement is integrated after every activation of the scanner to ensure that the robot slows down even if a human already triggered the speed reduction while the scanner was deactivated. The speed reduction is programmed with the method ‘setApplicationOverride’, which slows the robot down to a configured percentage of the programmed speed. The application override is set to 0.33 to achieve approximately the desired reduced speed value of 343 mm/s from the full speed value of 1000 mm/s. Nevertheless, as explained above, the full speed cannot always be set to 1000 mm/s, which means that for slower programmed motions the speed is reduced more than desired.
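A minimal sketch of this guard after an activation of the scanner is shown below; the getter ‘getWarningField()’ is assumed to be generated for the ‘WarningField’ I/O group and is not part of the code in Appendix C.

    robot.setESMState("1"); // activate the ESM state that integrates the scanner signal
    ESMstate = 1;
    // If the warning field is already violated, no edge occurs and the listener does not fire,
    // so the speed reduction is applied explicitly here.
    if (miEK1100.getWarningField()) {
        getApplicationControl().setApplicationOverride(0.33d);
        miMediaFlange.setLEDRed(true);   // yellow = red + green
        miMediaFlange.setLEDGreen(true);
        miMediaFlange.setLEDBlue(false);
    }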

As with the speed reduction, the variable ‘UserButton’ and all LED colors are mapped to the input and outputs of the media flange, respectively. Two I/O-related Boolean conditions ‘UserButtonPressedWasher’ and ‘UserButtonPressedSeat’ are created. With the method ‘waitFor’ the program waits for the respective input, i.e. for the operator to confirm the completion of his task, before proceeding to the next line of the program.
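A minimal sketch of this waiting pattern is shown below; the condition construction mirrors the ‘WarningFieldViolation’ condition in Appendix C, and the exact signal name on the media flange I/O group is an assumption.

    // Condition is true while the user button on the media flange is pressed
    BooleanIOCondition userButtonPressedWasher =
            new BooleanIOCondition(miMediaFlange.getInput("UserButton"), true);

    miMediaFlange.setLEDBlue(true);                        // blue = waiting for operator input
    getObserverManager().waitFor(userButtonPressedWasher); // blocks until the button is pressed
    miMediaFlange.setLEDBlue(false);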


The LED colors on the media flange are activated by setting the value of the variable to true. For the yellow color both variables ‘LEDRed’ and ‘LEDGreen’ must be true.

A LED change to yellow is required when the human enters the speed reduction zone. This is programmed in the handling routine of the speed reduction. To switch the LED from green or yellow to red when a safety stop is triggered, the interface ‘IStatusController’ is used. It is part of the software package Status Controller and can be used to monitor status groups. A listener ‘IStatusListener’ is registered with the method ‘addStatusListener(…)’ and is notified of status changes. One listener is registered for each of the status groups ‘APPLICATION_RUNNING’ and ‘SAFETY_STOP’ of the class ‘DefaultStatusGroups’ to represent a moving and a stopped robot. If the status of the listener ‘stopZoneViolation’ is set, the LED light turns red. If the status of the listener ‘applicationRunning’ is set, the LED light turns green. A variable ‘noStop’ is created to keep the LED red even when the human re-enters the speed reduction zone.

The delay time is realized with the method ‘milliSleep’. The test runs showed that the delay time must be at least 4 seconds to allow the operator to leave the stop zone and to give the laser scanner enough time to change its signal.
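A minimal sketch of this delay is shown below; where exactly it is placed in the program follows the process sequence in Figure 17.

    // Give the operator time to leave the stop zone before the scanner signal is evaluated again
    ThreadUtil.milliSleep(4000);  // test runs showed that at least 4 s are required
    robot.setESMState("1");       // re-activate the ESM state that uses the laser scanner signal
    ESMstate = 1;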

5.7 Analysis of the safety system

The developed safety system design combines the safety strategies crash safety and active safety and has elements of pre-collision and post-collision systems. The following risk reduction measures with their respective safety functions are implemented, see Table 4.

Risk reduction measure | Safety function
Physical fences | Restriction of human access to the robot
Laser scanner | Monitoring of the fenceless workspace, activation of the speed reduction and the robot stop
Enabling switch | Safe three-position button to allow hand-guiding
Confirmation button | Signaling of task completion to activate the robot movement
LED lights | Signaling the safety state of the system
Emergency stop button | Stopping of all robot motion at any time
Play button on smartPAD | Activation of the motion resume after a robot stop

Table 4: Description of the implemented risk reduction measures and their safety function

The different buttons for hand-guiding and for task confirmation reduce the hazards associated with ‘mode error’ because the operator must press a different button to start the next process step. The LED lights on the robot flange communicate the system state to minimize risks associated with ‘loss of situational awareness’, i.e. the operator unintentionally walking into the stop zone because he is not aware of the system state.

Currently, the speed reduction is programmed with an input listener and is not enabled through a PSM mechanism. To achieve PL D, the safe input for the speed reduction must be communicated with a dual connection to the safety interface X11 or X66. A dual connection could be obtained with the OSSD pair of a second MicroScan3 Core I/O and a safety relay, or with a MicroScan3 Pro PROFIsafe and a safety PLC. Then the brake reaction of the Enhanced Velocity package can be used to reduce the speed as a Customer PSM in the safety configuration. With the application override method, the speed is reduced to a relative value of the programmed speed instead of to an absolute value. The brake reaction in the Enhanced Velocity Controller package slows the robot down to an absolute value independent of the programmed speed. This behavior increases the overall speed of the robot in the warning zone because the speed is reduced only to the desired value and not below it.

In theory, the system could be further improved with a safety PLC. This would allow switching between monitoring cases and an automatic restart of the robot motion. In the current system, the robot does not automatically continue when the human has left the stop zone; the operator must press the motion resume button on the smartPAD to clear the stop command. With a safety PLC, an automatic restart can be realized as soon as the human leaves the stop zone, and the overall efficiency of the system improves. Additionally, the stop zone around the pedal car is currently active at all times to eliminate the clamping risk over the pedal car. This is not necessary because the risk only exists when the robot is moving over the pedal car. A monitoring case switching function can activate this stop zone after the process step seat mounting. With a static control input, the laser scanner changes between monitoring case 1 (the stop zone around P3) and monitoring case 2 (the stop zones around P3 and P4).

For installations of HRC coexistence and/or robots with a high payload, larger stop or warning zones are required. The protective field range of the MicroScan3 Core I/O can be extended from 3 m to 4 m by mounting the scanner 300 mm above the ground. As a result, the resolution increases from 50 mm to 70 mm because a leg instead of a thin ankle needs to be detected.

The stop zone is already reduced by reducing the required intrusion distance. Nevertheless, the parameters Ss, Tr, Ts and C have a great influence on the protective separation distance and therefore exact values should be obtained. The intrusion distance C was reduced by moving the robot at PFL-conformant speeds. The stopping time Ts and the stopping distance Ss depend on how accurately the stopping values are read from the graphs provided by KUKA, on which values are set for the parameters payload, extension and override, and on the equation for Ss. Additionally, the provided values can differ due to internal and external influences. More accurate stopping values for the robot can be achieved by measuring them directly under real conditions with the robot program of the application use case. It is suspected that the measured values will be lower than the provided ones. A small protective separation distance is not only desirable due to layout constraints but also to eliminate unnecessary walking, which is considered a non-value-adding activity. A larger stop zone increases the walking time of the operator because the material supply must be stationed further away. Instead of reducing the intrusion distance, it can be eliminated with a vertical monitoring plane: in addition to the horizontal monitoring plane, a second laser scanner can be installed vertically. Nevertheless, this approach does not allow for a circular horizontal safety zone and requires a suitable layout.

The outcome of a risk assessment is very subjective and influenced by the safety perspective of the company. In this risk assessment, the risk of head collision is ranked as high. Nevertheless, it is possible to argue that the robot only moves at the height of the lower chest and that head collision is therefore less likely. Whether this argumentation is accepted depends on how the company rates foreseeable misuse by the operator, such as bending down to pick up a part. In intended use, the risk of head collision might be low, but the risk level for foreseeable misuse needs to be discussed by the company. In this demonstrator case, most risks appeared in transient contact because the robot gripper moved towards free space away from the table. The clamping hazards in quasi-static contact around the pedal car are mitigated with a stop zone. Depending on the use case, more hazards of quasi-static contact in the collaborative area can exist. These require lower PFL-compliant speed values.


6 Conclusion

In this chapter, the research questions are answered. The limitations and challenges of the implemented safety system are discussed, and future work is identified.

6.1 Answers to research questions

This thesis work presented a novel safety system design and its implementation in compliance with ISO/TS 15066. Below, the conclusions of the findings are presented by answering the four research questions.

Research question 1: What are the safety requirements for HRC defined by the standards?
The first research question is answered in chapter 2.2.2 and chapter 2.2.3. The relevant standards for HRC are ISO 12100 for the risk reduction process and the section about collaborative operations in ISO 10218. Most important, however, is the technical specification ISO/TS 15066 because it defines the safety requirements for HRC in more detail. The main findings are how a safety system can be designed to comply with the standards and especially with ISO/TS 15066. Compliance is achieved by applying one or more of the four safeguarding methods: safety-rated monitored stop, hand-guiding, SSM and PFL. Which risk reduction measures are selected for the safety design depends on the risks identified in the risk assessment. However, the risk assessment is subjective, and its outcome depends on the company-specific safety perspective. ISO/TS 15066 is more specific than ISO 10218 but does not create a clear enough picture for companies of what is safe enough for an HRC application. The guideline is loosely defined, open to interpretation and does not exemplify use case scenarios where the safeguarding modes are applied. ISO/TS 15066 represents the state of the art of 2016 and will be adapted in the future before a standard for collaborative operations is introduced by ISO.

Research question 2: What safety technologies are available and relevant to improve safety in HRC assembly installations in the best way?
Safety technologies can be divided into pre- and post-collision systems. In pre-collision systems, collisions are prevented by sensing the presence of a human with workspace monitoring systems. In post-collision systems, collisions are allowed but the resulting impact on the human is minimized through e.g. integrated collision detection sensors and lightweight robot structures. Several research approaches have been taken to implement the SSM or PFL mode, for example vision-based systems with a Microsoft Kinect camera to create a dynamic SSM behavior. For the PFL mode, the focus is on robot skins and collaborative lightweight robots. The availability of certified safety technologies for HRC on the market is limited. The following certified safety technologies have been studied: the soft and pressure-sensitive safety skin AIRSKIN, the ultrasonic sensor USi, the SafetyEYE as a 3D zone monitoring device and the laser scanner MicroScan3.

Research question 3: How can a safety system be designed to comply with the existing standards and regulations and to guarantee a safe environment for the operator in a specific HRC installation like in the smart factory at Scania?
In this thesis work, a safety system is designed which complies with ISO/TS 15066 and uses certified safety technologies. Four safety system designs with a laser scanner as a presence sensing device and a collaborative robot, the KUKA lbr iiwa, are proposed. These are: ‘stop’, ‘speed reduction with stop zone’, ‘speed reduction with smaller stop zone’ and ‘speed reduction without stop zone’. When the operator approaches the robot before it is intended in the process sequence, the following actions are taken: the system either stops the robot motion, reduces the robot’s speed and then triggers a stop, or only activates a stop after a collision between the robot and the human has occurred. In system 3 the size of the stop zone is decreased by combining the speed and separation monitoring principle with the power- and force-limiting safeguarding mode. By using the collision detection function of the collaborative robot and by reducing the speed in compliance with the biomechanical limits for upper arm collision, a smaller stop zone is achieved. To eliminate the risk of head collision even if the operator bends his head, the intrusion distance is set to 300 mm, which accounts for a bent head.

Through a feasibility study, the differences between theoretical design and practical design are identified. The theoretical designs were adjusted towards feasible solutions. The implementation difficulties are: speed reduction via I/O module, motion resume via smartPAD after stop zone violation, hand-guiding only possible without an active stop zone and static safety zones. For every design, the behavior of the system as such and in combination with the demonstrator case is defined. Safety system design 3 is chosen to be implemented for the demonstrator case and is adapted to Scania’s needs in the risk assessment. The tailored parts for Scania are for example that the base rotation can only start with an active stop zone.

Research question 4: How can this safety system be configured to integrate internal safety functions and external safety equipment?
This research question is answered in chapter 5, where the final safety system design is implemented for the demonstrator as a proof of concept. A risk assessment is performed to reduce all risks to an acceptable level; it leads to the final safety system design in three iterations. The safety zones are calculated according to the protective separation distance in ISO/TS 15066. In total, three ESM states with up to three AMFs are defined in the safety configuration of the robot. The AMFs used are cartesian velocity monitoring, safe input, and collision detection. A circuit diagram is developed to connect the laser scanner to the robot controller.

6.2 Limitations and challenges

The KUKA lbr iiwa is a widely used robot for HRC installations. Nevertheless, this lightweight robot has a limited payload and is therefore not sufficient in applications with heavy parts. Several methods have been developed in research to make industrial heavy-duty robots safe enough for close collaboration with humans; however, certified technologies are not available yet. This thesis work presented a safety system for a collaborative lightweight robot. For heavy-duty robots, even stricter safety measures must be taken to make use of their high payload, which is excellently suited for the automotive industry.

The implemented safety system is limited by the functionality of the laser scanner. The speed limits for the PFL method are obtained with the HRC Guide app provided by KUKA. The suggested speed values are an approximation, and compliance with the biomechanical limits can only be guaranteed by measuring the occurring forces for every contact scenario.

The challenges regarding the implementation of the safety system in this thesis work were: to understand the functionality of the robot and the laser scanner, to develop workaround solutions for the non-feasible features of the proposed safety system designs, to connect the laser scanner to the robot through a safety relay, and to find accurate values for the robot stopping time and stopping distance.

6.3 Recommendations and future work

In chapter 5.7 the safety system is analyzed, and improvement suggestions are pointed out to increase the safety and efficiency of the developed safety system. These are:

- To achieve PL D, the speed reduction needs to be realized through the KUKA safety interfaces. Then the reaction ‘brake’ can be used to slow down the robot to a predefined speed. There are two options: either a second laser scanner MicroScan3 Core I/O with a second OSSD pair is connected to X11, or a laser scanner with network-based safety outputs is connected to X66. A possible device is the MicroScan3 Pro with PROFINET, an Ethernet-based network for safety-oriented data communication.
- A safety PLC is necessary to enable an automatic motion restart after a stop zone violation and to switch between monitoring cases, e.g. to deactivate the stop zone around the pedal car. A Simatic S7-1500F with CPU 151xF from Siemens is suggested.
- The operational efficiency of the system can be further improved by having multiple speed reduction zones and by reducing the stop zone. The latter can be achieved through more accurate values of the robot stopping time and stopping distance, which can be obtained by measuring them directly at the demonstrator case.
- A more dynamic behavior of the safety system could be created with a SafetyEYE. Because of its 3D monitoring area, an intrusion distance does not have to be considered in the SSM calculations. Nevertheless, the device is not considered reliable enough for use in the manufacturing industry. Therefore, the ultrasonic sensor from Mayser should be tested. Because the sensor is mounted on the robot flange, a safety zone around the gripper is created. In this demonstrator case, this could reduce most clamping risks because the safety zone constantly moves with the motion of the tool.

The suggestions above are short-term goals. The long-term goal is to design a dynamic safety system which reacts based on the movements of the operator and chooses the safest but also most efficient reaction. This can be achieved through:

- The trajectory of the robot is adapted when the protective field is infringed. This requires extensive path re-planning logic. A first step could be to retract the robot to another task which is executed at a larger distance from the operator.
- The safety zones move with the position of the outermost point of the workpiece, and the safety zones are recalculated based on the robot speed at the current point in time.
- The safety zones are visualized with a projector. In contrast to static safety zones, for a dynamic safety system the projector must adapt them based on the robot movement.

Recently, KUKA developed a new software package, Safety Visualization, to visualize the cartesian monitoring spaces which are defined in the safety configuration in 3D [22]. The user can see the active ESM states, the active tool, safety tool spheres, protected spaces, and workspaces. This tool supports the discussion with the safety engineers and can be used to verify the safety concept. Furthermore, KUKA launched a smaller and cheaper version of the KUKA lbr iiwa this year, the KUKA iisy, with a payload of 3 kg for smaller products.

Even though the safety of the operator is of the highest importance, overly strict measures reduce operational efficiency. If a worker triggers the safety measures, the robot either slows down or suddenly stops its operation. As a result, productivity decreases and even up- and downstream workstations might be affected negatively. If HRC is not cost-efficient for companies, fewer HRC cases are implemented and, subsequently, the operators cannot benefit from improved ergonomic working conditions. A balance between the safety measures and their effect on operational efficiency must be found. In particular, reasonably foreseeable misuse according to ISO 10218-2 should not be the main focus when choosing appropriate safety measures. It should also be considered how high the risks are with the current stations and what they would be with an implemented HRC case.

ISO/TS 15066 was released in 2016 and builds on ISO 10218, which was released in 2011. Since 2016, more technologies for HRC have been introduced and first industrial implementation experiences have been gained. The standards need to be defined in more detail to create a clear understanding of the safety requirements for companies. Before ISO introduces a standard for collaborative operations, the current ISO/TS 15066 will be adapted to the new best practices. To sum up, the one safety system for all applications does not exist. Every safety concept is individual, based on the process steps and the production environment.


7 References

[1] Petruck, H., Faber, M., Giese, H., Geibel, M., Mostert, S., Usai, M., Mertens, A., and Brandl, C. 2019. Human-Robot Collaboration in Manual Assembly – A Collaborative Workplace. Advances in Intelligent Systems and Computing 825, 21–28.

[2] E. Helms, R. D. Schraft, and M. Hagele, Eds. 2002. rob@work: Robot assistant in industrial environments. Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication.

[3] Krüger, J., Lien, T. K., and Verl, A. 2009. Cooperation of human and machines in assembly lines. CIRP Annals 58, 2, 628–646.

[4] Bender, M., Braun, M., Rally, P., and Scholtz, O. Lightweight robots in manual assembly - best to start simply!: Examining companies' initial experiences with lightweight robots. In Report.

[5] Marvel, J. 2015. Characterizing Task-Based Human–Robot Collaboration Safety in Manufacturing. IEEE Transactions on Systems, Man, and Cybernetics: Systems 45, 2, 260–275.

[6] Pilz, T. From vision to reality - Human-robot collaboration over the years. In The 8th International Conference on the Safety of Industrial Automated Systems (SIAS).

[7] Asimov, I. 1963. I, Robot.

[8] Pedrocchi, N., Vicentini, F., Malosio, M., and Tosatti, L. M. 2013. Safe human-robot cooperation in an industrial environment. International Journal of Advanced Robotic Systems 10.

[9] ISO 10218-1:2011. 2011. Robots and robotic devices - Safety requirements for industrial robots. Part 1: Robots.

[10] ISO 10218-2:2011. 2011. Robots and robotic devices - Safety requirements for industrial robots. Part 2: Robot systems and integration.

[11] ISO/TS 15066:2016. 2016. Robots and robotic devices - Collaborative robots.

[12] Aaltonen, I., Salmi, T., and Marstio, I. 2018. Refining levels of collaboration to support the design and evaluation of human-robot interaction in the manufacturing industry. Procedia CIRP 72, 93–98.

[13] Wang, X., Seira, A., and Wang, L. 2018. Classification, Personalised Safety Framework and Strategy for Human-Robot Collaboration.

[14] Behrens, R., Saenz, J., Vogel, C., Elkmann, N., Ed. 2015. Upcoming Technologies and Fundamentals for Safeguarding All Forms of Human-Robot Collaboration.

[15] BMW Group. 2013. Innovative human-robot cooperation in BMW Group Production. https:// www.press.bmwgroup.com/global/article/detail/T0209722EN/innovative-human-robot-cooperation-in- bmw-group-production?language=en. Accessed 20 February 2019.

[16] Audi AG. 2017. Human robot cooperation: KLARA facilitates greater diversity of versions in production at Audi. https://www.audi-mediacenter.com/en/press-releases/human-robot-cooperation-klara- facilitates-greater-diversity-of-versions-in-production-at-audi-9179. Accessed 20 February 2019.

[17] KUKA AG. 2017. HRC systems in production at BMW Dingolfing. https://www.kuka.com/en-de/press/ news/2017/06/bmw-dingolfing. Accessed 20 February 2019.

[18] Awad, R., Fechter, M., and van Heerden, J. 2018. Integrated risk assessment and safety consideration during design of HRC workplaces. IEEE International Conference on Emerging Technologies and Factory Automation, ETFA.

[19] Michalos, G., Makris, S., Tsarouchi, P., Guasch, T., Kontovrakis, D., and Chryssolouris, G. 2015. Design Considerations for Safe Human-robot Collaborative Workplaces. Procedia CIRP 37, 248–253.


[20] Vysocky, A. and Novak, P. 2016. HUMAN – ROBOT COLLABORATION IN INDUSTRY. MM SJ 2016, 02, 903– 906.

[21] Machinery Directive 2006/42/EC.

[22] KUKA Roboter GmbH. 2019. KUKA Sunrise.OS 1.16, KUKA Sunrise.Workbench 1.16. Operating and Programming Instructions for System Integrators.

[23] ISO 12100:2010. 2010. Safety of machinery - General principles for design - Risk assessment and risk reduction (ISO 12100:2010).

[24] ISO 13849-1:2016. 2016. Safety of machinery - Safety-related parts of control systems. Part 1: General principles for design (ISO 13849-1:2015).

[25] IEC 62061:2005. Safety of machinery - Functional safety of safety-related electrical, electronic and programmable electronic control systems.

[26] Salmi, T., Väätäinen, O., Malm, T., Jari Montonen, and Ilari Marstio, Eds. 2014. Meeting New Challenges and Possibilities with Modern Robot Safety Technologies. Enabling Manufacturing Competitiveness and Economic Sustainability. Springer International Publishing.

[27] Marvel, J. and Norcross, R. 2017. Implementing Speed and Separation Monitoring in Collaborative Robot Workcells. Robotics and Computer-Integrated Manufacturing 44, 144–155.

[28] Belingardi, G., Heydaryan, S., and Chiabert, P. 2017. Application of speed and separation monitoring method in human-robot collaboration: industrial case study.

[29] Bdiwi, M., Pfeifer, M., and Sterzing, A. 2017. A new strategy for ensuring human safety during various levels of interaction with industrial robots. CIRP Annals 66, 1, 453–456.

[30] Rosenstrauch, M. and Krüger, J. 2017. Safe human-robot-collaboration-introduction and experiment using ISO/TS 15066. In , 740–744. DOI=10.1109/ICCAR.2017.7942795.

[31] Marvel, J. 2013. Performance Metrics of Speed and Separation Monitoring in Shared Workspaces. IEEE Transactions on Automation Science and Engineering 10, 2, 405–414.

[32] Long, P., Chevallereau, C., Chablat, D., and Girin, A. 2018. An industrial security system for human-robot coexistence. Industrial Robot 45, 2, 220–226.

[33] Mohammed, A., Schmidt, B., and Wang, L. 2017. Active collision avoidance for human–robot collaboration driven by vision sensors. International Journal of Computer Integrated Manufacturing 30, 9, 970–980.

[34] Vogel, C., Walter, C., and Elkmann, N. 2013. A projection-based sensor system for safe physical human-robot collaboration. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 3 - 7 Nov. 2013, Tokyo, Japan; conference digest. IEEE, Piscataway, NJ, 5359–5364. DOI=10.1109/IROS.2013.6697132.

[35] Christian Vogel, Christoph Walter, and Norbert Elkmann. 2017. Safeguarding and Supporting Future Human-robot Cooperative Manufacturing Processes by a Projection- and Camera-based Technology. Procedia Manufacturing 11, 39–46.

[36] Rosenstrauch, M., Krüger, J., and J. Pannen, T. 2018. Human robot collaboration – using kinect v2 for ISO/TS 15066 speed and separation monitoring.

[37] Du, G., Long, S., Li, F., and Huang, X. 2018. Active Collision Avoidance for Human-Robot Interaction With UKF, Expert System, and Artificial Potential Field Method. Frontiers in Robotics and AI 5.


[38] Fraunhofer Institute for Machine Tools and Forming Technology IWU. Interactive control to guide industrial robots. https://www.iwu.fraunhofer.de/en/trade-fairs-and-events/fraunhofer-iwu-at-the- hannover-messe-2019/mrk.. Accessed 8 April 2019.

[39] Fraunhofer Institute for Machine Tools and Forming Technology IWU. 2019. More than pure gesture control for production. https://www.iwu.fraunhofer.de/content/dam/iwu/en/documents/Events/ Infosheet-More-than-pure-gesture-control-for-production.. Accessed 8 April 2019.

[40] Safeea, M. and Neto, P. 2019. Minimum distance calculation using laser scanner and IMUs for safe human-robot interaction. Robotics and Computer-Integrated Manufacturing 58, 33–42.

[41] Rajnathsing, H. and Li, C. 2018. A neural network based monitoring system for safety in shared workspace human-robot collaboration. Industrial Robot 45, 4, 481–491.

[42] Roni-Jussi Halme, Minna Lanz, Joni Kämäräinen, Roel Pieters, Jyrki Latokartano, and Antti Hietanen. 2018. Review of vision-based safety systems for human-robot collaboration. Procedia CIRP 72, 111–116.

[43] Shin, H., Seo, K., and Rhim, S. 2018. Allowable Maximum Safe Velocity Control based on Human-Robot Distance for Collaborative Robot. In 2018 15th International Conference on Ubiquitous Robots (UR). IEEE, 401–405. DOI=10.1109/URAI.2018.8441887.

[44] Albu‐Schäffer, A., Haddadin, S., Ott, C., Stemmer, A., Wimböck, T., and Hirzinger, G. 2007. The DLR lightweight robot: design and control concepts for robots in human environments. Industrial Robot 34, 5, 376–385.

[45] Fritzsche, M., Elkmann, N., and Schulenburg, E. 2011. Tactile sensing. In Proceedings of the 6th international conference on Human-robot interaction. ACM, New York, NY, 139. DOI=10.1145/1957656.1957700.

[46] J. Kim, A. Alspach, and K. Yamane, Eds. 2015. 3D printed soft skin for safe human-robot interaction. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

[47] Cirillo, A., Ficuciello, F., Natale, C., Pirozzi, S., and Villani, L. 2016. A Conformable Force/Tactile Skin for Physical Human–Robot Interaction. IEEE Robotics and Automation Letters 1, 1, 41–48.

[48] Blue Danube Robotics GmbH. AIRSKIN. https://www.bluedanuberobotics.com/airskin/.

[49] Mayser GmbH & Co. KG. USi Ultrasonic sensors. https://www.mayser.com/en/safetytechnology/usi- ultrasonic-sensors. Accessed 25 May 2019.

[50] Pilz GmbH & Co. KG. Safe camera system SafetyEYE. https://www.pilz.com/en-DE/eshop/ 00106002207042/SafetyEYE-Safe-camera-system. Accessed 27 May 2019.

[51] S. Augustsson, L. G. Christiernin, and G. Bolmsjo, Eds. 2014. Human and Robot Interaction based on Safety Zones in a Shared Work Environment. 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[52] SICK AG. Operating instructions microScan3 Core I/O. Safety laser scanner.

[53] Gopinath, V. and Johansen, K. 2019. Understanding situational and mode awareness for safe human- robot collaboration: case studies on assembly applications. Production Engineering 13, 1.

[54] Scania AB. 2018. General risk assessment.

[55] Industrial Path Solutions Sweden AB. Industrial Path Solutions. https://industrialpathsolutions.se/. Accessed 18 May 2019.

[56] KUKA Roboter GmbH. 2016. LBR iiwa 7 R800, LBR iiwa 14 R820. Specification.

[57] SICK AG. 2018. Safety beyond limits. Safety laser scanners for efficient processes.


[58] SICK AG. Datasheet microScan3 Core MICS3-AAAZ55AZ1P01.

[59] KUKA Roboter GmbH. 2016. Option Media Flange V7. for product family LBR iiwa.

[60] KUKA AG. KUKA HRC Guide: Our latest collaborative robot application. https://www.kuka.com/de-de/ produkte-leistungen/robotersysteme/software/applikationssoftware/hrc-guide-app. Accessed 18 May 2019.

[61] SICK AG. 2016. Operating instructions UE10-2FG and UE12-2FG. Safety relay.


Appendix A – Risk assessment and layout

Risk evaluation see layout risk reduction 1 Implementation Task Risk / What can happen? Risk cause low medium high Proposed action decision Number human approaches fenced area restriciting access to the x 3 Base collection crushing moving robot rack yes fenced area OR laser scanner stops Move to collaboration human approaches x robot motion when human is in stop 4 point collision to body parts moving robot area laser scanner 5 Washer collection no risks workspace monitoring (defined Hand-guiding (with crushing of fingers between hand-guiding with one x moving range to table) in safety 6 enabling device) the robot and the table hand configuration not decided 7 Washer assembly no risks enabling device (because a trigger base rotation by use a different confirmation task confirmation by the accident when possibility: buttons on enabling human leaving the stop x Task confirmation of assembling the washer device OR human leaves stop zone zone reduces the washer assembly with force assembly with too much OR confirmation button outside stop productivity of the HRC 8 detection base rotation too early force zone case) Task confirmation of washer assembly with enabling device (user trigger base rotation by x no action because human actively button or double press accident earlier in confirms by pressing a button, press 9 enabling button) base rotation too early handguiding activity user button for confirmation Task confirmation of washer assembly with robot movement starts before enabling device (user human moved away from human forgot that task x button or double press robot and robot hits standing confirmation button 10 enabling button) operator restarts robot movement limit robot speed to ? mm/s yes Task confirmation of delay of movement start by 4 washer assembly with robot movement starts before seconds when human pressed enabling device (user human moved away from x confirmation button to ensure button or double press robot and robot hits standing robot starts movement to human is at least in warning zone 11 enabling button) operator fast when rotation starts yes human activates motion start with task confirmation AND inherently x safe design (next process step is seat collection = leaving area) AND 12 Base rotation collision to body parts operator bending down limited speed yes 13 Seat collection no risks no risks because robot is in 14 Seat assembly stopping position Task confirmation of seat human forgot that task assembly with enabling robot movement starts before confirmation button device (user button or human moved away from restarts robot movement, x delay of movement start by 4 double press enabling robot and robot hits standing robot starts movement to seconds and limited robot speed to ? 15 button) operator fast mm/s yes human stands in motion limited speed, collision detection, x 16 Seat positioning collision to body parts path of robot laser scanner for this area laser scanner operator moves an extremity between robot x limited speed, collision detection, 17 Seat positioning crushing and pedal car laser scanner for this area laser scanner 18 Seat screwing no risks Table A1: Risk assessment 1

Figure A1: Layout for risk assessment


Risk evaluation see layout risk reduction 2

Implementation Risk status Task Risk / What can happen? Risk cause low medium high Proposed action decision Number risk eliminated in risk x 1 assessment 1 Permanent risk robot drops part power shortage safe gripper, safety shoes (PPE) yes risk NOT extended fenced area until the end eliminated in human can approach of the station, no fences in front of x risk assessment moving robot because the the pedal car because the pedal car 2 1 Base collection collision to body parts fences end exits the station to the right side yes risk eliminated in risk human approaches x 3 assessment 1 Base collection crushing moving robot fenced area restriciting access to the rackyes human can exit risk NOT monitored space towards eliminated in the robot because the x risk assessment Move to collaboration space next to the table is 4 1 point collision to body parts not fenced extend fenced area to the table yes human is already in stop zone when laser scanner is activated (currently when robot exits robot x activate laser scanner earlier: when zone) and the robot robot is the protective separation Move to collaboration needs sufficient stopping distance away from the table end 5 new risk point collision to body parts time (app. at P2) yes head collision when yes (smaller stop zone protective separation accomplished by allowing distance is calculated with x collision up to upper arm. Move to collaboration lower C value (allowing up to stop zone calculated with C value of Reduced the speed vs. 6 new risk point upper arm collision) human bends head 300mm (bended head) smaller C value) risk NOT eliminated in workspace monitoring (defined x risk assessment Hand-guiding (with crushing of fingers between hand-guiding with one moving range to table) in safety 7 1 enabling device) the robot and the table hand configuration not decided enabling device (because a trigger base rotation by use a different confirmation task confirmation by the Task confirmation of accident when possibility: buttons on enabling human leaving the stop x better action washer assembly with assembling the washer device OR human leaves stop zone zone reduces the identified to force detection assembly with too much OR confirmation button outside stop productivity of the HRC 8 eliminate risk (programmed) base rotation too early force zone case) Task confirmation of risk eliminated washer assembly with trigger base rotation by no action because human actively x in risk enabling device (user accident earlier in confirms by pressing a button, press 9 assessment 1 button) base rotation too early handguiding activity user button for confirmation Task confirmation of robot movement starts before limit rotational speed of base to 109 washer assembly with human moved away from human forgot that task mm/s (compliance with x enabling device (user robot and robot hits standing confirmation button biomechanical limit for chest 10 updated action button) operator restarts robot movement collision) yes delay of movement start by 4 Task confirmation of robot movement starts before seconds when human pressed risk eliminated washer assembly with human moved away from robot starts movement to x confirmation button to ensure in risk enabling device (user robot and robot hits standing fast before human can human is at least in warning zone 11 assessment 1 button) operator leave area when rotation starts yes human activates motion start with task confirmation AND inherently risk NOT safe design (next process step is seat x eliminated in collection = leaving area) AND 
risk assessment limited speed AND delay of 12 1 Base rotation head collision operator bending down movement restart by 4 seconds yes activate stop zone after delay time and deactivate when rotation is finished OR testing results show that x in the process sequence rotation is always finsihed OR operator should base rotation not finished wait by himself until rotation is before human reenters finished (signaled by blue light = 13 new risk Base rotation head collision with seat operator input needed) to be decided human forgot that task risk NOT Task confirmation of robot movement starts before confirmation button eliminated in seat assembly with human moved away from restarts robot movement, x delay of movement start by 4 risk assessment enabling device (user robot and robot hits standing robot starts movement to seconds and limited robot speed to 14 1 button) operator fast 109 mm/s yes

eliminate close contact between better action x human and pedal car by moving seat identified to collision to body parts (head human stands in motion screwing operation to collaborative 15 eliminate risk Seat positioning collision not permitted) path of robot tasks (after seat assembly) yes eliminate close contact between better action operator moves an x human and pedal car by moving seat identified to extremity between robot screwing operation to collaborative 16 eliminate risk Seat positioning crushing and pedal car tasks (after seat assembly) yes operator moves an extremity between robot x limited speed, collision detection, 17 Seat positioning crushing and pedal car laser scanner for this area laser scanner 18 Seat screwing no risks Table A2: Risk assessment 2


Figure A2: Layout for risk assessment 2


Risk evaluation see final layout

Implementation Risk status Task Risk / What can happen? Risk cause low medium high Proposed action decision Number extended fenced area until the end risk eliminated human can approach of the station, no fences in front of x in risk moving robot because the the pedal car because the pedal car 1 assessment 2 Base collection collision to body parts fences end exits the station to the right side yes human can exit monitored space towards risk eliminated the robot because the x in risk space next to the table is 2 assessment 2 Move to collaboration pointcollision to body parts not fenced extend fenced area to the table yes human is already in stop zone when laser scanner is activated (currently when robot exits robot x activate laser scanner earlier: when risk eliminated zone) and the robot robot is the protective separation in risk Move to collaboration needs sufficient stopping distance away from the table end, 3 assessment 2 point collision to body parts time realise with workspace monitoring yes risk eliminated in risk Move to collaboration x stop zone calculated with C value of 4 assessment 2 point head collision human bends head 300mm (bended head) yes no, not needed because: 1) collision detection active 2) risk NOT part yields in gripper during x eliminated in collision 3) manually push risk assessment Hand-guiding (with crushing of fingers between hand-guiding with one safety spheres and protected space necessary to move the robot 5 2 enabling device) the robot and the table hand monitoring in safety configuration to the table Task confirmation of robot movement starts before limit rotational speed of base to 109 risk eliminated washer assembly with human moved away from human forgot that task mm/s (compliance with x in risk enabling device (user robot and robot hits standing confirmation button biomechanical limit for chest 6 assessment 2 button) operator restarts robot movement collision) yes Task confirmation of robot movement starts before delay of movement start by 4 risk eliminated washer assembly with human moved away from robot starts movement to seconds to make sure that human is x in risk enabling device (user robot and robot hits standing fast before human can in warning zone before rotation 7 assessment 2 button) operator leave area starts yes yes, all. laser scanner has to be additionally activated to ensure as a consistent safety concept. The robot should never start when human in human does not leave the stop zone to not confuse stop zone before the x operator and to eliminate risk delay time elapses and human activates motion start with of head collision. The delay the base rotation starts, task confirmation AND delay of time of 4 seconds prevents a risk NOT then operator bending movement restart by 4 seconds AND frequent activation of stops. 
eliminated in down as part of forseable activation of laser scanner after The speed is limited to the risk assessment reasonable misuse and delay time and deactivate when speed of the respective safety 8 2 Base rotation head collision gets hit rotation is finished zone risk NOT eliminated in base rotation not finished activate stop zone after the delay x risk assessment before human reenters time and deactivate when rotation is 9 2 Base rotation head collision with seat finished yes human forgot that task robot movement starts before confirmation button Task confirmation of seat human moved away from restarts robot movement, x delay of movement start by 4 risk moved to assembly with enabling robot and robot hits standing robot starts movement to seconds and limited robot speed to 10 number 13 device (user button) operator fast 109 mm/s yes

eliminate close contact between risk eliminated x human and pedal car by moving seat in risk human stands in motion screwing operation to collaborative 11 assessment 2 Seat positioning collision to body parts path of robot tasks (after seat assembly) yes

eliminate close contact between risk eliminated operator moves an x human and pedal car by moving seat in risk extremity between robot screwing operation to collaborative 12 assessment 2 Seat positioning crushing and pedal car tasks (after seat assembly) yes human forgot that task robot movement starts before confirmation button Task confirmation of seat human moved away from restarts robot movement, x mounting with enabling robot and robot hits standing robot starts movement to delay of movement start by 4 13 new risk device (user button) operator fast seconds before scanner is reactivated yes Table A3: Risk assessment 3


Appendix B – Stopping time and stopping distance of KUKA lbr 14

Figure B1: Example of robot stopping times for axis 1

Axis | Phi (angle of rotation) in ° | ts (stopping time) in s | L from TCP in mm | Travel of TCP (stopping distance) in mm

100% extension, 33% payload, 33% override
A1 | 3 | 0.25 | 820 | 42.9
A2 | 4 | 0.3 | 820 | 57.2
A3 | 3 | 0.25 | 610 | 31.9
A4 | 3 | 0.22 | 405 | 21.2
Worst-case stopping time: 0.3 s; total stopping distance: 153.3 mm

100% extension, 33% payload, 50% override
A1 | 4 | 0.3 | 820 | 57.2
A2 | 7 | 0.35 | 820 | 100.2
A3 | 5 | 0.3 | 610 | 53.2
A4 | 4 | 0.28 | 405 | 28.3
Worst-case stopping time: 0.35 s; total stopping distance: 238.9 mm

100% extension, 33% payload, 66% override
A1 | 7 | 0.32 | 820 | 100.2
A2 | 10 | 0.4 | 820 | 143.1
A3 | 9 | 0.33 | 610 | 95.8
A4 | 6 | 0.32 | 405 | 42.4
Worst-case stopping time: 0.4 s; total stopping distance: 381.5 mm

100% extension, 33% payload, 100% override
A1 | 17 | 0.48 | 820 | 243.3
A2 | 24 | 0.6 | 820 | 343.5
A3 | 18 | 0.41 | 610 | 191.6
A4 | 11 | 0.38 | 405 | 77.8
Worst-case stopping time: 0.6 s; total stopping distance: 856.2 mm

Table B1: Calculations of stopping time and stopping distance


Appendix C – Java code of the robot application

public class RobotApplication extends RoboticsAPIApplication {

@Inject private LBR robot;

@Inject private Controller control;

private BeckhoffModuleIOGroup miEK1100;

private MediaFlangeIOGroup miMediaFlange;

private AbstractFrame miMundo;

public IApplicationControl applicationControl;

int ESMstate;

@Inject private IStatusController statusController;

boolean noStop;

@Inject @Named("Gripper") private Tool miGripper;

@Inject @Named("GripperLoad") private Tool miGripperload;

public Frame handGuidingTeachingPoint;

public void initialize() {
    miMundo = World.Current.getRootFrame();
    robot.detachAll();
    miGripperload.attachTo(robot.getFlange());
    miEK1100 = new BeckhoffModuleIOGroup(control);
    miMediaFlange = new MediaFlangeIOGroup(control);

    handGuidingTeachingPoint = getApplicationData().getFrame("/AGVDelivery/P3").copyWithRedundancy();
}

private void doSmartFactoryDemoNew() {
    robot.setESMState("2"); // deactivate scanner
    ESMstate = 2;
    miMediaFlange.setLEDGreen(false);
    miMediaFlange.setLEDBlue(false);
    miMediaFlange.setLEDRed(false);
    noStop = true; // reset of stop zone variable

    // Status listener for violation of stop zone to activate red LED
    IStatusListener stopZoneViolation = new IStatusListener() {
        @Override
        public void onStatusSet(StatusEvent statusEvent) {
            getLogger().info("Stop zone violation");
            miMediaFlange.setLEDRed(true);
            miMediaFlange.setLEDGreen(false);
            miMediaFlange.setLEDBlue(false);
            noStop = false; // variable to keep red LED until play button is pressed
        }

        @Override
        public void onStatusCleared(StatusEvent statusEvent) {
            getLogger().info("No stop zone violation");
        }
    };
    statusController.addStatusListener(stopZoneViolation, DefaultStatusGroups.SAFETY_STOP);

    // Status listener for application running to deactivate red LED
    IStatusListener applicationRunning = new IStatusListener() {
        @Override
        public void onStatusSet(StatusEvent statusEvent) {
            getLogger().info("Application running");
            miMediaFlange.setLEDRed(false);
            miMediaFlange.setLEDGreen(true);
            miMediaFlange.setLEDBlue(false);
            noStop = true;
        }

        @Override
        public void onStatusCleared(StatusEvent statusEvent) {
            getLogger().info("Application paused");
            miMediaFlange.setLEDRed(true);
            miMediaFlange.setLEDGreen(false);
            miMediaFlange.setLEDBlue(false);
        }
    };
    statusController.addStatusListener(applicationRunning, DefaultStatusGroups.APPLICATION_RUNNING);

    // Speed reduction listener
    BooleanIOCondition WarningFieldViolation = new BooleanIOCondition(miEK1100.getInput("WarningField"), true);
    IAnyEdgeListener WarningFieldViolationListener = new IAnyEdgeListener() {
        @Override
        public void onAnyEdge(ConditionObserver conditionObserver, Date time, int missedEvents, boolean conditionValue) {
            if ((conditionValue == true) && (ESMstate == 1 || ESMstate == 3)) { // speed reduction only active in ESM states 1 and 3
                getLogger().info("Speed reduction requested");
                getApplicationControl().setApplicationOverride(0.33d);
                miMediaFlange.setLEDRed(true); // yellow LED
                miMediaFlange.setLEDGreen(true);
                miMediaFlange.setLEDBlue(false);
            } else if ((conditionValue == false) && (ESMstate == 1 || ESMstate == 3)) {
                if (noStop == false) { // stop zone was activated, red LED remains
                    miMediaFlange.setLEDRed(true);
                    miMediaFlange.setLEDGreen(false);
                    miMediaFlange.setLEDBlue(false);
                } else { // full speed, change to green LED
                    getLogger().info("Speed reduction reset");
                    getApplicationControl().setApplicationOverride(1.0d);
                    miMediaFlange.setLEDRed(false);
                    miMediaFlange.setLEDGreen(true);
                    miMediaFlange.setLEDBlue(false);
                }
            }
        }
    };
    ConditionObserver WarningFieldViolationObserver = getObserverManager().createConditionObserver(WarningFieldViolation, NotificationType.MissedEvents, WarningFieldViolationListener);
    WarningFieldViolationObserver.enable();

    // Declaration of force conditions
    ForceCondition partDetection = ForceCondition.createNormalForceCondition(miGripper.getDefaultMotionFrame(), CoordinateAxis.Y, 6);
    ForceCondition touchUpX = ForceCondition.createNormalForceCondition(miGripperload.getDefaultMotionFrame(), CoordinateAxis.Z, 5);
    ForceCondition touchUpY = ForceCondition.createNormalForceCondition(miGripperload.getDefaultMotionFrame(), CoordinateAxis.X, 5);
    ForceCondition handDetection = ForceCondition.createNormalForceCondition(miGripperload.getDefaultMotionFrame(), CoordinateAxis.Y, 10);
    CartesianSineImpedanceControlMode controlLissajous = CartesianSineImpedanceControlMode.createLissajousPattern(CartPlane.XZ, 2, 1, 10);
    HandGuidingMotion manual = new HandGuidingMotion();

    robot.detachAll();
    miGripper.attachTo(robot.getFlange()); // attach the gripper to the robot flange

    // Move to HOME position and close the gripper
    miGripper.move(ptp(Math.toRadians(90), 0, 0, Math.toRadians(90), 0, Math.toRadians(-90), Math.toRadians(45)).setJointVelocityRel(0.5));
    miEK1100.setOpenGripper(true);
    miEK1100.setCloseGripper(false);
    miMediaFlange.setLEDGreen(true);
    miMediaFlange.setLEDBlue(false);
    miMediaFlange.setLEDRed(false);

    // Move to the first point in the delivery box
    miGripper.move(ptp(getApplicationData().getFrame("/AGVDelivery/P1A")).setJointVelocityRel(1));
    ThreadUtil.milliSleep(500);

    // Start moving to the second point in the delivery box and interrupt when touching the part
    miGripper.move(lin(getApplicationData().getFrame("/AGVDelivery/P1B")).setCartVelocity(70).breakWhen(partDetection));
    Frame partPosition = robot.getCurrentCartesianPosition(miGripper.getDefaultMotionFrame());

    // Move back
    miGripper.move(linRel(0, 0, -30).setCartVelocity(500));
    Frame pickUpPosition = getApplicationData().getFrame("/AGVDelivery/P1C").copyWithRedundancy();
    double x, y, alpha, beta, gamma;
    double calibX, calibY, calibAlpha;

    // Calculate pickup position as the relative distances from the touching point to the pickup point
    calibX = 18;
    calibY = 52;
    x = partPosition.getX() + calibX;
    y = partPosition.getY() + calibY;
    pickUpPosition.setX(x);
    pickUpPosition.setY(y);

    // Move to the calculated pickup position


    miGripper.move(lin(pickUpPosition).setCartVelocity(500));
    miEK1100.setOpenGripper(false);
    miEK1100.setCloseGripper(true);
    ThreadUtil.milliSleep(200);

    // Move forward and grip piece
    miGripper.move(linRel(0, 0, 49).setCartVelocity(100));
    miEK1100.setOpenGripper(true);
    miEK1100.setCloseGripper(false);

    // Tool change to the tool with the piece grabbed
    miGripper.detach();
    miGripperload.attachTo(robot.getFlange());
    ThreadUtil.milliSleep(300);

    // Move to exit point in box
    miGripperload.move(lin(getApplicationData().getFrame("/AGVDelivery/P2")).setCartVelocity(1000));
    robot.setESMState("1"); // activate laser scanner
    ESMstate = 1;
    if (miEK1100.getWarningField() == true) { // check if human is already in warning zone when ESM 1 is activated
        getLogger().info("Speed reduction requested");
        getApplicationControl().setApplicationOverride(0.33d);
        miMediaFlange.setLEDRed(true); // yellow LED
        miMediaFlange.setLEDGreen(true);
        miMediaFlange.setLEDBlue(false);
    }

    // Move to a half-way point between pickup and hand-over position
    miGripperload.moveAsync(ptp(getApplicationData().getFrame("/AGVDelivery/P25")).setJointVelocityRel(1).setBlendingCart(20));

    // Finish moving to the hand-over position
    miGripperload.moveAsync(ptp(getApplicationData().getFrame("/AGVDelivery/P3")).setJointVelocityRel(0.75));

    // Move to previous ergonomic hand-guiding position
    miGripperload.move(lin(handGuidingTeachingPoint).setCartVelocity(250));
    robot.setESMState("2"); // deactivate laser scanner
    ESMstate = 2;

    // Standby until the operator presses the enabling button on the media flange
    miMediaFlange.setLEDRed(false);
    miMediaFlange.setLEDGreen(false);
    miMediaFlange.setLEDBlue(true);

    // Wait for operator to perform hand-guiding
    miGripperload.move(manual);
    handGuidingTeachingPoint = robot.getCurrentCartesianPosition(miGripperload.getDefaultMotionFrame());

    // Wait for operator acknowledgment of washer assembly
    BooleanIOCondition UserButtonPressedWasher = new BooleanIOCondition(miMediaFlange.getInput("UserButton"), true);
    getObserverManager().waitFor(UserButtonPressedWasher);
    miMediaFlange.setLEDBlue(false);
    miMediaFlange.setLEDGreen(true);

    // Delay of 4 s for the operator to leave the stop zone
    ThreadUtil.milliSleep(4000);


getLogger().info("Reactivating scanner"); robot.setESMState("1"); // activate laser scanner after 4s delay ESMstate = 1; if (miEK1100.getWarningField()== true){ // check if human is already in warning zone when ESM 1 is activated getLogger().info("Speed reduction requested"); getApplicationControl().setApplicationOverride(0.33d); miMediaFlange.setLEDRed(true); // yellow LED miMediaFlange.setLEDGreen(true); // yellow LED miMediaFlange.setLEDBlue(false); }

    // Base rotation
    miGripperload.move(linRel(Transformation.ofDeg(0, 0, 0, -150, 0, 0), miGripperload.getDefaultMotionFrame()).setJointVelocityRel(1));
    robot.setESMState("2"); // deactivate laser scanner when rotation finished
    ESMstate = 2;
    miMediaFlange.setLEDRed(true);
    miMediaFlange.setLEDBlue(true);
    miMediaFlange.setLEDGreen(false);

    // Wait for operator to confirm seat mounting
    BooleanIOCondition UserButtonPressedSeat = new BooleanIOCondition(miMediaFlange.getInput("UserButton"), true);
    getObserverManager().waitFor(UserButtonPressedSeat);
    miMediaFlange.setLEDBlue(false);
    miMediaFlange.setLEDGreen(true);
    ThreadUtil.milliSleep(4000); // delay of 4 s for the operator to leave the stop zone
    getLogger().info("Reactivating scanner");
    robot.setESMState("3"); // activate laser scanner
    ESMstate = 3;
    if (miEK1100.getWarningField() == true) { // check if human is already in warning zone when ESM 3 is activated
        getLogger().info("Speed reduction requested");
        getApplicationControl().setApplicationOverride(0.33d);
        miMediaFlange.setLEDRed(true); // yellow LED
        miMediaFlange.setLEDGreen(true);
        miMediaFlange.setLEDBlue(false);
    }

    // Move to a point above the pedal car axles
    double compensationY = 85;
    miGripperload.moveAsync(ptp(getApplicationData().getFrame("/AGVDelivery/P4")).setJointVelocityRel(1).setBlendingCart(20));
    miGripperload.move(ptp(getApplicationData().getFrame("/AGVDelivery/P5")).setJointVelocityRel(0.8));

    // Descend to a point between the two axles
    miGripperload.move(lin(getApplicationData().getFrame("/AGVDelivery/P8")).setCartVelocity(500));

    // Declare variables for the hole position calculations
    double axis1posX1, axis1posX2, axis1posY1, axis1posY2, axis2posY, currentAngle, rotAngle;
    double posXfinal, posYfinal;
    Frame rotFinal, posFinal;

    // Advance in the X direction until it touches the axle
    ThreadUtil.milliSleep(300);


    miGripperload.move(linRel(0, 0, 100).setCartVelocity(50).breakWhen(touchUpX));
    axis1posX1 = robot.getCurrentCartesianPosition(miGripperload.getDefaultMotionFrame()).getX();
    axis1posY1 = robot.getCurrentCartesianPosition(miGripperload.getDefaultMotionFrame()).getY();
    miGripperload.move(linRel(0, 0, -20).setCartVelocity(250));

    // Move forward in the Y direction and touch the axle in the X direction again
    miGripperload.move(linRel(50, 0, 0).setCartVelocity(250));
    ThreadUtil.milliSleep(300);
    miGripperload.move(linRel(0, 0, 100).setCartVelocity(50).breakWhen(touchUpX));
    axis1posX2 = robot.getCurrentCartesianPosition(miGripperload.getDefaultMotionFrame()).getX();
    axis1posY2 = robot.getCurrentCartesianPosition(miGripperload.getDefaultMotionFrame()).getY();

    // Calculate the rotation angle of the axle
    rotAngle = Math.atan((axis1posX1 - axis1posX2) / (axis1posY1 - axis1posY2));
    getLogger().info("The rotation angle is: " + Math.toDegrees(rotAngle));

    // Adopt the correct angle and move in the Y direction until it touches the axle
    miGripperload.move(lin(getApplicationData().getFrame("/AGVDelivery/P8")).setCartVelocity(100));
    currentAngle = getApplicationData().getFrame("/AGVDelivery/P8").getAlphaRad();
    rotFinal = getApplicationData().getFrame("/AGVDelivery/P8").copyWithRedundancy();
    rotFinal.setAlphaRad(currentAngle - rotAngle);
    miGripperload.move(lin(rotFinal).setCartVelocity(250));
    miGripperload.move(linRel(0, -20, 0).setCartVelocity(250));
    miGripperload.move(linRel(200, 0, 0).setCartVelocity(50).breakWhen(touchUpY));
    axis2posY = robot.getCurrentCartesianPosition(miGripperload.getDefaultMotionFrame()).getY();
    miGripperload.move(linRel(0, 100, 0).setCartVelocity(250));

    // Calculate the position of the assembly hole
    posYfinal = axis2posY + compensationY / Math.cos(rotAngle);
    posXfinal = axis1posX1 + 32 * Math.cos(rotAngle) + (posYfinal - axis1posY1) * Math.tan(rotAngle);
    posFinal = getApplicationData().getFrame("/AGVDelivery/P5").copyWithRedundancy();
    posFinal.setX(posXfinal);
    posFinal.setY(posYfinal);
    posFinal.setAlphaRad(currentAngle - rotAngle);

    // Move to a point above the assembly hole
    miGripperload.move(lin(posFinal).setCartVelocity(250));

    // Descend to the upper surface of the assembly hole, stopping if a collision is detected
    boolean exit = false;
    do {
        IMotionContainer insertionDescent = miGripperload.move(linRel(0, -43, 0).setCartVelocity(50).breakWhen(handDetection));
        IFiredConditionInfo insertionDescentFired = insertionDescent.getFiredBreakConditionInfo();
        if (insertionDescentFired != null) {
            miMediaFlange.setLEDGreen(false);
            miMediaFlange.setLEDRed(true);
            miGripperload.move(lin(posFinal).setCartVelocity(100));
            ThreadUtil.milliSleep(2000);
            miMediaFlange.setLEDRed(false);
            miMediaFlange.setLEDGreen(true);
        } else {
            exit = true;
        }
    } while (exit != true);

    // Finish the insertion
    miGripperload.move(linRel(0, -70, 0).setCartVelocity(20).setMode(controlLissajous));

    // Release the part and return home
    miEK1100.setCloseGripper(true);
    miEK1100.setOpenGripper(false);
    miGripperload.detach();
    miGripper.attachTo(robot.getFlange());
    ThreadUtil.milliSleep(200);
    miGripper.move(linRel(0, 5, 0).setCartVelocity(100));
    miGripper.move(linRel(0, 0, -50).setCartVelocity(100));
    miGripper.move(ptp(Math.toRadians(-90), 0, 0, Math.toRadians(90), 0, Math.toRadians(-90), Math.toRadians(45)).setJointVelocityRel(0.8));

    miGripper.move(ptp(Math.toRadians(90), 0, 0, Math.toRadians(90), 0, Math.toRadians(-90), Math.toRadians(45)).setJointVelocityRel(0.8));
    miMediaFlange.setLEDGreen(false);
}
}
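The listing above contains initialize() and the demonstrator cycle doSmartFactoryDemoNew(), but not the run() method that every RoboticsAPIApplication overrides as its entry point. A minimal sketch, assuming the demonstrator cycle is simply executed once per application start and that the method is added inside the RobotApplication class, could look as follows; it is not part of the original listing.

// Sketch of a possible run() entry point (assumption; omitted in the original listing)
@Override
public void run() {
    doSmartFactoryDemoNew(); // execute one pick, hand-over and assembly cycle
}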


TRITA ITM-EX 2019:156

www.kth.se