
University of Calgary PRISM: University of Calgary's Digital Repository
Graduate Studies
The Vault: Electronic Theses and Dissertations
2020-01-29

Moghaddasi, Shadi

Moghaddasi, S. (2020). Real-time Collision Detection Algorithm for Humanoid Robots (Unpublished master's thesis). University of Calgary, Calgary, AB. http://hdl.handle.net/1880/111593

University of Calgary graduate students retain copyright ownership and moral rights for their thesis. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission.

Downloaded from PRISM: https://prism.ucalgary.ca

UNIVERSITY OF CALGARY

Real-time Collision Detection Algorithm for Humanoid Robots

by

Shadi Moghaddasi

A THESIS SUBMITTED TO THE FACULTY OF GRADUATE STUDIES IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

GRADUATE PROGRAM IN MECHANICAL ENGINEERING

CALGARY, ALBERTA

JANUARY, 2020

© Shadi Moghaddasi 2020

Abstract

Humanoid robots (humanoids) are highly capable of assisting humans and working with them in cluttered and confined environments. However, they are not completely ready to work in close proximity with humans without risking their own safety and the safety of the objects and people around them. Current methods have not been fully successful in preparing humanoids for safe Human Robot Interaction (HRI) because they rely on expensive and fragile equipment and on error-prone techniques. This thesis presents a novel real-time methodology that enables safe close-proximity HRI for all types of humanoids, regardless of their controlling systems. The proposed approach employs signals from the robots' joint motors and data from the computers running the robot to develop a collision detection algorithm. Using this algorithm, humanoids are able to speedily identify the impacted joints during a collision. Experimental results obtained with the humanoid robot Taiko are presented to demonstrate the applicability of the proposed approach.

Preface

This thesis is an original work by the author. No part of this thesis has been previously published.

Acknowledgements

I would like to thank the people who helped me brainstorm ideas, find annoying bugs, and review this thesis. First and foremost, thanks to Dr. Alejandro Ramirez-Serrano for always supporting me, believing in me, and guiding me with his amazing suggestions and comments. I would then like to thank my lab-mates Ashraf, Parastoo, and Marshall for all the insightful discussions that helped me make progress. I would also like to thank my friends, who are always there for me. Last but certainly not least, I thank my father, mother, and sister, who always encourage me to do my best and guide me through difficult situations. Very special thanks to Erfan, without whom the challenges associated with this work would not have been manageable.

To all the knowledge I have gained ...

Table of Contents

Abstract
Preface
Acknowledgements
Dedication
Table of Contents
List of Figures and Illustrations
List of Tables
List of Symbols, Abbreviations and Nomenclature

1 Introduction
  1.1 Motivation
  1.2 Thesis Focus
  1.3 Thesis Outline

2 Literature Review
  2.1 Balanced Walking
    2.1.1 Wheeled Walking
    2.1.2 Pedal Walking
      2.1.2.1 Bipedal (Biped) Walking
      2.1.2.2 Beyond just Balanced Walking
  2.2 Manipulation + Walking
  2.3 Touching Sensibility
  2.4 Safety of a Humanoid Robot

3 Problem Statement
  3.1 Definitions
  3.2 Assumptions
  3.3 Constraints
  3.4 The Inefficiencies in the Current Techniques
  3.5 Proposed Solution
  3.6 Expectations of the Proposed Solution and Contributions

4 Operation Architecture
  4.1 Perception Layer
  4.2 Signal Analysis Mechanism
    4.2.1 Signal Storing Process
    4.2.2 Dynamic Filtering of the Stored Signals
    4.2.3 Replacing Outliers with Modified Data
    4.2.4 Processing Filtered and Stored Values
      4.2.4.1 1st Condition Data Processing
      4.2.4.2 2nd Condition Data Processing
      4.2.4.3 3rd Condition Data Processing
  4.3 Collision Detection Mechanism
  4.4 Operation Architecture Summary

5 Experimental Humanoid Robot Characteristics
  5.1 Humanoid Robot Specifications
  5.2 Experimental Implementation
  5.3 Remarks

6 Experimental Results
  6.1 Experimental Testing Set-up
  6.2 Motor Torque Dynamic Filtering
  6.3 Storing Period Validation
  6.4 Humanoid's Real-time Collision Detection Results
    6.4.1 Collisions on Low-Power Joints
    6.4.2 Collisions on 100-Watt Power Joints
    6.4.3 Collisions on 200-Watt Power Joints
    6.4.4 Complex Robot Motions and Multiple Collision Tests
  6.5 Discussion

7 Conclusions
  7.1 Future Work

Bibliography

List of Figures and Illustrations

1.1 (a) Humanoid Robot NAO and (b) Wheeled Humanoid Robot Justin

3.1 Operation Architecture

4.1 Evaluation Layer
4.2 Robotic joints' sensed torque values for two 200 W and one 100 W Dynamixel Pro motors
4.3 Identification Layer
4.4 Graphical demonstration of the final step of the collision detection

5.1 Dr. Alejandro Ramirez-Serrano standing next to the humanoid robot Taiko
5.2 (a) Taiko physical specifications, (b) front view, and (c) side view
5.3 Schematic diagram of the humanoid's hardware architecture
5.4 Taiko's joint ID map
5.5 Stanley 51-104 rubber mallet

6.1 Using the collision detection mechanism on joint ID 29 with different values of Ws
6.2 Collision detection on joint ID 28 and outlier filtering using m = 10
6.3 Results of the Final Detection Matrix for various values of m for joint ID 28
6.4 Affected joints for the experiment with different values of m
6.5 Collision detection on joint ID 28 and outlier filtering using m = 20
6.6 Collision detection on joint ID 28 and outlier filtering using m = 50

List of Tables

5.1 Control Table for the 200-Watt motors (Dynamixel H54-200-S500-R)
5.2 Control Table for the 100-Watt motors (Dynamixel H54-100-S500-R)
5.3 Control Table for the 20-Watt motors (Dynamixel H42-20-S300-R)

6.1 Estimated impact force applied to the humanoid robot
6.2 Collision detection on the humanoid's 20 W servomotor joints
6.3 Real-time collision detection of Taiko's 100 W servomotor joints
6.4 Real-time collision detection of Taiko's 200 W servomotor joints
6.5 Real-time collision detection of Taiko's servomotor joints in idle state
6.6 Real-time collision detection of Taiko's servomotor joints in moving state

List of Symbols, Abbreviations and Nomenclature

Acronyms

Acronyms Definition

ANN Artificial Neural Network

ATV All Terrain Vehicles

CoM Center of Mass

DARPA Defense Advanced Research Projects Agency

DoF Degree(s) of Freedom

DPC Development Personal Computer

GIWC Gravito-Inertial Wrench Cone

GPU Graphics Processing Unit

HD High-Definition

HRI Human Robot Interaction

ID Identifier

IMU Inertial Measurement Unit

ISS International Space Station

LIPM Linear Inverted Pendulum Model

MAD Median Absolute Deviation

MIT Massachusetts Institute of Technology

MPC Motion Personal Computer

NASA National Aeronautics and Space Administration

OPC Operating Personal Computer

PPC Perception Personal Computer

PSF Plane Segment Finder

PWM Pulse Width Modulation

RGB Red Green Blue

RMP Robotic Mobility Platform

ROS Robot Operating System

W Watts

Functions Definition

$d$ Difference between two values, sets, or matrices

$\mathrm{erfc}^{-1}$ Inverse Complementary Error Function

$MAM$ Difference between Maximum and Minimum values

Matrices Definition

$DJID_j$ Detected Joints' ID for the jth Condition (j = 1, 2, and 3)

$DJc_j$ Detected Joints' Current Readings for the jth Condition (j = 1, 2, and 3)

$DMC_j$ Detection Matrix for the jth Condition (j = 1, 2, and 3)

$dP_i$ Vector of the Difference between desired and current values of Position for Joint $i$

$dP_i^{\max}$ Maximum Difference between desired and current values of Position for Joint $i$

$dV_i$ Vector of the Difference between desired and current values of Velocity for Joint $i$

$dV_i^{\max}$ Maximum Difference between desired and current values of Velocity for Joint $i$

$d\tau_i$ Vector of the Difference between desired and current values of Torque for Joint $i$

$d\tau_i^{\max}$ Maximum Difference between desired and current values of Torque for Joint $i$

$FDJID$ Final Detected Joints' ID

$FDJc$ Final Detected Joints' current readings

$FDM$ Final Detection Matrix

$MAMP_{ci}$ Difference between Maximum and Minimum of the stored current Position values for Joint $i$

$MAMV_{ci}$ Difference between Maximum and Minimum of the stored current Velocity values for Joint $i$

$MAM\tau_{ci}$ Difference between Maximum and Minimum of the stored current Torque values for Joint $i$

$S\hat{\tau}_{ci}$ Vector of the Slopes between Modified Current Torque Values for Joint $i$

$S\hat{\tau}_{ci}^{\max}$ Maximum Slope between Modified Current Torque Values for Joint $i$

Scalars Definition

$W_s$ Window Size

$\Delta t$ Storing Period

$\delta t$ Control Cycle

$\delta t_m$ Control Cycle at iteration $m$

$\epsilon$ Error

Sets Definition

$P_{ci}$ Vector of the Stored Current Values of Position for Joint $i$

$P_{di}$ Vector of the Stored Desired Values of Position for Joint $i$

$S$ Vector of a Set of Values

$s_p$ Vector of the Position Sub-set

$s_v$ Vector of the Velocity Sub-set

$s_\tau$ Vector of the Torque Sub-set

$V_{ci}$ Vector of the Stored Current Values of Velocity for Joint $i$

$V_{di}$ Vector of the Stored Desired Values of Velocity for Joint $i$

$\tau_{ci}$ Vector of the Stored Current Values of Torque for Joint $i$

$\tau_{di}$ Vector of the Stored Desired Values of Torque for Joint $i$

$\hat{\tau}_{ci}$ Vector of the Stored Modified Current Torque Values for Joint $i$

Set Members Definition

$P_{di}^m$ Position Desired Value at iteration $m$ for Joint $i$

$V_{di}^m$ Velocity Desired Value at iteration $m$ for Joint $i$

$\tau_{ci}^k$ Detected Torque Outlier Values

$\tau_{di}^m$ Torque Desired Value at iteration $m$ for Joint $i$

$\hat{\tau}_{ci}^m$ Modified Current Torque Value at iteration $m$ for Joint $i$

Chapter 1

Introduction

Human beings have been dreaming of trustworthy, strong, human-like companionable machines for more than a century [1]. Society's expectations were initially depicted in the form of cartoon characters such as "Rosie the Robot Maid" in the 1960s TV show "The Jetsons" [2]: a humanoid robotic maid and housekeeper that would take care of all the tedious and mundane household chores, helping its owners with their daily tasks. Today, humanoid robots are finding their way into our homes and workplaces in the form of small and medium-sized interactive companions. Since the late 20th century, humanoid robots (humanoids) such as Honda's "ASIMO" have been introduced, and companies such as Robotis (South Korea), Kawada Industries (Japan), and PAL Robotics (Spain) have shortened the gap between imagination and reality. Furthermore, diverse universities around the world, such as the Italian Institute of Technology, the Oxford Robotics Institute, and the MIT Leg Laboratory, have been developing legged machines for decades. Depending on the humanoids' applications, design, and characteristics, they come with different levels of size, safety, and Human Robot Interaction (HRI) abilities. In the medical industry, for example, where the different models of humanoid robots are not very different in size from humans, the robots require the presence of a human operator for the execution of safe and successful maneuvers when working amongst humans. In industry,

during heavy assembly or similar tasks, relatively large industrial robots are kept at safe distances from humans or hazardous objects with the goal of preventing accidents and increasing safety [3]. In the context of this thesis, which focuses on increasing safety in the deployment and operation of humanoid robots, the following aspects are considered requirements when discussing safety:

• The ability of the robot to operate amongst and within close proximity of humans and objects without undesired collisions.

• Robots capable of maneuvering in cluttered and confined spaces typically designed for humans’ everyday activities.

• Robots having the necessary mechanisms to avoid causing damage to themselves and to the individuals and objects around them.

Addressing all of the above-mentioned aspects together increases the chance of safe HRI. First and foremost, robotics researchers' goal is to prepare humanoid robots to engage in people's daily lives and tasks without disturbing humans' daily routines on a large scale and without endangering humans [4]. Humanoids' high level of control complexity in comparison to other types of robots, such as wheeled ground robots and autonomous aerial systems (aka drones), makes their safety and efficiency highly dependent on their balance, locomotion abilities, perception, decision making, and the computation time needed for them to make fast decisions. Balance in humanoids refers to the ability to perform a successful maneuver without falling. Depending on a humanoid's locomotion type (e.g., walking, running, jumping, etc.), the difficulty of maintaining balance varies. Generally, humanoid robots are divided into two groups according to their means of locomotion: pedal robots, such as biped and quadruped robots (e.g., Figure 1.1a), and wheeled humanoid robots (e.g., Figure 1.1b). Balance for pedal robots is more complicated than for wheeled humanoid robots. However, pedal and especially biped robots have the capability, at least in theory, to perform significantly better in complex human-made terrains such as stairs, as well as in unstructured indoor and outdoor environments.

Figure 1.1: (a) Humanoid Robot NAO and (b) Wheeled Humanoid Robot Justin.

The locomotion mechanism of a robot refers to its ability to move in its surrounding environment using any available means of motion (e.g., walking gaits). When robots move, they are required to perceive the environment. Perception, which can be categorized into visual and non-visual strategies, is the robot's understanding of its surrounding space and of the objects to be manipulated. Sensors are the equipment that enables robots to perceive the world; humanoids are required to sense their surroundings and make inferences from such information. Decision-making algorithms assist the robot in effectively utilizing the data received from the environment. Depending on the algorithms used for information processing and the equipment used for processing new data (e.g., computers), a robot's ability to make decisions in a timely manner can vary. Computation time is certainly a critical factor in a humanoid robot's performance, especially when the robot is designed to operate in close proximity with humans, where close proximity has been

referred to as distances of less than 500 mm [5]. Any delay in decision making caused by slow computation (or a slow processing algorithm) in a care-providing robot operating in a medical center, as an example, is surely not acceptable, as it could put patients' lives at risk.

1.1 Motivation

State-of-the-art humanoid robots like PETMAN, a life-size humanoid robot from Boston Dynamics [6], illustrate the advances that technology and research have made in preparing humanoid robots for effective and safe interaction with their surroundings and for safe daily-life involvement [7][8]. However, the aspects of balance, locomotion, perception, and fast decision-making have not yet been developed enough for humanoid robots to be considered 100% safe. Some of the tasks that need to be addressed to tackle the current challenges include:

• Improving balance, especially in human-size humanoids where the mass-to-size ratio is large, and reducing the possibility of robots collapsing.

• Enhancing locomotion in cluttered environments and uneven terrains by effective means such as multi-contact control and enhanced locomotion gaits, among others.

• Developing reliable and robust means of environment perception to complement current technologies, i.e., not relying on only certain sources of information in case of hardware malfunction.

• Improving the decision-making process and the computation time to achieve acceptable response times in emergencies or dangerous situations in real-life incidents.

Even though there have been numerous efforts aiming to solve all of the above challenges, the common presence of humanoid robots amongst humans or even in industrial settings

is yet to be seen. Therefore, the well-known high potential of humanoids is yet to be fully realized. Presently, a number of wheeled humanoid robots, such as Justin (Figure 1.1b), and the associated R&D work have slightly eased the problems found in biped and other walking robots and have reduced the difficulties in the development of human-like locomotion gaits. Such robots, however, can only operate on flat or semi-flat terrains and cannot be deployed to work on complex terrains and terrain features such as stairs, rough outdoor terrains, etc., as humans do. As a result, their application has been limited to plain and uncluttered terrains. Small-size biped humanoid robots such as NAO (Figure 1.1a) are available for relatively more comprehensive use than larger robots like ASIMO. Though it operates in a rather balanced fashion, NAO, with its 50 cm height and 4.5 kg weight, has limited capabilities in comparison to the characteristics and capabilities of human-size humanoid robots. Although balancing a smaller humanoid robot with effective geometrical features, such as large feet that increase balance, is comparably easier than balancing a full-size heavy bipedal model, users of small robots such as NAO have still experienced balancing challenges or failures when attempting to walk on even terrains. The increased scale of such failures in larger models makes their public accessibility and close presence amongst humans almost impossible and rather dangerous. Fortunately, due to humanoids' resemblance to humans' physical characteristics, they can be developed to be capable of recovering their balance and preventing severe damage to their surroundings and themselves [9]. In the same way that a human can use their entire body (e.g., elbows, knees, etc.) and especially their hands to recover from imbalance to a stable position or to reduce damage when falling, a humanoid robot also has the potential to approximately secure itself in cases of pedal walking failure. The thorough application of these recovery capabilities requires bipedal humanoids to be able to perceive the world, potentially in a way as similar to humans as possible; however, developing such sensing capabilities has proven difficult. Various sensors are employed in

a robot's operation to assist it in acquiring information about its surroundings. Improved decision-making algorithms and faster computation can better enable a humanoid to perceive the world similarly to a human. Currently, most humanoids are equipped with sensors and methods that help them compute visual cues and visualize the world rather accurately [10]. The high possibility of errors in these computation-based technologies raises the need for complementary techniques to compensate for possible miscalculations. A healthy (having no disabilities) human, for example, mostly uses his or her eyes to decide where to walk, where to touch, or how to land his or her body in case of a fall. Humans' tactile sensibility helps to compensate for probable misinterpretations of the surrounding conditions by the eyes, as an example. The application of the sense of touch, especially by people who are blind, shows the importance and influence it can have on performing different tasks. Touch sensors, force sensors, and tactile sensors are hardware devices used in many humanoids to mimic the tactile sensibility of a human. Currently, however, the application of touch and force sensors is not noticeably common [11][12][13]. As explained in the previous paragraph, and similar to humans, the application of tactile sensors in balancing tasks is considered beneficial. Force sensors are utilized in humanoids' feet for pedal balancing tasks; unfortunately, the use of tactile sensors in the hands, especially to hold onto objects such as stair rails, is not very common, since good solutions that use the full potential of tactile sensors still do not exist. Moreover, although the use of touch and tactile sensors over the entire body can play an important role in safe humanoid operation in crowded environments, humanoids hardly employ them in large numbers. Unfortunately, these sensors are noticeably expensive, and fully equipping a robot with such hardware requires large budgets, which limits their affordability to the general public. In addition, the use of great numbers of sensors leads to more computation time being needed to extract useful information from the raw sensor data, which contradicts the goal of efficient decision-making.

Medical centers and care-providing services, especially for older populations, can vastly benefit from the application of humanoid robots. However, the limitations mentioned above prevent the common use of humanoid robots in our daily lives. Therefore, new innovative techniques that compensate for the challenges mentioned are necessary.

1.2 Thesis Focus

As discussed, at least in theory, humanoid robots are highly capable of operating within close proximity to humans, traversing unstructured static and dynamic terrains, and physically or virtually interacting with people. However, to fully realize their capabilities for safe, reliable HRI, they will benefit from being equipped with effective sensing equipment and the most advantageous techniques available to mimic humans' abilities. Therefore, at their current levels of strength and speed, the increased use of tactile sensation, especially in the hands, can help robots prevent dangerous collisions, especially during physical HRI and when traversing uneven terrains with their hands on supporting surfaces such as stair rails [14]. As mentioned, touching sensibility and the ability to use their hands assist humanoids not only in balance rehabilitation but also in more accurate recognition of their surroundings. Touch sensors, the current equipment supporting the application of human-like tactile sensation, have been introduced over the last 20 years, and their extensive utilization has been suggested by roboticists. Nonetheless, with the presently expensive models of touch sensors, their widespread use in robot manufacturing would lead to unaffordable and relatively slow robots, which would eventually require faster and more upscale computers to process the information robots receive from the environment. Based on the previous discussion, this thesis concentrates on the non-visual perception abilities of humanoid robots and the application of alternative methods for the tactile sensibility of humanoids that will enable them to compensate for robot motion inaccuracies and erroneous perception of the world, using less expensive and less time-consuming techniques. A

low-cost and fast approach will be introduced, which will help robots benefit from tactile sensibility without the employment of costly sensors or heavy computations. The envisioned contributions of this thesis are as follows:

• The proposed work will develop a new inexpensive algorithm to prevent robots from causing any harm to themselves and their environment. This will increase the chance of a safe robot maneuver within cluttered spaces.

• The work will enable a collision-detection mechanism, equipping robots with a preliminary sense of direct tactile sensation (using tactile sensibility to know where and when the robot is being hit); a minimal sketch of this idea follows this list.

• The provided collision-detection mechanism (ability), which supports the humanoids' sense of touch, can later be used for balance rehabilitation purposes using other body parts in addition to the legs.
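To make the collision-detection idea concrete before the formal treatment in Chapter 4, the following is a minimal sketch of a joint-signal-based collision check in Python. It is illustrative only: the function name, arrays, window size, and threshold are hypothetical placeholders, and the actual algorithm combines position, velocity, and torque conditions as developed in Chapter 4.

```python
import numpy as np

def detect_collisions(desired_tau, current_tau, window, threshold):
    """Flag joints whose torque tracking error spikes inside a sliding window.

    desired_tau and current_tau are (n_samples, n_joints) arrays of commanded
    and sensed joint torques stored over recent control cycles. `threshold`
    is a per-joint tolerance on the tracking error (a placeholder here; the
    thesis derives its detection conditions in Chapter 4).
    """
    # Tracking error over the last `window` samples, mirroring the d-tau
    # vectors defined in the nomenclature.
    d_tau = desired_tau[-window:] - current_tau[-window:]
    # A joint is flagged when its worst-case error in the window exceeds
    # the allowed tolerance.
    worst = np.max(np.abs(d_tau), axis=0)
    return np.flatnonzero(worst > threshold)  # IDs of impacted joints

# Hypothetical usage with synthetic data for a 3-joint robot:
rng = np.random.default_rng(0)
desired = np.zeros((100, 3))
current = rng.normal(0.0, 0.02, size=(100, 3))
current[95:, 1] += 1.5  # simulated impact on joint 1
print(detect_collisions(desired, current, window=10, threshold=0.5))  # [1]
```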

1.3 Thesis Outline

This thesis is arranged as follows: Chapter 2 gives background on the techniques relevant to this thesis, starting with a discussion of robot locomotion development and focusing especially on bipedal humanoid robots. Chapter 3 presents the problem statement tackled in this thesis and its proposed solution. Chapter 4 focuses on the proposed signal analysis algorithm and specifies the methods used for the least erroneous interpretation of these signals, in addition to the mathematical model for the collision-detection algorithm. Chapter 5 discusses the implementation and specifications of the equipment utilized in developing the proposed work. Chapter 6 describes the validation of the proposed methodology and discusses the results of the experiments and tests used to validate the developed work. Chapter 7 summarizes the thesis work and discusses future work.

Chapter 2

Literature Review

Throughout the years, humanoid robots have evolved from bulky, fully operator-controlled machines into versatile assistive machines with different levels of autonomy and environment awareness [15]. Different approaches and contributions have been made toward enabling their more common presence in our daily lives. These contributions can be categorized into the following three groups:

1. (Balanced) Walking

• Wheeled walking

• Pedal walking

– Bipedal (biped) walking

– Beyond just balanced walking

2. Manipulation + Walking

3. Touching sensibility

This chapter discusses the different approaches that have been developed over the past decades toward the fulfillment of safe humanoid HRI, the advantages and disadvantages of the research in this area, the limitations of the employed methods, and the hidden potential of some techniques that have been utilized less than others.

2.1 Balanced Walking

By the Oxford Dictionary definition, to "walk" means to "move at a regular pace by lifting and setting down each foot in turn, never having both feet off the ground at once." The concept of walking is meaningless without balance. Robots, and especially humanoid robots, need to meet specific levels of balance in order to perform an acceptable walking maneuver. A robot is considered balanced if the projection of its body's Center of Mass (CoM) does not leave the support polygon. A support polygon is the area generated by connecting all the contact points of a body with the ground. "Mathematically the support polygon is defined as a convex hull, which is the smallest convex set including all contact points" [16]. When the CoM resides at the center of the support polygon, the bipedal body is at its most balanced state. Accordingly, the bigger the support polygon, the longer the distance from the CoM to the support polygon boundary, which means the body's CoM has more space to move in. This implies a lower chance of the projection leaving the support polygon area, and as a result better balance is achieved. It is favorable that the projection of the CoM never leave the support polygon, which is called "static walking". However, in real-life scenarios this does not always apply, and during a walking maneuver there are times when the projection of the CoM resides outside the support polygon. This is called "dynamic walking", which is more similar to the daily walking behavior of humans and of most humanoid robots [16]. As humans, we naturally learn to walk and keep our balance at an early age; though, if looked at as a systematic machine, this natural ability is far more complicated than it seems. Hence, it is important to equip the robots that we hope will walk amongst us with the ability to maintain their balance. In the following subsections, robots are categorized into wheeled and pedal (mono-pedal, bipedal, and multi-pedal) groups based on their means of walking. The different approaches toward humanoid robots' walking methods, their limitations, and research with the potential to be used for safe balanced maneuvers will be discussed.
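The static balance criterion just described can be checked computationally: project the CoM onto the ground plane and test whether it lies inside the convex hull of the contact points. Below is a minimal sketch under the assumption of planar ground contacts, using SciPy's convex hull routine; the function and variable names are illustrative, and a real implementation would obtain the contact points and CoM from the robot's kinematic model.

```python
import numpy as np
from scipy.spatial import ConvexHull

def is_statically_balanced(contact_points_xy, com_xy, margin=0.0):
    """Check whether the CoM ground projection lies inside the support polygon.

    contact_points_xy: (n, 2) array of ground-contact points (e.g., foot
    corners); com_xy: the (x, y) projection of the centre of mass. `margin`
    demands a safety distance (in metres) from the polygon's edges.
    """
    hull = ConvexHull(contact_points_xy)
    # Each row of hull.equations is [a, b, c] for a facet, with
    # a*x + b*y + c <= 0 holding for points inside the hull.
    normals, offsets = hull.equations[:, :2], hull.equations[:, 2]
    return bool(np.all(normals @ com_xy + offsets <= -margin))

# Both feet flat on the ground: corners of a 30 cm x 20 cm support region.
feet = np.array([[0.0, 0.0], [0.3, 0.0], [0.3, 0.2], [0.0, 0.2]])
print(is_statically_balanced(feet, np.array([0.15, 0.10])))  # True (centred)
print(is_statically_balanced(feet, np.array([0.40, 0.10])))  # False (outside)
```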

2.1.1 Wheeled Walking.

Some believe that to include a humanoid robot amongst humans, especially for safe HRI, it is not necessary to build the robot in a way that mimics all the physical characteristics of a human. One such aspect is the ability to walk. The study and knowledge of wheeled transportation is vastly larger and older than the knowledge of pedal walking. Therefore, the much simpler and more stable platform of wheeled transportation has found its way into the field of humanoid robots as well [17]. Unlike pedal robots, most wheeled robots, such as the wheeled humanoid robot Justin, hardly lose contact with the ground and mostly maneuver using static walking. However, among single-wheeled and two-wheeled robots, maintaining a balanced position is more complex for single-wheeled robots [18][19]. In the context of this literature review, only wheeled robots with at least two wheels will be discussed, since single-wheeled humanoid robots are not common. Adding more wheels to a wheeled robot increases its stability; however, it complicates the control of the robot [20]. Moreover, for robots with four or more wheels, extra techniques are required to ensure the contact of all wheels with the ground (static maneuvering) [21]. Nonetheless, wheeled robots are capable of performing dynamic walking maneuvers, which facilitates traversing uneven terrains. Overall, wheeled robots have been of interest due to their better balance and easier steering in comparison with legged robots. These advantages, however, come at the price of limited maneuverability on uneven surfaces. If not for as long as pedal robots, wheeled robots have been popular in different industries for a long period of time. The first wheeled humanoid robot, "Elektro the Moto-Man", was introduced in 1939 [22][23]: a humanoid robot which operated with the employment of wheels under the cover of a human-like shell. Following a few voice commands and performing moderately basic tasks were amongst the short list of this gigantic humanoid robot's abilities [24]. About 40 years after Elektro, NASA's "Robonaut" was introduced [25].

Robonaut 1, which was a project in cooperation with the Defense Advanced Research Projects Agency (DARPA), was designed for missions outside Earth and to help astronauts. The first generation of Robonaut came with three different mobile bodies. Though non-autonomous and relying on teleoperation techniques, none of these different-mobility humanoid robots were sent to space [17]. As mentioned, Robonaut 1 was mounted on several lower bodies, such as the zero-gravity (zero-g) leg, the Robotic Mobility Platform (RMP), and the four-wheeled Centaur 1 [26][27]. Though not a wheeled platform, the first lower-body model, the zero-g leg, was designed for climbing purposes [26]. The latter model, the RMP, a two-wheeled self-balancing transporter, was tested in mobile manipulation experiments on Earth. Although this model proved practical under different circumstances on Earth, its stability and durability were not sufficient for space maneuvers [28]. In the following, some of the challenges the RMP model faced are discussed. Two-wheel coaxial platforms like the RMP are designed based on an inverted pendulum model [29]. The Segway RMP is equipped with a self-balancing mechanism and requires a sufficient load mounted on top for successful operation. It runs by controlling the CoM of the system: if the CoM tilts forward, the machine accelerates forward to keep the system balanced. The same mechanism is used for moving backward and decelerating [30]. This robot, which can be controlled using hardware, has many advantages over typical statically stable mobile manipulators. However, it requires an active control mechanism to maintain its pose during different maneuvers and manipulation tasks [31]. The Segway RMP requires the upper body of Robonaut to be mounted a short distance from the top of the Segway platform, and also to be heavy enough to ensure a high center of mass for control stability. Experiments with the RMP illustrated that if the weight of the robot on two wheels was increased, safety could be threatened. With the increased possibility of a fall and no hardware to prevent it, the robot was more likely to put itself and its surroundings in danger. This problem was solved by the employment of physical

stands at the sides of the robot [32]. Unfortunately, after Robonaut 1 was mounted on top of the RMP, it was heavier than what the platform was initially designed for. This issue, which caused unplanned motions by the RMP, was later solved, and the robot eventually showed great capabilities in performing tasks like using tools, opening doors, and visually observing its surroundings using video cameras [33]. The Robonaut RMP was later developed to carry large loads, and its ability to traverse a multitude of terrains was expanded [34]. However, due to its two-wheeled coaxial design, in case of a power runout it failed to stay upright and therefore would fall even with the presence of stands [35][18]. Eventually, NASA decided that the Robonaut RMP was not suitable for space tasks; however, it was concluded that the Robonaut RMP design was sufficient for research and experiments on Earth [36]. Combining a humanoid robot's manipulation abilities and perception with the mobility of the RMP slightly eased the problems with involving humanoid robots amongst humans and with their task performance during the period of Robonaut's development. Nonetheless, the RMP design failed to prepare Robonaut to help astronauts on the International Space Station (ISS). In addition to the problems discussed, moving the arms of the robot, carrying different loads, and the swaying motion of the platform prevented the head cameras from transferring stable views of the environment. Therefore, the stabilizer mechanism of the platform had to be disabled during certain periods for successful performance [18]. With the failure of the Robonaut-RMP project, researchers' attention shifted toward a more balanced and stable mobility platform. Hence, Centaur 1, a four-wheeled transporter, was introduced [37]. As mentioned before, though performing comparably better than the RMP version, with the technology of Robonaut 1 at the time, this project also did not end up outside Earth. With the latest version of Robonaut (Robonaut 2), the RMP was no longer utilized. The Centaur design, however, seemed to gain NASA's interest and attention. This new platform, which is similar in size and capability to All Terrain Vehicles (ATVs), is able to

move on uneven terrains. Centaur 2 has improved means of steering, suspension, power, speed, and shock and vibration protection while traversing dangerous terrains [38]. In the new design, the Segway RMP is used for the development of the back wheels, and the robot is able to speed up to 6 km per hour to keep up with astronauts in the field. While meeting all the requirements of a high-speed autonomous maneuver, there is still a chance of losing balance if the upper body of the robot exceeds the boundary of the support polygon, thereby endangering the robot and its surroundings [39][28]. Although Centaur 2 is capable of traversing uneven terrains, carrying various loads, and performing dexterous human-like tasks, its size, weight, and power prevent it from being deployed in crowded, confined indoor spaces. As a result, further techniques are required to ensure the safety of the robot and its surroundings [40][41][38].
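The self-balancing principle described above for two-wheeled coaxial platforms (accelerate the base in the direction of the lean) can be illustrated with a linearized inverted pendulum under simple PD feedback. This is only a sketch under strong simplifying assumptions (point mass, small angles, hand-picked gains); it is not the Segway RMP's actual control law.

```python
import numpy as np

# Minimal linearized wheeled inverted pendulum: theta is the body pitch from
# vertical. Accelerating the base toward the lean produces a restoring
# effect, which is the self-balancing principle described above. All
# parameters are illustrative.
g, L = 9.81, 1.0        # gravity (m/s^2), effective pendulum length (m)
dt = 0.01               # control cycle (s)
kp, kd = 60.0, 12.0     # PD gains on pitch and pitch rate (hand-tuned)

theta, theta_dot = np.deg2rad(5.0), 0.0  # start leaning 5 degrees forward
for _ in range(300):
    base_accel = kp * theta + kd * theta_dot   # commanded base acceleration
    # Linearized pitch dynamics: gravity destabilizes, base motion restores.
    theta_ddot = (g * theta - base_accel) / L
    theta_dot += theta_ddot * dt
    theta += theta_dot * dt
print(f"pitch after 3 s: {np.rad2deg(theta):.3f} deg")  # decays toward 0
```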

2.1.2 Pedal Walking.

It was discussed in the previous section that wheeled humanoid robots benefit from heavier weights and bigger sizes in their balancing mechanisms. Unlike wheeled humanoids, which are mostly statically stable, legged robots do not need to be of massive size or weight to be balanced. Current humanoid robots rely on wheels, legs, or both (e.g., the Hitachi EMIEW2) for walking maneuvers [32]. Since the focus of this thesis is on humanoid robots, only the concept of bipedal walking will be discussed amongst the different types of legged robots, such as mono-pedal and multi-pedal robots.

2.1.2.1 Bipedal (Biped) Walking.

Amongst the many ways (using wheels, legs, etc.) a humanoid body can maneuver on various terrains, bipedal walking has proved to be the most adaptable and versatile method [42][9][32]. Humans' walking regions vary from the most planar terrains to slopes and stairs. With closer-to-reality terrains, like stairs, urban streets and pavements which are not perfectly flat, bumps in the ground, etc., which are acceptable for humans' bipedal walking,

humanoid robots are expected to walk in such regions as well, due to bipedal humanoids' similar physical characteristics and walking gaits. Overall, the bipedal ability of humanoids has shown to be more compatible with the most common terrains in comparison to other robots, such as wheeled robots [9][43]. Primitive humanoids, or legged machines, have existed since before the beginning of the 20th century, but the closest-to-human-looking humanoid was first introduced by Dr. Ichiro Kato in 1973, called WABOT-1. The robot was able to do basic tasks, like changing its walking direction when encountering certain objects, and to perform a static walking maneuver [44][45]. Later, the BIPER robots were introduced, whose balance was controlled based on the motion of an inverted pendulum [46]. BIPER 1, 2, 3, 4, and 5 were no longer statically stable; however, considering that they were able to perform a series of inverted pendulum motions between the two legs, they had acceptable dynamic stability [46][44]. By the beginning of the 21st century, the humanoid robot ASIMO by Honda was able to perform some basic walking tasks, such as walking forward and backward. ASIMO also used the inverted pendulum model to generate its walking pattern. However, this led to the robot walking with bent knees to assure its balance by minimizing the length of the pendulum arm, which increased the possibility of keeping the CoM projection inside the support polygon. Unfortunately, because of the robot's bent knees, ASIMO did not have a life-like walking gait and therefore was not ready to engage in human-like tasks [47]. PETMAN is a later-announced robot by Boston Dynamics which was faster than ASIMO. PETMAN's walking gait was more biological than those of the BIPER models and ASIMO; these more biological gaits helped its performance in terms of balance, especially in the presence of external disturbances [48][44]. Other humanoid robots, such as Kenshiro, have been developed that mimic the human body and its walking gaits more closely, which increases the speed of humanoids' walking maneuvers [44][49][50]. The first generations of bipedal humanoid robots, such as ASIMO by Honda [51][52], HUBO by KAIST [53][54], and PETMAN by Boston Dynamics [6], have demonstrated

the ability to maneuver on uneven terrains comparably better than wheeled humanoid robots like DLR's humanoid manipulator Justin or NASA's Robonaut-RMP [55][56][35]. Even though the mentioned bipedal robots have been able to operate better on modest slopes and slightly rugged surfaces, their technology is not yet ready for common non-experimental terrains. Although the newer humanoid robots have more biological gaits, they still cannot meet the speed requirements for daily-life involvement. Moreover, the traditional walking models mostly follow the Linear Inverted Pendulum Model (LIPM), which does not efficiently mimic a human's walking model. Unfortunately, in cases of large perturbations, humanoids equipped with the LIPM fail to return to a balanced state and fall. As the study and research on humanoids' walking gaits continue to enable faster and smoother walking patterns, the application of control algorithms enabling robust and reliable maneuvers seems more necessary than before [57]. Moreover, these robots would benefit from moving toward full autonomy faster, in order to be able to move in humans' living and working environments [57]. Therefore, the employment of other techniques that assist humanoid robots in perceiving information about their surroundings is necessary to support walking maneuvers in complex environments, which can be described as versatile walking [57]. Okada et al. [58] are among the pioneers in the obstacle detection area, especially for walking purposes. They developed an algorithm called the Plane Segment Finder (PSF), which was tested on a remotely operated robot that could distinguish between different heights of stairs and step on them in a real-time, collision-free manner. Though declared real-time, the results of this method are comparably slow and inaccurate relative to newer methods proposed a decade afterwards [58]. Later, by utilizing a stereo-vision system, Sabe et al. advanced the previous work on obstacle avoidance and path planning with the robot QRIO. Unfortunately, this method only showed acceptable results in local navigation and required the surrounding environment of the 0.6-meter robot to be highly textured [59]. With the augmentation of a Graphics Processing Unit (GPU), Michel

et al. proposed a more robust approach to maneuvering over uneven terrains like stairs, where the robot had no previous knowledge of the geometry and details of the surface it was walking on. Despite all the progress Michel et al. made in autonomous obstacle avoidance and robot localization, the system was limited in the amount of motion that could be exerted on the perception sensors while still successfully tracking the objects around it [60]. In [61], similarly to Sabe et al. and Okada et al., with the support of a stereo-vision camera, the robot's surroundings were segmented into a fairly accurate 3-D map to enhance obstacle avoidance while increasing the robustness of the camera in charge of information perception. This method, however, had the drawback of high computation time in cases of large, arbitrarily designed areas. Although this method illustrated better results in comparison with previous approaches, its accuracy and qualification were not fully demonstrated in sufficient practical experiments [61]. Thereafter, more general approaches were introduced by various authors, mostly focused on enhancing the safety aspect of step planning techniques [62][63][64][65]. Increasing the level of autonomy and the generalization of methods, like the 2-D trajectory generation of robots such as Lola and NAO, limited the robots from operating at different heights, like on stairs [66]. As the research on path finding and step planning grew, the need to prevent robots' legs and feet from colliding with each other and with the platform they are stepping on also became more important. As mentioned, Okada et al. considered the aspect of collision avoidance during their early work on humanoid robot path planning [57][58]. Hence, attempts at step planning started to focus more on the collision avoidance aspect. One approach has been dividing a cycle of two-feet steps into static one-foot steps, followed by a collision avoidance analysis. The upcoming steps would compensate for errors in the prior steps in terms of collision avoidance and eventually converge to a noticeably delayed dynamic maneuver [67]. Later works [68] suggested whole-body collision analyses, which resulted in better autonomy and path planning in cluttered environments, even when the humanoid did not have any knowledge about its surroundings [68]. In [68], by utilizing the information perceived from an

onboard depth camera, an accurate estimation of the robot's body was used to compensate for odometry errors, which also resulted in the development of an accurate mapping of the environment with the help of a robust heightmap representation. Thereafter, a sequence of safe whole-body and footstep planning actions was generated while considering the more accurate estimation of the body's geometry, which led to more robust, collision-examining navigation [68]. While the aspect of collision avoidance and collision detection checking is undeniably important in path planning, paying attention to the stepping motion is, too. Complementary works, which enabled humanoid robots to perform various stepping maneuvers, such as stepping onto and down stairs and over obstacles, targeted the aspect of whole-body balancing for humanoids [60][69]. In [60], the robot's "feasibility" was targeted, which evaluates the robot's ability to overcome various obstacles before implementing the planned motions. By considering the geometry of the obstacles, the environment, and the robot's kinematics, optimization techniques were employed to generate motion planning models which regard collision avoidance and robot balance as constraints. Thereafter, in [69], this prior knowledge of feasibility was used in the motion planning process to divide it heuristically into footstep pattern generation and waist trajectory generation. The whole-body motion was then separated into upper- and lower-body motions to meet the balance constraints as well as obstacle-adaptive collision avoidance for the development of the humanoid robot HRP-2. In [70], following the work of [69] and [60] where the robot used quasistatic stability, the humanoid robot HRP-2 was able to dynamically step over obstacles in 4 seconds, which was 1/10th of the previous record. Each of the approaches mentioned in this section explores the many capabilities of a humanoid robot's walking gaits, like stepping over an object using onboard vision perception sensors and previously known information about the robot's body, such as its kinematics. However, they also limit humanoids' potential due to the assumptions and approximations made while evaluating the robot's surroundings, such as the humanoid's kinematics, obstacles' size and

shape approximations, surface evenness, etc. [60][69][70]. Moreover, the concept of collision avoidance is only examined with respect to the robot not colliding with other objects, and not with itself, which contradicts the conditions described in Chapter 1 that a robot needs to satisfy for its safety.

2.1.2.2 Beyond just Balanced Walking.

To this day, various works have been published targeting both robots' walking pattern generation and balance rehabilitation [70][69][60]. Engaging all the requirements for successful walking while maintaining balance demands the presence of a comprehensive controller that allocates different weights to each considered aspect based on its importance. As mentioned before, the stability and balance of a humanoid robot are undeniably important for enabling its presence in arbitrary spaces. Different approaches have been developed for integrating robots' walking ability and balanced disturbance management for safe operation. Most contributions to this field have been works on modifying the CoM and the coordinated torso trajectory of the robot [71][72]. Thereafter, footstep planning approaches were combined with the previously mentioned torso trajectory modification to generate safer trajectories, especially in cluttered dynamic environments [73][74]. In the past 10 years, an unconventional approach has been of interest which targets the modification of footstep touchdown placement, instead of the robot's torso, to prevent and reduce the risk of collisions involved with foot touchdown in dynamic environments [75]. Chestnutt et al. proposed a legged locomotion technique which computes safe regions for the robot's foot touchdown points [75]. The walking pattern generator would later choose the walking trajectory from such safe regions. This approach prepares the robot for a faster and more robust collision avoidance reaction and prevents massive failures, such as a complete fall. Nonetheless, due to the assumptions and approximations of the environment's geometry and details, such as the assumption of a static environment owing to slow scanning of the surroundings (one scan per second), these safe regions limit the area onto which the robot can walk and the

ability to step over detected obstacles [75]. An extension of this work was later proposed which added convex objects as constraints to the safe region generation process [76]. The proposed solution in [76] enables the robot to avoid stepping on convex obstacles during its walking maneuver. Unfortunately, no practical experiments were demonstrated besides the simulated results [76]. Kuindersma et al. introduced an optimization-based framework that considers all control processes and motion planning to achieve reliable locomotion in cluttered and uneven terrains. In this work, an obstacle-free sequence of footsteps is generated by solving a mixed-integer convex optimization problem comprising an estimation of the robot's walking terrain [77]. Although the information about the environment is not employed for stabilization purposes, the approach has the advantage that the entire robotic body kinematics is included, in combination with the torso and CoM dynamics, for trajectory optimization [77]. Though the proposed work is highly inclusive of different humanoid capabilities, it still does not fully utilize all of their numerous potentials, such as fast dynamic locomotion on various terrains. Some of the discussed works have employed the information about the environment to apply the collision avoidance factor in footstep planning [73][74], while others addressed it separately from the footstep planner, during CoM trajectory generation [71][72]. However, most of these methods have tackled the collision avoidance factor in a static way. That is, the decisions made for the purpose of collision avoidance for the dynamically moving robot are based on previous time steps of the robot, with the assumption of a non-moving environment. Therefore, it is necessary to present methods which can avoid dynamic obstacles whether the robot is moving or not. Hildebrandt et al. introduced a dynamic-collision-avoidance-based framework which integrates information about the surrounding objects for task-space whole-body motion generation and complex walking [78][79]. They continued their work by integrating the vision system into their hierarchical collision-avoidance framework [57]. The control process is divided into modules, each of which optimizes the result of the prior module to assure robustness. The same approach is taken toward motion planning for efficient real-time solving [57]. They address the coexistence

of versatile and robust walking. In [57], "versatility" indicates the various methodologies and techniques that enable a humanoid robot to utilize its physical capabilities for maneuvering in cluttered and arbitrary environments with no a priori knowledge about them. The term "robustness" refers to the ability of the humanoid robot to return to a balanced or stable condition after extreme perturbations have been applied to it, even those caused by environmental model errors [57]. In [57], to achieve fully robust and versatile walking, the obstacles around the robot are no longer approximated merely as convex hulls or edgy geometrical shapes. Such irregular shapes are used as constraints in the footstep modification optimization under perturbations, in real-time simulations and experiments [57]. Unfortunately, the work proposed by the authors of [57] also has some downsides. The framework cannot handle large constant disturbances, as such disturbances steer the robot away from its planned trajectory. Since this hierarchical, detailed approach does not use instantaneous information, the results might not be achievable for real-time actions. Even though this work is, to this day, one of the most comprehensive approaches toward collision-free and balanced walking, it still does not fully guarantee the absence of collisions during the robot's maneuvers, due to some of the matters discussed. The framework, however, justifies its decision to prioritize collision avoidance over balance in humanoid operation, since it was concluded in that work that recovering from an unbalanced situation is more easily achieved than recovering from a severe collision, which might lead to falling [57]. As an example, during walking maneuvers, stepping on a round obstacle will most probably lead to the humanoid falling, as it would for a human. While the humanoid can simply avoid contact with such obstacles, even by slightly shifting to an unbalanced condition, it will not easily be able to recover its balance after stepping on the round object. Although protecting the robot's walking gait from collisions indeed lowers the chance of failures such as falls, it illustrates the need to employ other humanoid capabilities to enable a safe maneuver. Amongst the discussed approaches, what seems not to be getting enough attention is the way biped humanoid robots ensure they have made the targeted contact with the ground or

other surfaces. The sensors that assist with visual perception are mostly used for collision avoidance and environment investigation, to avoid or allow system-decided contacts. Robots such as Lola [57], which rely on such techniques and position control, are the target of this discussion. To the best of my knowledge, most humanoid robots like Lola are equipped with force sensors at the bottom of their feet, where these sensors transfer information regarding the impact force applied at the contact with the ground. Such data contribute considerably to their versatile and robust walking; a minimal sketch of this kind of contact check follows. The application of such sensors and their assistance with other challenges in humanoid robots' operation (not solely during walking) will be further discussed in the following chapters.
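As an illustration of how a foot force reading can confirm an intended touchdown, the sketch below implements a simple debounced contact check. The class name and thresholds are hypothetical and would need per-robot tuning; robots such as Lola process their foot sensor data in far more elaborate ways.

```python
class FootContactDetector:
    """Debounced ground-contact check from a foot force sensor reading.

    A single vertical-force threshold chatters near touchdown, so two
    thresholds (hysteresis) are used. The values are illustrative and not
    taken from any specific robot.
    """
    def __init__(self, make_contact_n=40.0, break_contact_n=15.0):
        self.make_n = make_contact_n    # force needed to declare contact (N)
        self.break_n = break_contact_n  # force below which contact is lost (N)
        self.in_contact = False

    def update(self, vertical_force_n: float) -> bool:
        if self.in_contact:
            self.in_contact = vertical_force_n > self.break_n
        else:
            self.in_contact = vertical_force_n > self.make_n
        return self.in_contact

detector = FootContactDetector()
for f in [2.0, 35.0, 55.0, 20.0, 10.0]:  # simulated touchdown, then lift-off
    print(f, detector.update(f))          # contact from 55 N until below 15 N
```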

2.2 Manipulation + Walking

As discussed in the previous section, humanoid robots cannot rely only on their locomotion gaits, pedal balance techniques, and vision system to perform a safe, collision-avoiding maneuver. The application of humanoids' upper body parts seems to be considerably beneficial for humanoids' versatile and robust walking. As mentioned in Chapter 1, humanoids mimic the human body in many ways, and just like humans use their hands to prevent or recover from falls, humanoids could potentially assist their balance and dynamics through the use of their hands. Recent humanoid robots are becoming able to utilize their entire body to maintain their balance while also performing different tasks, such as lifting or pushing objects and moving heavy objects [80][81]. The conventional methods for performing interaction- and balance-related maneuvers used to run separate, independent controllers for the upper and lower body of the robot [82][83][80]. Generally, external forces exerted on the upper body would be treated as disturbances to the lower-body balance control system, to which the lower-body joints had to respond [82][83][80]. The separately targeted control of the upper and lower body would not allow the humanoid robot to fully employ its physical abilities for balancing purposes, such as in confined spaces. Hence, the humanoid body would only rely

on its lower-body joints for balancing purposes. Later, new whole-body control frameworks were introduced which explored the capabilities of a humanoid robot, like benefiting from joint (e.g., knee and elbow) contact with the surroundings to maintain balance [84][85]. This enhances the support polygon of the humanoid robot, thereby making the robot more robust and agile. Compliant whole-body balancing controllers like [86] have proposed techniques that utilize force and momentum control instead of the traditional position control methods. Even though the work of [86] allows the simultaneous, accurate control of CoM trajectory behavior, body posture, internal forces, and operational tasks, it still was not able to manage large disturbances, and model singularities were not analyzed. Other whole-body balancing controllers have also been proposed, such as approaches which focus on solving the inverse kinematics (e.g., CoM trajectory tracking control) or the dynamics of the robot (e.g., torque-controlled robots) [87][88]. Others have proposed passivity-based approaches for whole-body control, instead of inverse kinematics and inverse dynamics, for torque-controlled robots to be able to distribute their end effectors for both balancing and interacting purposes [12][81]. Amongst the different approaches, multi-contact whole-body motions seem to benefit more from hierarchically oriented methods, since they allow multiple control objectives [86][89][90]. In [84], where the robot was able to use all of its end effectors and in-between joints to maintain its balance, a hierarchical approach was also used for the purpose of multi-contact balancing. In addition to the upper and lower body of a humanoid robot operating together and with respect to the robot's balance, robots need to manage high external forces exerted on their body to be able to lift, push, or pull various objects [91]. For a robot to use its full potential to handle the maximum disturbances its physical characteristics allow, its overall control system has to operate with respect to different factors, like the supporting contacts' characteristics and the kinematic and actuator constraints. One of the state-of-the-art methods which accounts for all these factors is called the Gravito-Inertial Wrench Cone (GIWC), also known as the polyhedron of feasible balancing wrenches [80]. One of the recently explored advantages of the GIWC is that it does not require an explicit

predefined CoM trajectory. In addition to being able to resist maximum disturbances and maintain balance at a specific configuration, this approach can also help humanoid robots exert maximum interaction force in a certain body posture [92][93]. Unfortunately, this polyhedral method has mostly been used for offline planning rather than instantaneous posture control. Recently, efforts have been made towards using the polyhedral GIWC method to ensure online stability of the robot for instantaneous high-force interaction with an unknown environment [94]. Even though the work in [94] showed great results, throughout the experiments run with the newly proposed GIWC approach, what seemed to be lacking was the robot's ability to calculate the exact value of the forces exerted on its end effectors (hands) during multi-contact balancing positions. It would be beneficial to employ techniques which allow accurate perception of the forces exerted by the robot and the environment for balancing purposes, especially in highly dynamic environments.

2.3 Touching Sensibility

As presented in the previous sections, different approaches have been taken towards the safety and collision avoidance of humanoid robots, such as collision avoidance during walking planning and balance enhancement. Most of these approaches rely on the vision system's perceived information to avoid collisions and plan safe trajectories [95][57]. Additionally, in Section 2.2.2, it was suggested that humanoid robots can benefit from equipment that mimics the tactile sensation of a human, like touch sensors, which can potentially facilitate the collision avoidance and detection capabilities of a robot. Such sensors would therefore help HRI activities run more efficiently and safely, especially for robots designed to work in close proximity to humans. Currently, vision sensors such as HD cameras, depth-sensing cameras, RGB cameras, etc. deliver information about the robot's environment to the vision system to subsequently fulfill the collision-avoidance and safety aspects and also to ensure that a robot makes only planned contact with its surroundings [96][95].

In [84], with the help of the employed multi-contact whole-body control technique, the humanoid robot TORO was able to use its knee and exert the controller-computed torque value for balance enhancement purposes in confined spaces. The robot's knee, however, was not provided with any equipment that could enable the robot to sense the amount of force exerted on its knee by the environment. Therefore, extra force sensors in the environment were used during the experiments to verify that the robot was approximately applying the amount of torque the controller was computing. Tactile, touch, and force sensors are devices which deliver different levels of touch sensation to a robot. Force sensors, as an example, are mostly used in humanoid robots' feet for walking and balance purposes, but their application is also beneficial in other robotic tasks. Recently, in [97], experiments on an artificial skin were illustrated where the tactile sensor could detect information regarding the magnitude, direction, and localization of tactile interactions with the environment. Tactile sensors have gained interest in the past years [98][99][100]. Lumelsky et al. designed a control framework for a teleoperated hybrid robot which could get close to objects for teleoperation-manipulation tasks without endangering the robot's safety [101]. Woesch et al. proposed a work where humans could interact with a robot relying only on its tactile interface [102]. In this robot, which had kinematics similar to a human's, resistive flexible tactile sensors covered the forearm [14]. The robot ARMAR-III had differently shaped skin sensors for different parts of its body [103]. The iCub robot also had skin sensors covering different parts of its body which helped it detect contact with its surroundings [99]. A combination of force sensors and tactile sensors was later used for the whole-body control of the iCub robot as well [104]. Subsequently, the mentioned combination was used for balancing purposes during multi-contact scenarios [105]. Cirillo et al. introduced a flexible skin sensor which can transfer information regarding the exerted forces and their location on the robot's body [106]. In one of the latest works on tactile sensors, a very advanced sensor was introduced and its application in humanoid robotics was explored. However, with such advanced sensors, the control system of the robot also

needs to be upgraded. Moreover, the high number of robot skin cells (1260 cells) demands high computation time and effort [107]. Overall, skin sensors, including the various types such as tactile and force sensors, are developed based on many factors such as their application and the platform they are required to be used in. However, the final goal in their development should be to compensate for the limitations caused by other sensors and system constraints [108]. Despite tactile sensors' current broad application and high potential, they are still in an experimental phase. Difficult installation, low durability, fragility, complex wiring with other equipment, complicated maintenance, and, most importantly, their high cost are amongst the problems associated with their usage [109]. Hence, alternative and complementary methods are needed to help convey the information they acquire from their surroundings at lower cost.

2.4 Safety of a Humanoid Robot

In the previous sections of Chapter 2, different aspects of humanoid robots' safety were discussed. The most important aspect of a humanoid robot's operation, its ability to walk with various gaits, was divided into the groups of wheeled and pedal walking. It was shown that even though wheels have had a longer history and seem more reliable, they are not efficient on humans' arbitrary terrains such as stairs. Pedal walking proved more suitable for the terrains that humans generally walk on. Moreover, employing pedal gaits over wheeled locomotion with fewer than three wheels lowers the chance of humanoid robots collapsing or falling onto surrounding objects. Therefore, pedal gaits contribute to safer robotic maneuvering amongst humans. Once pedal gaits were chosen as the superior means of walking, humanoids' various techniques for maintaining a balanced posture were briefly overviewed. While a robot should generate a pattern for its walking maneuver based on the perceived information about its surroundings, it is also able to

prevent collisions during its walking movements. As collision avoidance greatly enables safer walking, it is beneficial to target collision avoidance and step generation in various cluttered environments at the same time. However, to do so, most humanoid robots employ their vision system to examine the environment while ultimately only their feet and lower body are used for balancing purposes. It is well known that human beings can rely on their hands in case of falling or to achieve a more balanced posture. Due to humanoids' close physical resemblance to humans, they can also benefit from engaging their hands and other body parts besides their feet for balance enhancement. With a better and more comprehensive methodology for maintaining balance, humanoid robots can be considered safer for close-proximity interactions with humans. However, to fulfill this theory, humanoid robots need to add complementary technologies and methods to their current vision and kinematic systems to assure the fewest environmental and systematic errors. Humans use their sense of touch to compensate for the errors associated with their visual and auditory systems. Humanoids can also benefit from the contribution of tactile sensing to their overall balancing and interaction operations. While enhancing humanoids' balance and interaction mechanisms increases the possibility of close-proximity human-robot interaction, it also considerably contributes to human-robot safety. It is undeniable that a more balanced robot with numerous complementary methods to compensate for possible systematic and environmental errors can be considered safer. Not only will the robot be able to guarantee the safety of surrounding objects and humans, but it will also prevent damaging itself, hence not endangering its own safety. Overall, employing tactile sensing across a broader spectrum seems to provide a better chance of safe humanoid robot interaction. Currently, humanoid robots use tactile, force, and touch sensors, which, as an example, are used in their feet for pedal balancing purposes. Some humanoid robots are equipped with tactile sensors in their hands and other body parts. Even though this has allowed more applications for humanoid robots, unfortunately

these sensors are very expensive, computationally heavy, and fragile. Utilizing such sensors all over the humanoid body might enable the robot to have tactile sensing closer to a human's and a higher possibility of safe maneuvering, but it will considerably affect its computation and reaction time and its production cost. In addition, these sensors require high maintenance due to their complexity and fragility. As a result, the employment of complementary techniques which could mimic the information the family of tactile sensors (tactile, touch, and force sensors) provides seems highly beneficial. Such robots would be able to use their tactile sensation to contribute to their balance and interaction system without incurring high expenses or burdening the computation time needed for fast, safe, real-life interactions.

Chapter 3

Problem Statement

As discussed in previous chapters, humanoid robots are theoretically capable of operating safely through arbitrary cluttered environments, navigating rough terrains, and performing harmlessly amongst humans. However, to be able to benefit from their assistance during various tasks, such as medical procedures or urban search and rescue, it is important to develop mechanisms that move humanoids towards safe operation. Different approaches have been taken towards improving humanoids' movements and improving the detection, mapping, etc. of their surrounding objects while taking people's safety into account. Despite the numerous efforts that have been made towards making humanoids perform safely amongst humans, there are still numerous unresolved challenges. Many of the techniques that enable close-proximity HRI and safe humanoid operation use visual perception or benefit from sophisticated sensing equipment, such as tactile sensors, to feedback-control the robot's maneuvers and prevent dangerous collisions. Even though the techniques associated with visual perception, and the sensors which enable it, have been used in the field of robotics longer than tactile sensors, they are yet to be completely error-free and reliable. However, the sensors which inform the robot about a physical connection with its surroundings (i.e., tactile sensors) help to compensate for visual perception errors. Even though equipment which enables tactile sensibility (e.g.,

force, touch, and tactile sensors, etc.) is considerably beneficial towards facilitating close, safe HRI, it is noticeably expensive, relatively fragile, and might require computationally heavy techniques to process the perceived sensing data [109]. Due to reduced budgets (for example), such characteristics prevent the realization of humanoids' full potential in various fields. In order to fully enable safe, reliable, dependable, friendly, and relatively inexpensive HRI, it is favorable to minimize the use of sensing and hardware equipment and instead use the available hardware with advanced control, data analysis, motion planning, etc. techniques to maximize the potential of the available systems and components. Although many diverse characteristics have been, and can be, thought to be required to enable 100% error-free safe robot operation (e.g., perception, motion control, path planning, etc.), the general goal is to prevent collisions. However, in real-world activities (in part because the world is dynamic and not perfect), robots have to adapt, and realistically collisions will occur. Thus, there is a need to minimize the effects of collisions on the robot, the environment, and humans. Based on this and within the context of this thesis, the following problem statement is formulated:

Develop a fast (instantaneous) collision-detection algorithm applicable to humanoid robots which will enable humanoids to safely maneuver in arbitrary cluttered environments and operate safely in any HRI activity.

A methodology which addresses the above problem statement must enable humanoids to detect planned and unplanned collisions within a certain range of applied force and determine, as precisely as possible, the location where the collision occurred (e.g., the impacted joints) so that effective and safer robot activities can be constructed. Before a solution to the identified problem statement is proposed, a set of definitions, assumptions, and constraints is presented.

3.1 Definitions

Collision: A collision is identified as an event in which two or more bodies apply forces on each other during a relatively short amount of time. While the term usually refers to incidents in which two objects exert great forces on one another, in its scientific use the magnitude of such exerted forces is not specified. In the context of this thesis, this force magnitude threshold is 1 N. Thus, forces lower than 1 N will not be considered collisions but rather interactions (which are assumed to be desired).

Planned collision: In the context of this thesis, a planned collision is defined as a planned (or desired) force impact from the robot's body on the environment or objects. In this thesis, it is desired to also detect planned collisions, as it is envisioned that properly controlling and being aware of such collisions will enhance safer HRI.

Unplanned collision: Also known as accidents, unplanned collisions occur when two or more bodies exert a force F ≥ 1 N on each other that causes a change in velocity or position of at least one of the bodies. This incident can, but does not necessarily, damage one or more of the bodies.

HRI: "HRI is a field of study dedicated to understanding, designing, and evaluating robotic systems for use by or with humans. Interaction, by definition, requires communication between robots and humans. Communication between a human and a robot may take several forms, but these forms are largely influenced by whether the human and the robot are in close proximity to each other or not. Thus, communication and, therefore, interaction can be separated into two general categories: Remote interaction and Proximate interactions [110]".

Humanoids’ In the context of collision avoidance, humanoid robot’s safety is safety: defined as keeping the humanoid’s body away from unplanned collisions. For this, the robot should be equipped with mech- anisms to either avoid unplanned collisions or to safely manage such incidents in a way that does not involve the robot in further hazards such as permanent damages.

Joint’s current Are defined as the directly joint read values from the robotic ar- values: ticulated parts. Joint’s current values include Position, Velocity and exerted Torque as well as others such as power, current, rpm, etc. These values are instantaneous.

32 Joint’s desired Are defined as values that are used to reach robotic joints to values: specific computed values to do commanded or programmed ma- neuvers. Similar to joint’s current values, joint’s desired values include Position, Velocity and exerted Torque as well as others such as power, current, rpm, etc. Depending on the robot’s con- trolling operation mode (which will be discussed in Chapter 4), at least one of the current values has to be calculated by the computer for robot’s maneuver.

3.2 Assumptions

The work presented in this thesis is complex, as it focuses on humanoid robots operating in arbitrary confined environments with the goal of being applicable to any humanoid robot with different complex geometries and dynamic characteristics. These complexities are compounded by the many errors and inaccuracies of perception methods. While numerous HRI safety-related methods rely on excessive equipment, the comprehensive approach presented here relies on signal analysis techniques alone. To simplify the challenges associated with this work, the following four assumptions are made:

• The dynamic model of the humanoid robot accurately represents its dynamic behavior.

• The humanoid's onboard sensors provide instantaneous robot state and environment information (i.e., no noticeable delays exist in receiving sensory data).

• The collisions occurring during the development of the proposed methodology and during the detection period do not damage the robot's parts or the servomotor joints and do not decrease the sensors' accuracy.

• The torque exerted on joints by their operating motors does not significantly affect the neighboring joint motors and their movements.

3.3 Constraints

Since every experimental and research work has limitations that affect the developed approach, it is important to mention them before discussing the proposed solution. In the proposed collision detection approach, the following have been considered as constraints:

• Due to the lack of external force sensors and machinery for measuring the applied torque values, the exerted torque values are estimated using simple equipment like mallets and scales. Such equipment is thoroughly introduced in Chapter 5.

• Since maneuvers which include all body joints (upper and lower body servomotors) have not yet been developed for the humanoid robot utilized in this thesis, experiments using the upper and lower body joints together have not been conducted (Chapter 6).

3.4 The Inefficiencies in the Current Techniques

As discussed in Chapter 2, humanoid robots have benefited from various techniques to increase their own safety and that of their surroundings. Since the early works on humanoids' safety, most of these techniques have relied on visual perception. However, the sensors and equipment used for visual perception are not fully accurate. In the past years, equipment which mimics tactile sensibility has been utilized for humanoids' safety purposes in addition to previous techniques, contributing significantly to the safety of humanoids. However, the state-of-the-art tactile sensors are also accompanied by numerous complexities, such as difficult installation, high fragility, high cost, and low durability [110]. Therefore, to fully exploit the potential of tactile sensing for safe HRI experiences, developing

new low-cost methods which can provide robots with a preliminary sense of touch is beneficial.

3.5 Proposed Solution

Based on the above requirements and observations, this thesis proposes a real-time safe collision detection methodology which consists of four layers: Perception, Evaluation, Identification, and Execution (Figure 3.1). The solution uniquely combines previous work in perception and develops novel Evaluation and Identification layers. These two layers store and process the signals received from the robot's motors, in addition to the computer-commanded values during maneuvers, to recognize collisions and their locations on the humanoid's body. As can be observed from Figure 3.1, the methodology is designed in a

Figure 3.1: Operation Architecture.

combined serial way which is repeated during every control cycle (Chapter 4). Quantitative current and desired states of the robotic joints (joint current and desired values) are gathered by the Perception layer. The joint current values are then passed to the Evaluation layer. The Evaluation layer records a set of data over a repetitive short period of time, which aims to eliminate possible erroneous values received from the joints in the Perception layer, and generates a safe range which is used in the Identification layer. According to this safe range, the robot kinematics, and the desired goal values, a number of conditions are developed. The conditions developed in the Evaluation layer are then used to identify collisions and the affected robotic joints in the Identification layer. Consequently, the Identification layer detects collisions in accordance with the results from the Evaluation layer and the desired and current joint values from the Perception layer. The Identification layer then helps to decide whether or not to continue a specific maneuver, and this decision is carried to the Execution layer. The Execution layer then sends this data back to the robotic joints to stop or continue the ongoing maneuver. If the overall algorithm does not detect a collision, the above cycle continues concurrently while the robot performs any maneuver. The proposed methodology does not interfere with the humanoid robot's maneuverability or operation speed and proposes a real-time, low-cost, computationally friendly approach. The reliability and speed of the algorithm are only limited by the data received from the joint motors and the controller used for updating and sending back this data. The proposed approach has three main advantages over other approaches. First, it avoids damage to the robot and its surroundings without using any tactile, touch, or force sensors. Moreover, it saves the effort, expense, and computation time required by such sensors. As a result of this lighter computational load, considerable damage is prevented in a timely manner. By detecting the affected joints, such information can also be utilized for multi-contact balance supporting techniques without the use of touch sensors. This approach also helps to compensate for possible errors in the vision

perception-based collision avoidance or multi-contact techniques.
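To make the layered data flow concrete, the following minimal Python sketch (all function and class names are illustrative assumptions, not the thesis implementation) chains the four layers over successive control cycles; the stubbed Perception and Identification steps stand in for the mechanisms detailed in Chapter 4:

```python
import random

DT = 0.008  # control cycle length in seconds (the value used in Chapter 5)

# --- Perception layer: read current joint states and commanded values ---
def perception(n_joints):
    # Stub sensor read; a real robot would query its servomotors here.
    current = [{"pos": random.gauss(0, 1e-3), "torque": random.gauss(0, 0.05)}
               for _ in range(n_joints)]
    desired = [{"pos": 0.0} for _ in range(n_joints)]  # position-controlled case
    return current, desired

# --- Evaluation layer: buffer the last m control cycles of data ---
class Evaluator:
    def __init__(self, m=20):
        self.m, self.buffer = m, []

    def store(self, current, desired):
        self.buffer.append((current, desired))
        self.buffer = self.buffer[-self.m:]  # keep only the last m cycles

    def ready(self):
        return len(self.buffer) == self.m    # startup period of one delta-t

# --- Identification layer: placeholder per-joint collision check ---
def identify(current, desired, threshold=0.5):
    return [i for i, (c, d) in enumerate(zip(current, desired))
            if abs(c["pos"] - d["pos"]) > threshold]

# --- One serial pass per control cycle; Execution = stop or continue ---
evaluator = Evaluator()
for cycle in range(40):
    current, desired = perception(n_joints=31)
    evaluator.store(current, desired)
    if evaluator.ready() and identify(current, desired):
        print(f"cycle {cycle}: collision suspected; stopping the maneuver")
        break
```

The point of the sketch is the serial, per-cycle repetition: nothing can be detected until the Evaluation buffer has filled once, mirroring the startup period discussed in Chapter 4.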

3.6 Expectations of the Proposed Solution and Contributions

The proposed solution to the problem statement adds to the existing knowledge base in the following meaningful way:

• Development of a collision-detection technique applicable to any humanoid robot operating system, which will enable humanoids to safely maneuver in arbitrary cluttered environments and perform close-proximity safe HRI.

This contribution is achieved through a number of sub-contributions:

• Development of a time-efficient signal analysis algorithm (as fast as 0.16 seconds).

• Development of a computationally friendly pattern for detecting collisions (as fast as 0.008 seconds).

• With the application of the proposed collision detection algorithm, the humanoid robot will be able to stop relatively instantaneously in case of unplanned encounters and prevent further damage. Furthermore, it will be able to diagnose the impact location on the robot's body. Moreover, the proposed algorithm will equip the robot with a preliminary sense of touch which can subsequently be used for multi-contact balance enhancement purposes (as discussed in Chapter 2 for whole-body balancing techniques).

Based on the description above, the following chapters will further develop this proposed solution and present the detailed mathematical framework on which it is based.

Chapter 4

Operation Architecture

While Chapter 3 introduced the proposed solution to the problem statement, this chapter describes in detail each of the elements comprising the proposed operation architecture. That is, the following sections describe the Perception layer, the signal analysis mechanism, and the collision detection mechanism (Figure 3.1). Since many of the concepts of the proposed signal analysis and collision detection mechanisms are based on the output of the Perception layer, the Perception layer is discussed first in Section 4.1. Section 4.2 focuses on the signal analysis mechanism, followed by the collision detection mechanism in Section 4.3. Section 4.4 summarizes the concepts presented in this chapter.

4.1 Perception Layer

Robots' joints and motors, especially humanoid robots', generally run using one of four typical control modes: i) Torque Control Mode, ii) Velocity Control Mode, iii) Position Control Mode, and iv) PWM Control Mode (Voltage Control Mode). Based on the control mode used, the robot operates by computing and propagating the desired future values of its control mode, which are then sent to the body joints, which try to match their current readings with the desired values. For instance, a position-controlled robot calculates the desired trajectory of a certain maneuver and then the joints (or motors) are subsequently

commanded to meet the desired values as quickly and accurately as possible. Since the purpose of this thesis is to approximately mimic the information a tactile or touch sensor provides regarding detecting planned and unplanned collisions (Section 3.2) on the humanoid's body, the information perceived from the robot's motors regarding their current state, and the information computed by the commanding mechanism for the future (planned) state, are considered the main elements (sensing information) of the Perception layer. The motor information used in the proposed approach includes the position, velocity, and internal applied torque (or force) of all joints. The captured sensor data are utilized in parallel with the joints' future desired values. The joint values to use depend on the robot's control mode (e.g., the joints' position values for a position-controlled robot). The joint values are updated every control cycle δt, where δt is the time that one cycle of the control algorithm takes to compute. The current state of the joints and the goal (calculated) joint values are then independently fed to the signal analysis mechanism and the collision detection mechanism for further evaluation and interpretation, increasing the accuracy of diagnosing the affected joints during collisions (Figure 4.1).

4.2 Signal Analysis Mechanism

The signal analysis mechanism is responsible for evaluating and filtering the raw data received from the robotic joints' motors. Because the output of the Perception layer (i.e., sensor readings) can be erroneous due to the characteristics of the sensors themselves (e.g., sensitivity) as well as the numerous challenges associated with sensor measurements (e.g., accuracy, etc.), the aim of the signal analysis mechanism is to reduce (eliminate) such erroneous sensor measurements. In the following sections, the sequence of processes comprising the signal analysis mechanism is discussed.

4.2.1 Signal Storing Process.

According to the robots’ control mode (e.g. position-controlled robots), one of the current received values’ categories is used by the computation mechanism of the robot for calculating the desired (goal) values which the joints have to follow. Throughout this thesis, the desired position, velocity, and torque values (calculated by computer) of a joint i at iteration cycle m are denoted as P m, V m , and τ m respectively. Likewise, the current values (received from di di di the motors) are denoted as P m, V m , and τ m . The full set of these values can be defined as: ci ci ci S ={ P m, V m τ m, P m, V m , τ m } . In short, the current values are directly read from the ci ci ci di di di joints and the desired values are derived from the computation mechanisms of the robot.

As illustrated in Figure 4.1, a subset $s_x \in S$ of the current and desired position, velocity, and torque values for the robotic joints is stored for each period $\Delta t = \sum_{m=1}^{l} \delta t_m$, where $\delta t_m$ stands for the mth iteration of the robot's control cycle. In Section 4.1, δt was defined as a constant value, but due to communication and computation complexities it can differ slightly from that constant value. Therefore, throughout this thesis, $\delta t_m$ simply represents

the value of δt at the mth iteration, allowing for minor deviations from the initially programmed value of δt. Thus, $\delta t_m$ can be defined as $\delta t_m = \delta t + \epsilon$, where ε denotes such a minor deviation. Depending on the robot's control mode, only one of the computed (desired) values will be available during the robot's operation. The subsets of current and desired values from S available during the robot's operation (for a given control mode) are defined as follows:

• For a position-controlled robot, the subset $s_p = \{P_{d_i}^m, P_{c_i}^m, V_{c_i}^m, \tau_{c_i}^m\}$ is available.

• For a velocity-controlled robot, the subset $s_v = \{V_{d_i}^m, P_{c_i}^m, V_{c_i}^m, \tau_{c_i}^m\}$ is available.

• For a torque-controlled robot, the subset $s_\tau = \{\tau_{d_i}^m, P_{c_i}^m, V_{c_i}^m, \tau_{c_i}^m\}$ is available.

Upon availability, the values for S are stored during the time period ∆t in arrays denoted

by $P_{c_i}$, $V_{c_i}$, $\tau_{c_i}$, $P_{d_i}$, $V_{d_i}$, and $\tau_{d_i}$. As an example, the array $P_{c_i}$ is shown in Equation 4.1:

$$P_{c_i} = \begin{bmatrix} P_{c_i}^1 \\ P_{c_i}^2 \\ \vdots \\ P_{c_i}^m \end{bmatrix} \qquad (4.1)$$

where $P_{c_i}$ is the vector containing the captured current position values of joint i during the period ∆t. That is, such vectors comprise the m data values captured in the last (previous) m control cycles.
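As a concrete illustration of this storing process, the sketch below (numpy-based; the buffer names and helper functions are assumptions, not the thesis code) keeps the position-controlled subset $s_p$ over the last m control cycles:

```python
from collections import deque
import numpy as np

M = 20  # number of control cycles stored per period delta-t

# One bounded buffer per signal in s_p = {P_d, P_c, V_c, tau_c}; deque(maxlen=M)
# drops the oldest cycle automatically once M cycles have been stored.
buffers = {name: deque(maxlen=M) for name in ("P_d", "P_c", "V_c", "tau_c")}

def store_cycle(P_d, P_c, V_c, tau_c):
    """Append one control cycle's per-joint values to every buffer."""
    for name, value in zip(buffers, (P_d, P_c, V_c, tau_c)):
        buffers[name].append(np.asarray(value, dtype=float))

def as_array(name):
    """Stack a buffer into an (m x n_joints) array; column i is the
    per-joint vector of Equation 4.1 for joint i."""
    return np.vstack(buffers[name])

# Example: three joints held in place for M idle cycles.
for _ in range(M):
    store_cycle([0, 0, 0], [1e-3, -2e-3, 0.0], [0, 0, 0], [0.1, -0.05, 0.2])
print(as_array("P_c").shape)  # -> (20, 3)
```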

Figure 4.1: Evaluation Layer.

4.2.2 Dynamic Filtering of the Stored Signals.

Of the three types of sensed data (joint values) captured (i.e., position, velocity, and torque), it was observed via experimental tests that the torque data contains the greatest number of outliers. In this thesis, outliers are considered to be values substantially different from the average of the captured data patterns. Figure 4.2 shows an example of recorded patterns of measured joint torque values for three different servomotor robotic joints (two 200 W and one 100 W servomotors) during a 98-second time span in an idle state (not moving). As can be noticed from the first and second graphs of Figure 4.2 (signals in green and yellow), the majority of the recorded data lies within an explicit range, whereas the outliers tend to leave this range. Figure 4.2 illustrates some of the identified outliers in the example data, where the more significant outliers have been circled in blue. Although numerous methods already exist for interpreting erroneous data with considerable amounts of false irregularities, developing methods which achieve higher accuracy in detecting and replacing outliers, without noticeable data loss and in a time-efficient manner, is highly beneficial.

Figure 4.2: Robotic joints' sensed torque values for two 200 W and one 100 W Dynamixel Pro motors.

In this thesis, for the purpose of time and computation efficiency, a dynamic filter was

developed. The proposed filter is a moving (dynamic) filtering method for detecting local outliers according to a window size $W_s$. The moving method used in this thesis identifies outliers as elements more than three local scaled Median Absolute Deviations (MAD) from the local median over a sliding window of size $W_s$ (Figure 4.2). It must be noted that the condition $m > 2W_s$ is required for efficient filtering purposes. Chapter 6 will further discuss and justify this condition through a variety of examples and illustrations. For joint i during ∆t, the scaled MAD is defined as follows:

$$\text{Scaled MAD} = C \times \operatorname{median}\left(\left|\tau_{c_i}^m - \operatorname{median}(\tau_{c_i})\right|\right) \qquad (4.2)$$

where

$$C = \frac{-1}{\sqrt{2}\,\operatorname{erfc}^{-1}\!\left(\frac{3}{2}\right)} \qquad (4.3)$$

In this formula, $\operatorname{erfc}^{-1}$ stands for the inverse complementary error function; as a result, the value of C is equal to 1.4826 [111].
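A minimal numpy sketch of this moving scaled-MAD detector (an illustration consistent with Equations 4.2 and 4.3, not the thesis's MATLAB implementation) flags a sample as an outlier when it deviates from its local median by more than three local scaled MADs:

```python
import numpy as np

C = 1.4826  # scaling constant -1/(sqrt(2)*erfcinv(3/2)), per Equation 4.3

def moving_mad_outliers(tau, ws):
    """Flag elements more than 3 local scaled MADs from the local median.

    tau: 1-D array of stored torque values for one joint (length m).
    ws:  window size W_s; callers should keep m > 2*ws (Section 4.2.2).
    """
    tau = np.asarray(tau, dtype=float)
    half = ws // 2
    flags = np.zeros(tau.size, dtype=bool)
    for k in range(tau.size):
        # Centered sliding window, truncated at the array boundaries.
        window = tau[max(0, k - half): k + half + 1]
        med = np.median(window)
        scaled_mad = C * np.median(np.abs(window - med))  # Equation 4.2
        flags[k] = np.abs(tau[k] - med) > 3 * scaled_mad
    return flags

# Example: a quiet torque signal with one injected spike (the outlier).
signal = np.array([0.10, 0.12, 0.09, 0.11, 2.50, 0.10, 0.12, 0.11, 0.10, 0.09,
                   0.11, 0.10, 0.12, 0.09, 0.10, 0.11, 0.10, 0.12, 0.11, 0.10])
print(np.flatnonzero(moving_mad_outliers(signal, ws=9)))  # flags index 4
```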

4.2.3 Replacing Outliers with Modified Data.

Once outliers are identified in each array $\tau_{c_i}$, they are replaced with modified values. The outliers are filled using linear interpolation of the neighboring values. Linear interpolation was chosen as the filling method for outliers in the arrays of $\tau_{c_i}^m$ according to the patterns of torque values received from the robot's joints. As illustrated in Figure 4.2, torque values read directly from the robot's joints have a constantly rising and falling pattern. Therefore, to maintain this original pattern, which is required for further steps of the proposed methodology (Equation 4.6), linear interpolation of neighboring values showed the most beneficial results in comparison with other filling methods such as piecewise cubic spline interpolation. Moreover, with the condition of

$m > 2W_s$, the linear interpolation method leads to less data loss while eliminating outliers. Based on the experiments illustrated in the following chapters, the condition $m > 2W_s$ on the window size $W_s$ and the number of storing iterations m, combined with the chosen interpolation method, reduces the number of outliers while not filtering out the excessive torque values which indicate a possible collision.
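Continuing the sketch above under the same assumptions, the flagged samples can be replaced by linear interpolation between the nearest unflagged neighbors:

```python
import numpy as np

def fill_outliers_linear(tau, flags):
    """Replace flagged samples by linearly interpolating between the
    nearest non-flagged neighbors; np.interp holds the first/last valid
    value at the array edges."""
    tau = np.asarray(tau, dtype=float)
    good = ~np.asarray(flags, dtype=bool)
    idx = np.arange(tau.size)
    filled = tau.copy()
    filled[~good] = np.interp(idx[~good], idx[good], tau[good])
    return filled

# Example: the spike at index 4 is replaced by the mean of its neighbors.
tau = np.array([0.10, 0.12, 0.09, 0.11, 2.50, 0.10, 0.12])
flags = np.array([False, False, False, False, True, False, False])
print(fill_outliers_linear(tau, flags))  # index 4 becomes (0.11 + 0.10) / 2
```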

4.2.4 Processing Filtered and Stored Values.

In this section, the processes required to generate the requirements of the three final conditions that lead to the detection of collisions and their locations are discussed.

4.2.4.1 1st Condition Data Processing.

Depending on whether the robot is performing an accelerating or non-accelerating maneuver, the methodology proposes a different approach towards processing the stored values. Assuming that the humanoid robot is performing a non-accelerating maneuver, the maximum and

minimum values of $P_{c_i}$, $V_{c_i}$, and $\tau_{c_i}$ are calculated according to data availability (the robot's control mode). The differences between the maximum and minimum values are stored as $MAM_{P_{c_i}}$, $MAM_{V_{c_i}}$, and $MAM_{\tau_{c_i}}$. Note that if the robot is performing a non-constant velocity maneuver, only the value of $MAM_{\tau_{c_i}}$ is valid for further steps. Overall, the results of this step are used to generate the requirements of the first collision detection condition.
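Under the earlier buffering sketch, this range computation reduces to a one-line helper (the name mam is a hypothetical shorthand for the max-minus-min value):

```python
import numpy as np

def mam(window):
    """Max-minus-min range per joint over one stored delta-t window.

    window: (m x n_joints) array, e.g. the stacked tau_c buffer. The result
    is the MAM threshold consumed by the first detection condition.
    """
    window = np.asarray(window, dtype=float)
    return window.max(axis=0) - window.min(axis=0)

# e.g. MAM_tau = mam(tau_window); for a non-constant-velocity maneuver only
# the torque range remains valid, as noted above.
```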

4.2.4.2 2nd Condition Data Processing.

Subsequently, and based on the robot's control mode, for each $P_{c_i}^m$, $V_{c_i}^m$, $\tau_{c_i}^m$, $P_{d_i}^m$, $V_{d_i}^m$, and $\tau_{d_i}^m$, the difference between the current and desired values for either position, velocity, or torque over the time period ∆t is computed. The differentiated values are stored into new arrays $dP_i$, $dV_i$, and $d\tau_i$. For a position-, velocity-, and torque-controlled robot, the following hold respectively:

$$dP_i = \left|P_{c_i} - P_{d_i}\right| \qquad (4.4a)$$

$$dV_i = \left|V_{c_i} - V_{d_i}\right| \qquad (4.4b)$$

$$d\tau_i = \left|\tau_{c_i} - \tau_{d_i}\right| \qquad (4.4c)$$

Thereafter, the maximum values of the arrays $dP_i$, $dV_i$, and $d\tau_i$ are calculated and stored as $dP_i^{Max}$, $dV_i^{Max}$, and $d\tau_i^{Max}$. These values are updated during each ∆t and are used in the collision detection mechanism based on the robot's control mode (position, velocity, or torque). These calculations are possible upon data availability, and according to the robot's control mode, only one of the three values $dP_i^{Max}$, $dV_i^{Max}$, or $d\tau_i^{Max}$ can be calculated. That is, for position-controlled robots:

$$dP_i^{Max} = \max(dP_i) \qquad (4.5a)$$

for velocity-controlled robots:

$$dV_i^{Max} = \max(dV_i) \qquad (4.5b)$$

and for torque-controlled robots:

$$d\tau_i^{Max} = \max(d\tau_i) \qquad (4.5c)$$

the corresponding equation is applied. The values of $dP_i^{Max}$, $dV_i^{Max}$, and $d\tau_i^{Max}$ are later utilized during the collision detection mechanism as the second condition's requirement for the final decision making.
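For illustration, the second condition's per-joint requirement can be computed from the stored windows as follows (a hypothetical helper sketching Equations 4.4 and 4.5, not the thesis code):

```python
import numpy as np

def max_tracking_error(desired, current):
    """Per-joint maximum of |current - desired| over one stored window
    (a sketch of Equations 4.4 and 4.5; both inputs are m x n_joints)."""
    diff = np.abs(np.asarray(current, dtype=float) -
                  np.asarray(desired, dtype=float))
    return diff.max(axis=0)

# For a position-controlled robot only the position pair is available:
# dP_max = max_tracking_error(P_d_window, P_c_window)
```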

4.2.4.3 3rd Condition Data Processing.

Based on the data filtered in Section 4.2.3, this section aims to use the filtered arrays $\hat{\tau}_{c_i}$ for the final condition in detecting the collisions impacting a humanoid robot's body. The last condition uses the slopes between the $\hat{\tau}_{c_i}^m$ values to check for possible collisions. Even though the torque values follow a constantly rising and falling pattern between each $\hat{\tau}_{c_i}^m$, the absolute values of the slopes between each pair of consecutive $\hat{\tau}_{c_i}^m$ for each δt can still be utilized for detecting possible collisions. For this purpose, using the following formula, the m − 1 absolute slope values between the $\hat{\tau}_{c_i}^m$ variables are found during the control cycle δt for each joint i:

$$S_{\hat{\tau}_{c_i}} = \begin{bmatrix} \left|\hat{\tau}_{c_i}^1 - \hat{\tau}_{c_i}^2\right|/\delta t \\ \left|\hat{\tau}_{c_i}^2 - \hat{\tau}_{c_i}^3\right|/\delta t \\ \vdots \\ \left|\hat{\tau}_{c_i}^{m-1} - \hat{\tau}_{c_i}^m\right|/\delta t \end{bmatrix}_{(m-1)\times 1} \qquad (4.6)$$

As can be observed, $S_{\hat{\tau}_{c_i}}$ is a matrix of size (m − 1) × 1 which is calculated over the time period ∆t for each joint i. As the next step, the maximum value of the $S_{\hat{\tau}_{c_i}}$ matrix is calculated as $S_{\hat{\tau}_{c_i}}^{Max}$. The $S_{\hat{\tau}_{c_i}}^{Max}$ values, which are updated for each ∆t, are used for the third and last condition for detecting collisions on the humanoid body. In the next section, the overall logic and mechanism of using the three defined conditions are discussed in full detail.
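A corresponding sketch for Equation 4.6 and its per-joint maximum (same assumptions as the previous snippets):

```python
import numpy as np

DT = 0.008  # control cycle delta-t in seconds

def max_abs_slope(tau_hat, dt=DT):
    """Per-joint maximum absolute slope between consecutive filtered torque
    samples (a sketch of Equation 4.6 and its maximum).

    tau_hat: (m x n_joints) array of outlier-filtered torque values.
    """
    slopes = np.abs(np.diff(np.asarray(tau_hat, dtype=float), axis=0)) / dt
    return slopes.max(axis=0)  # S_max per joint, over the (m - 1) slopes
```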

4.3 Collision Detection Mechanism

In the previous sections, it was explained how the robotic joints' current and desired values, such as position, velocity, and torque, can be stored, filtered, and processed for the purpose of collision detection. In this section, the previously explained requirements of the three detection conditions are utilized for a time- and computation-efficient humanoid robot collision detection. The algorithm has a startup time of ∆t before the detection

mechanism can be initiated. This is due to the minimum time the signal analysis mechanism (Evaluation layer) requires to store, filter, and process its input data. The collision detection mechanism is initiated with the first condition, which uses the values of $MAM_{P_{c_i}}$, $MAM_{V_{c_i}}$, and $MAM_{\tau_{c_i}}$ calculated as explained in Section 4.2.4.1. Once the algorithm has finished its startup time, for each δt (the period between consecutive arrays of current, and also desired, data), the current values at the present time iteration m and at the previous time iteration m − 1 are compared against the $MAM_{P_{c_i}}$, $MAM_{V_{c_i}}$, and $MAM_{\tau_{c_i}}$ values developed during the preceding ∆t, as follows for each joint i:

$$\left|\tau_{c_i}^{m-1} - \tau_{c_i}^m\right| \le MAM_{\tau_{c_i}} \qquad (4.7a)$$

$$\left|P_{c_i}^{m-1} - P_{c_i}^m\right| \le MAM_{P_{c_i}} \qquad (4.7b)$$

$$\left|V_{c_i}^{m-1} - V_{c_i}^m\right| \le MAM_{V_{c_i}} \qquad (4.7c)$$

Figure 4.3: Identification Layer.

where $P_{c_i}^m$, $V_{c_i}^m$, and $\tau_{c_i}^m$ stand for the current (time iteration m) position, velocity, and

torque values, and $P_{c_i}^{m-1}$, $V_{c_i}^{m-1}$, and $\tau_{c_i}^{m-1}$ stand for the preceding time iteration m − 1 position, velocity, and torque values for each joint i. As mentioned before, the comparison of all three categories is valid only in the case of a non-accelerating maneuver; in the case of an accelerating maneuver, only the torque comparison of the first condition can be used for the purpose of collision detection (Equation 4.7a). Equations 4.5, which were initially explained in Section 4.2.4.2, are used to develop the second condition for detecting collisions impacting humanoid robots. Depending on the robot's control mode, one of Equations 4.5 is used for detecting a collision. According to Section 4.2.4.2, the absolute difference between the current-time (iteration m) current and

desired values for either torque, position, or velocity is compared with $dP_i^{Max}$, $dV_i^{Max}$, and $d\tau_i^{Max}$, which were computed during the preceding ∆t, as follows for each joint i:

$$\left|\tau_{d_i}^m - \tau_{c_i}^m\right| \le d\tau_i^{Max} \qquad (4.8a)$$

$$\left|P_{d_i}^m - P_{c_i}^m\right| \le dP_i^{Max} \qquad (4.8b)$$

$$\left|V_{d_i}^m - V_{c_i}^m\right| \le dV_i^{Max} \qquad (4.8c)$$

If the absolute difference between the current (iteration m) current and desired values (e.g., position) is less than or equal to the maximum absolute difference between current and desired values over the preceding ∆t period, then the system does not detect a collision. Meeting the second condition, alongside the first and the last conditions, assists in accurately detecting any possible collisions. In Section 4.2.4.3, the requirements for developing the last condition for detecting collisions were explained. According to Equation 4.6, at each control cycle δt, the absolute slope value between the current time m and the previous cycle (time m − 1) is calculated

and compared with the result of Equation 4.6, $S_{\hat{\tau}_{c_i}}$, as follows for each joint i:

$$\frac{\left|\tau_{c_i}^m - \tau_{c_i}^{m-1}\right|}{\delta t} \le S_{\hat{\tau}_{c_i}}^{Max} \qquad (4.9)$$

If the above condition is met, alongside the other two conditions (Equations 4.7 and 4.8), the system will not detect a collision. The results of the three conditional statements are published for all joints i as binary values, 0 meaning no collision has been detected and 1 meaning a detected collision, in addition to the current readings from the robotic joints, $P_{c_i}^m$, $V_{c_i}^m$, and $\tau_{c_i}^m$, which resulted in the detection of a collision, and the joints' ID numbers for better clarification. Hence, for a humanoid robot with joints i = 1, 2, ..., n we have:

$$DMC_j = \begin{bmatrix} 0|1 \\ 0|1 \\ \vdots \\ 0|1 \end{bmatrix}_{i\times 1} \qquad (4.10a)$$

$$DJID_j = \begin{bmatrix} 0|1 \\ 0|2 \\ \vdots \\ 0|i \end{bmatrix}_{i\times 1} \qquad (4.10b)$$

$$DJc_j = \begin{bmatrix} \tau_{c_1}^m|0 & P_{c_1}^m|0 & V_{c_1}^m|0 \\ \vdots & \vdots & \vdots \\ \tau_{c_i}^m|0 & P_{c_i}^m|0 & V_{c_i}^m|0 \end{bmatrix}_{i\times 3} \qquad (4.10c)$$

where $DMC_j$, $DJID_j$, and $DJc_j$ stand for the "Detection Matrix for the jth Condition", the "Detected Joints' IDs for the jth Condition", and the "Detected Joints' Current Readings for the jth Condition", for conditions j = 1, 2, 3. The final decision is made by the combination and intersection of the results of the

three overall conditions as follows:

$$FDM = DMC_1 \wedge DMC_2 \wedge DMC_3 = \begin{bmatrix} 0|1 \\ 0|1 \\ \vdots \\ 0|1 \end{bmatrix}_{i\times 1} \qquad (4.11a)$$

$$FDJID = DJID_1 \wedge DJID_2 \wedge DJID_3 = \begin{bmatrix} 0|1 \\ 0|2 \\ \vdots \\ 0|i \end{bmatrix}_{i\times 1} \qquad (4.11b)$$

$$FDJc = DJc_1 \wedge DJc_2 \wedge DJc_3 = \begin{bmatrix} \tau_{c_1}^m|0 & P_{c_1}^m|0 & V_{c_1}^m|0 \\ \vdots & \vdots & \vdots \\ \tau_{c_i}^m|0 & P_{c_i}^m|0 & V_{c_i}^m|0 \end{bmatrix}_{i\times 3} \qquad (4.11c)$$

where FDM, FDJID, and FDJc stand for the "Final Detection Matrix", "Final Detected Joints' IDs", and "Final Detected Joints' Current Readings" respectively. According to Equations 4.11, the final result is calculated with the help of the binary "AND" operator (Figure 4.4). Since the exceeding torque, position, and velocity values are also identified, the proposed methodology finds the approximate value of the torque applied to the robotic joint (or joints), in addition to the amount of displacement and the change in velocity. Consequently, the results of the Identification layer are passed to the Execution layer for further steps to complete the operation architecture. At this stage, the robot decides to stop or continue a maneuver based on the results of the collision detection mechanism.
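Putting the three conditions together for a position-controlled robot, the final intersection of Equation 4.11a can be sketched as follows (hypothetical names; the thresholds come from the preceding ∆t window as described above):

```python
import numpy as np

def detect_collisions(tau_c, P_c, P_d, mam_tau, dP_max, s_max, dt=0.008):
    """Final decision sketch for a position-controlled robot (Eq. 4.11a):
    a joint is flagged only when all three no-collision checks are violated.

    tau_c, P_c, P_d: (m x n_joints) stored current/desired arrays;
    mam_tau, dP_max, s_max: per-joint thresholds from the preceding delta-t.
    Returns a boolean vector over the joints.
    """
    c1 = np.abs(tau_c[-2] - tau_c[-1]) > mam_tau        # Eq. 4.7a violated
    c2 = np.abs(P_d[-1] - P_c[-1]) > dP_max             # Eq. 4.8b violated
    c3 = np.abs(tau_c[-1] - tau_c[-2]) / dt > s_max     # Eq. 4.9 violated
    return c1 & c2 & c3  # logical AND, mirroring Figure 4.4

# Joint IDs to report: np.flatnonzero(detect_collisions(...)) + 1
```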

4.4 Operation Architecture Summary

In Chapter 4, the different layers of the humanoid robot collision detection operation architecture were explained. The Perception layer is responsible for reading the current values received from the robotic joints, which are updated every control cycle δt. The Evaluation layer stores these data, together with the future desired values based on the robot's control mode, then processes and filters them if necessary. Thereupon, the requirements of the three main conditions which constitute the Identification layer are calculated. The Identification layer, which is updated during each $\Delta t = \sum_{m=1}^{l} \delta t_m$, decides whether the robotic joints have been affected by a collision for each δt, which can be as fast as 8 msec. Once the robot joints have been examined for possible collisions, the results are sent to the Execution layer to stop or continue a maneuver. Overall, after a time ∆t from the beginning of any maneuver, the collision detection operation architecture starts to inspect the robotic joints for collisions in order to stop or continue their maneuver. This chain of layers is repeated, following Figures 4.1, 4.3, and 3.1, as long as the robot is running, and helps the robot to benefit from a time- and computation-efficient safe operation.

Figure 4.4: Graphical demonstration of the final step of the collision detection.

Chapter 5

Experimental Humanoid Robot Characteristics

In order to test and analyze the proposed collision detection theoretical framework developed in Chapters 3 and 4, Chapter 6 will present a number of tests and experimental results for the collision detection of a position-controlled, life-size humanoid robot performing a variety of maneuvers, such as moving its arms and walking on smooth surfaces. The robot was subjected to different collisions of various intensities, impacting the robot in different orientations. This chapter discusses how the computer testing and experimental environments were implemented, the details of the operation architecture model used for the position-controlled humanoid robot, and details about the robot's abilities, especially regarding environmental perception.

5.1 Humanoid Robot Specifications

As described in previous chapters, the focus of this thesis is to enable autonomous humanoid systems to detect planned and unplanned collisions with the goal of improving balance, increasing safe HRI, and performing responsive path planning, among other things. By detecting unplanned collisions in real time, as defined in Chapter 3, it is expected that robots will

increase their own safety and the safety of the people they interact with, and enhance their interactions with the equipment they use. Furthermore, by knowing the location of the impacts applied to the humanoid's body, robots can more effectively employ this knowledge for balance enhancement and effective physical interaction with humans and their environment. In this R&D work, a life-size humanoid robot called "Taiko" is used as the experimental robot (Figure 5.1). Taiko is a 42 kg metallic humanoid robot with 29 Degrees of Freedom (DoF) and a height of 137.7 cm (Figure 5.2). This humanoid is controlled using ROS (Robot Operating System) running on two Intel® NUC 5i minicomputers (Figure 5.2a).

Figure 5.1: Dr. Alejandro Ramirez-Serrano standing next to the Humanoid Robot Taiko.

The robot is equipped with a set of sensors including a Logitech C920 HD camera, an Intel RealSense, a 3D point cloud lidar, a Hokuyo UTM-30LX-EW laser, a MicroStrain 3DM-GX4-25 IMU (Inertial Measurement Unit) sensor, and two ATI Mini58-SI-2800-120 force sensors located on the robot's ankles. Schematics showing how these components are assembled are shown in Figure 5.2a and Figure 5.3. The connection schematic is demonstrated in Figure 5.3.

Figure 5.2: (a) Taiko physical specifications, (b) front view, and (c) side view.

The experimental humanoid has 29 DoF distributed as follows: 2 in the head, 7 in each arm (in addition to a gripper on each arm), 1 in the waist, and 6 in each leg (see Figure 5.2b and c). The joint servomotors include ten 200 W, eleven 100 W, and eight 20 W DYNAMIXEL PRO H-Series actuators. Each of the robotic joints (servomotors) has sensors for voltage, torque, position, velocity, acceleration, and temperature that can be used for control purposes. Based on these 29 actuators and the 2 grippers at the end of each hand, and in the context of this thesis, there are 31 points (joints) where collisions can be detected (using the proposed collision detection approach described in Chapter 4). The sensing range values for the three different motor models (i.e., 200 W, 100 W, and 20 W) vary as shown in Tables 5.1, 5.2, and 5.3. The variety of the joints, in addition to the high number of DoF, gives Taiko high potential for maneuvering in cluttered and confined spaces while also having the capability

Table 5.1: Control Table for the 200-Watt motors (Dynamixel H54-200-S500-R).

Table 5.2: Control Table for the 100-Watt motors (Dynamixel H54-100-S500-R).

to interact with its environment (e.g., objects and tools in diverse ways). Additionally, as can be concluded from control Tables 5.1, 5.2, and 5.3, Taiko's robotic joints can exert a considerable amount of force at high speed during maneuvers, which can damage the robot and its surroundings in case of a collision. Hence, Taiko's complex and broad capabilities enable it to illustrate the full capacity of the proposed real-time collision detection solution presented in Chapter 4 and the testing and experimental results provided in the following chapter.

Table 5.3: Control Table for the 20-Watt motors (Dynamixel H42-20-S300-R).

5.2 Experimental Implementation

The proposed method described in Chapter 4 has been implemented in ROS (Robot Operating System) Kinetic Kame and simulated in MATLAB, where the Perception and Execution layers of the proposed solution (see Figures 5.3 and 3.1), running under ROS Kinetic Kame, are used to operate Taiko. MATLAB, on the other hand, is utilized for the Evaluation and Identification layers of the proposed approach (see Figure 5.3). The outputs of the Perception layer include the real-time values received from the robotic joint motors in the ROS Kinetic platform, which are then fed into the developed MATLAB code for further processing. The data transfer between the computer receiving the data on the ROS platform and the computer running MATLAB is done through the D-Link DIR-816L modem shown in Figure 5.3. The output of the proposed Identification layer is fed into the Execution layer in ROS Kinetic. Under the ROS system, the humanoid robot Taiko publishes its current joint states (mentioned in Chapter 4) and the desired joint position values. The control cycle for updating the joints' state (referred to as control cycle δt in Chapter 4) was programmed to be 8 milliseconds in the ROS Kinetic platform. Thus, the output of the Perception layer, which is provided as the input of the Evaluation and Identification layers, is updated every 8

msec. The Identification and Evaluation layers, however, are designed to be flexible in terms of updating; therefore, in cases of faster or slower data publishing, the developed mechanisms are able to adjust their processing rate accordingly. The overall process presented in Figure 3.1 can be started immediately upon the robot's motor power initialization, when the joint motors' sensor readings and desired trajectory values become available. While the humanoid robot has a control cycle δt of 8 msec, the number of control cycles, m, used for storing the joint sensor readings for the developed collision detection mechanism is equal to 20. Therefore, according to the equation presented in Section 4.3, the startup time ∆t to initialize the collision detection mechanism is 0.16 seconds.
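Concretely, assuming the nominal control cycle (deviations ε neglected), the startup period follows directly:

$$\Delta t = \sum_{m=1}^{20} \delta t_m \approx 20 \times 0.008\ \text{s} = 0.16\ \text{s}$$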

The proposed methods have been executed using a set of four computers: two Intel® NUCs equipped with Intel® Core™ i5 processors and 8 GB of DDR4 RAM, running Ubuntu 16.04, responsible for communicating with the servomotors and the robot's sensors (e.g., camera, lidar, IMU, etc.) mounted on the robotic body (the MPC and PPC computers in Figures 5.3

and 5.2). The operating computer (OPC in Figure 5.2) consists of an Intel® Core™ i7-2600K @ 3.4 GHz with 16 GB of RAM, running Ubuntu 16.04. The OPC is connected to the robot's mounted D-Link DIR-806A router with a Cat5 LAN cable. The fourth and last computer used in the humanoid hardware experimental setup is an Intel® Core™ i7-7700HQ CPU @ 2.8 GHz with 16 GB of RAM running Windows 10 (the DPC computer in Figure 5.3). The development computer (DPC) is connected to the operating computer (OPC) through a Wi-Fi connection via the D-Link DIR-816L modem. Both minicomputers (i.e., MPC and PPC) are connected to the D-Link DIR-806A router mounted on the robot and are synchronized with the operating computer (OPC) prior to the experimental operations. The MPC computer controls the 29-motor joint assembly of the body using the joint layout shown in Figure 5.4, which also provides the joint names. To test the proposed collision detection approach, the humanoid robot was subjected to diverse controlled collisions, each applied on a different part of the robot's body and with a controlled intensity.

Figure 5.3: Schematic diagram of the humanoid's hardware architecture.

The collisions applied on Taiko's joints were made using a Stanley 51-104 rubber mallet weighing 454 g (Figure 5.5). The rubber mallet was used to soften the collision incidents with the metallic robot and keep impacts below a damaging force threshold, preventing damage to the robot's internal and external systems.

5.3 Remarks

Although the specified experimental humanoid robot (controlled via a joint position control mechanism) was selected to test the developed collision detection approach, the mechanism is generic and can be applied to any type of humanoid robot, provided the robot is able to publish feedback sensory data regarding the condition of its joints (i.e., torque, output shaft position, and rpm). Humanoid robots like Taiko, with the ability to publish their joints' current and future planned (desired) states, and with numerous DoF, are great candidates for safe HRI if equipped with mechanisms like the proposed collision detection mechanism. The ability to detect collisions and stop any motion creating the collision in real time can prevent hazardous maneuvers based on the feedback from the joints, in addition to complementary techniques which might utilize the data provided by other sensors.

Figure 5.4: Taiko's joint ID map.

Figure 5.5: Stanley 51-104 rubber mallet.

This approach allows us to focus on the collision detection task itself, which can also be employed for balance enhancement purposes. That is, by equipping robots with the knowledge of where their body is being touched or impacted by other objects, they can employ such information to achieve better-balanced positions without the use of expensive touch, tactile, or force sensors (some of which, such as robotic skins, are still under development).

Chapter 6

Experimental Results

In this chapter, the capabilities of the proposed approach discussed in Chapters 3 and 4 are presented. This chapter presents diverse experimental results of the collision detection mechanism on a humanoid hardware experimental platform (Chapter 5) using position control. The detection of collisions of various intensities applied to Taiko (the humanoid) follows a number of sub-tasks (Chapter 4), such as dynamic filtering of collision outliers. Section 6.1 presents the experimental testing set-up used to test the proposed approach. Subsequently, Sections 6.2 and 6.3 illustrate results of the proposed filtering and detection techniques respectively. Finally, this chapter demonstrates the overall ability of the proposed methodology to detect not only collisions but also the corresponding contact locations on Taiko's body when the robot is idle (standing, not moving) and moving (e.g., walking, moving its arm(s), or kicking) (Section 6.4).

6.1 Experimental Testing Set-up

Since the proposed methodology is tested on a position-controlled humanoid, as described in Section 4.2.1, the subset of values for position-controlled robots, $s_p = \{P_{d_i}^m, P_{c_i}^m, V_{c_i}^m, \tau_{c_i}^m\}$, is stored during the time period $\Delta t = \sum_{m=1}^{l} \delta t_m$. The control cycle δt for the humanoid robot was set equal to 0.008 seconds, and the values for $s_p$ are stored

for twenty data capture cycles, m = 20. As a result, the algorithm captures motor sensor data over a time period of ∆t = 0.16 seconds. Therefore, the collision detection mechanism can start detecting collisions 0.16 seconds after the initialization of the robot's desired and current joint data publishing. While the detection time of 0.16 seconds facilitates fast, real-time, and safe collision detection, different choices of the data capture period m will be explored in Section 6.3 through a series of collision detection experiments with various values of m.

In the following section, the proposed choice of $W_s$ (i.e., window size), which satisfies the

condition $m > 2W_s$ mentioned in Section 4.2.2 and provides a good compromise between fast and accurate collision detection, will be discussed.

6.2 Motor’s Torque Dynamic Filtering

In Sections 4.2.2 and 4.2.3, the filtering technique for the stored torque values of motor i, $\tau_{c_i}^m$, was explained. In the implemented methodology, the signal storing process stores values of $\tau_{c_i}^m$ for m = 20 iterations. The window size for the filtering mechanism

was set to $W_s = 9$, which satisfies the condition $m > 2W_s$. For the suggested storing

iteration number, $W_s = 9$ provides enough sensory data points for filtering outliers, while

preserving the impact that collisions have on the values of $\tau_{c_i}$ (current torque values). This window size, alongside the scaled MAD (Median Absolute Deviation) method for detecting signal outliers (Section 4.2.2) and the linear interpolation of neighboring values used for modifying the outliers (Section 4.2.3), demonstrates better dynamic filtering results for the iteration number of 20. In the following plots, various values of

Ws will be used to validate the suggested value in this thesis.
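To illustrate the filtering stage, the sketch below implements one common form of sliding-window scaled-MAD outlier replacement. The 3-scaled-MAD flagging rule and the 1.4826 consistency factor follow the usual convention for normally distributed data and are assumptions here; the exact thresholds are those defined in Section 4.2.2.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Median of a small window (taken by value so the copy can be sorted).
static double median(std::vector<double> v) {
    std::sort(v.begin(), v.end());
    const std::size_t n = v.size();
    return (n % 2 == 1) ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

// Sliding-window filter over the m stored torque samples tau_ci.
// A sample is flagged as an outlier when it deviates from its window's
// median by more than 3 scaled MADs (the 1.4826 factor makes the MAD a
// consistent estimator of the standard deviation for normal data).
// Flagged samples are replaced by linear interpolation between the
// nearest unflagged neighbours, preserving the rise/fall pattern.
std::vector<double> filterTorque(const std::vector<double>& tau, int ws) {
    const int n = static_cast<int>(tau.size());  // here n = m = 20, ws = 9
    std::vector<bool> outlier(n, false);
    for (int i = 0; i < n; ++i) {
        const int lo = std::max(0, i - ws / 2);
        const int hi = std::min(n, i + ws / 2 + 1);
        std::vector<double> win(tau.begin() + lo, tau.begin() + hi);
        const double med = median(win);
        for (double& w : win) w = std::fabs(w - med);  // absolute deviations
        const double scaled_mad = 1.4826 * median(win);
        if (std::fabs(tau[i] - med) > 3.0 * scaled_mad) outlier[i] = true;
    }
    std::vector<double> filtered(tau);
    for (int i = 0; i < n; ++i) {
        if (!outlier[i]) continue;
        int l = i - 1, r = i + 1;                      // nearest kept samples
        while (l >= 0 && outlier[l]) --l;
        while (r < n && outlier[r]) ++r;
        if (l >= 0 && r < n)
            filtered[i] = tau[l] + (tau[r] - tau[l]) * double(i - l) / double(r - l);
        else if (l >= 0)
            filtered[i] = tau[l];                      // hold value at the edges
        else if (r < n)
            filtered[i] = tau[r];
    }
    return filtered;
}
```

Calling filterTorque(tau, 9) on a buffer of 20 samples mirrors the Ws = 9, m = 20 configuration evaluated below; note that Ws must satisfy m > 2Ws for the window to remain meaningful.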

For selecting a suitable Ws value, a number of tests were performed while capturing the sensed torque values $\tau_{ci}$ of different joint motors. In Figure 6.1, three different values of Ws (Ws = 9, Ws = 5, and Ws = 15) have been used for filtering the torque values $\tau_{ci}$ for the pitch motion of the head, actuated by the motor with ID 29 (see Figure 5.4), during an experimental recording time of 20 seconds (2500 sensory data-capture iterations). The y-axis in Figure 6.1 represents the sensed torque value, while the x-axis represents the first 250 of the entire 2500 iterations ($\delta t \times 2500 = 0.008 \times 2500 = 20$ seconds) during which the corresponding sensed data was captured. The spike seen at time t = 0 seconds is the initialization of the motor, where the motor goes from applying zero torque to applying a torque suitable to maintain the corresponding humanoid joint in place. Subsequently, at iteration 48 a discontinuity in the sensed torque is observed. Since the motor was commanded to remain in its current state, the sudden change in the sensed torque at iteration 48 is clearly an outlier. During this experiment, while the robot was in an idle condition (i.e., standing in place and not moving), the joint closest to joint ID 29 (head-pitch motion joint), namely joint ID 28 (head-yaw motion joint), was hit, and the impact of that collision propagated to the demonstrated joint (ID 29). The captured data for joint ID 29 is shown in Figure 6.1, where the time of the collision is highlighted with a yellow line.

At first glance, the three graphs appear similar; however, after careful investigation it was determined that the proposed algorithm failed to filter the torque propagation before iteration 50 when using Ws = 5. Even though Ws = 5 satisfies the condition m > 2Ws, it fails to provide an accurate filtering mechanism, as seen by the dashed filtered torque values, which treat the torque discontinuity as an actual change of torque by the motor when it was not (i.e., the motor was never commanded to rotate its output shaft). On the other hand, Ws = 15 appears to perform acceptable filtering in terms of modifying the outliers and the propagations from other joints' collisions, although it does not meet the condition m > 2Ws. However, it can be noticed that this filtering flattens the recorded data more than necessary, especially around iteration 127 (illustrated with an orange dashed line), which alters the nature of the sensed data. As explained in Section 4.2.1, the filtering mechanism is required to not alter the falling and rising pattern of the captured torque data, and Ws = 15 does the contrary. As a result, when Ws = 15, Equation 4.9, which utilizes the result of Equation 4.6 (the slopes between $\tau_{ci}^m$ values), is no longer practical.

Figure 6.1: Using the collision detection mechanism on joint ID 29 with different values of Ws.

Thus, it can be concluded that the choice of Ws = 9, amongst the other choices for Ws, shows better results for capturing motor sensor data with m = 20. Therefore, Ws = 9 was used when analyzing the effectiveness of the proposed algorithm, since it demonstrated better filtering results. That is, the method was able to filter the torque propagation from the joint neighboring joint ID 29, so that joint ID 29 is not detected as the location of the collision, which increases the accuracy of the proposed mechanism.

6.3 Storing Period Validation

In the previous section, it was mentioned that a value of m = 20 was used. In this section the reason behind selecting m = 20 is presented. The results for each of the four stored values of $s_p$ (i.e., $P_{di}^m$, $P_{ci}^m$, $V_{ci}^m$, and $\tau_{ci}^m$) are used for the next $\Delta t = 0.16$ seconds (i.e., $\Delta t = 20 \times \delta t$) to detect collisions.

In order to determine an effective value of m, diverse experimental tests were performed using the collision detection algorithm. In what follows, and for illustration purposes only (as not all experimental tests can be presented), three different values of m (i.e., m = 10, m = 20, and m = 50) for the collision detection applied to one of the robot's joints are presented. To illustrate the results, a scenario where the robot could hit objects within its reach is presented. In this test the robot's head-yaw joint (ID 28) was hit one time with a mallet while the robot's head (neck) moved. The test was conducted over a 20-second period (2500 iterations), as presented in Figure 6.2. Figure 6.2 provides two plots: Figure 6.2a represents the unfiltered and filtered torque values sensed from the motor, while Figure 6.2b provides the results of the collision detection algorithm, where each spike in the signal represents a collision being detected. Thus, the collision detection output is a logical TRUE (1) or FALSE (0) value, where TRUE represents a collision being detected and FALSE represents the time when no collision was detected. In Figure 6.2a the empty circles (raw sensor data) represent values that were identified as outliers and thus removed (filtered data). The "X" data points without a circle enclosing them represent the data points that were added to the filtered values (keeping the size of the data set constant). Finally, the circles with an "X" within them represent the data that was not removed from the collected sensing data during the filtering process.

Figure 6.2: Collision Detection on joint ID 28 and Outlier Filtering using m = 10.

In Figure 6.2, where m = 10 was used, it can be observed that the filtering mechanism performs acceptably, but the final detection matrix result (Equation 4.11a) shows that the proposed algorithm detects 2 non-consecutive collisions (Figure 6.2b) as opposed to one. As a result, the number of detected collisions does not agree with reality in this case. The reason for this is that the time gap between each iteration, $\delta t$, is too small (i.e., $\delta t = 0.008$ seconds). Therefore, m = 10 does not provide enough information for the accurate detection of the collision on the impacted joint while the robot is moving it. Furthermore, when a collision takes place, the effects of such a collision, which might be experienced by nearby components (e.g., linkages and joints) as vibrations, deformations, etc., are transmitted through the robot's body. Figure 6.3 shows the real-time results of Equations 4.11a and 4.11b for the same collision presented in Figure 6.2, where the collision effects on three other robot joints are included (i.e., joint IDs 8, 11, and 29). Figure 6.3 also shows the collision detection results using three different values of m (i.e., m = 10, 20, and 50). The four joints (servomotors) included in Figure 6.3 are the head-yaw (ID 28), the head-pitch (ID 29), the left-arm-elbow (ID 8), and the right-arm-wrist (ID 11), as shown in Figure 6.4. While one of these joints (ID 29) is in close proximity to the impacted joint (ID 28), the other two joints, IDs 8 and 11, are away from the impacted joint by three and four joints, respectively.

Although the overall final detection matrix for all three values of the parameter m results in 2 collisions being detected at each joint, some of the collisions are identified far apart from each other (e.g., 0.7 seconds).

Figure 6.3: Results of the Final Detection Matrix for various values of m for joint ID 28.

Figure 6.4: Affected joints for the experiment with different values of m.

From Figure 6.3 it can be observed that using m = 20 provides the best collision detection results, due to the fact that the joints affected during the collision were identified within consecutive iterations (e.g., 3 × 0.008 = 0.024 seconds) while the non-affected joints were successfully filtered out, which shows the accuracy of the identified location (in comparison to m = 10). Moreover, m = 20 demonstrates a fast detection of the collided joints in comparison to bigger values of m (as illustrated in Figures 6.5 and 6.6). Figure 6.5 shows the results of the previously described experiment using m = 20 for the signal analysis process. In addition to showing satisfactory results in the current torque filtering process, the final detection matrix (Equation 4.11a) also managed to detect the location of the collision for 4 consecutive $\delta t$ iterations (0.032 seconds) right when the changes in the values of $\tau_{ci}^m$ were observed (Figure 6.5b). According to the real-time results from Figure 6.3, the collision detection mechanism also accurately detected the location and the time of the collision in the targeted joint ID 28 and its neighboring joint ID 29. Hence m = 20 shows good results in terms of accuracy and fast collision detection (the collision was detected 0.008 seconds after it took place).
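Although Equations 4.9 to 4.11 define the actual tests, the following hedged sketch illustrates the general shape of a per-joint detection signal like the ones plotted in Figures 6.2b, 6.5b, and 6.6b: a boolean flag per control cycle, raised when the filtered torque jumps between consecutive cycles or when the current position drifts away from the commanded one. Both thresholds are illustrative placeholders, not values from the thesis.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hedged sketch of a per-joint, per-cycle detection signal. A cycle is
// flagged when the filtered torque jumps between consecutive cycles or
// when the current position deviates from the desired one; the final
// detection matrix of the thesis (Equation 4.11a) combines such tests.
std::vector<bool> detectionFlags(const std::vector<double>& tau_filtered,
                                 const std::vector<double>& p_desired,
                                 const std::vector<double>& p_current,
                                 double tau_jump_threshold,
                                 double pos_error_threshold) {
    const std::size_t n = tau_filtered.size();   // n = m stored cycles
    std::vector<bool> flags(n, false);           // TRUE = collision detected
    for (std::size_t k = 1; k < n; ++k) {
        const bool torque_jump =
            std::fabs(tau_filtered[k] - tau_filtered[k - 1]) > tau_jump_threshold;
        const bool position_error =
            std::fabs(p_desired[k] - p_current[k]) > pos_error_threshold;
        flags[k] = torque_jump || position_error;
    }
    return flags;
}
```

A joint would then be reported as the collision location when its flags stay TRUE over consecutive cycles, as with the 4 consecutive $\delta t$ iterations (0.032 seconds) observed above for m = 20.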

Figure 6.5: Collision Detection on joint ID 28 and Outlier Filtering using m = 20.

Figure 6.6: Collision Detection on joint ID 28 and Outlier Filtering using m = 50.

Figure 6.6 illustrates the results of the collision detection (Figure 6.6b) and the torque filtering process (Figure 6.6a) for the same experiment utilizing m = 50. The filtering process looks similar to the results obtained when using m = 10 and m = 20, with the small difference that a decrease in sensitivity when identifying and removing outliers at iterations 120 to 150 was observed. This had the consequence that, even though the collision was identified, the detection was what we consider a late collision detection (detected 6 × 0.008 = 0.048 seconds after the collision took place). Unfortunately, the high number of data-capture iterations led to the late detection of the collision, where the first noticeable changes in the $\tau_{ci}^m$ readings were identified as normal (no collision) recordings. At later iterations, after the collision incident had been observed, the final detection mechanism (Equation 4.11a) used the results from changes in $P_{ci}^m$ (Equation 4.10b) to detect the collision with a considerable delay. That is, only with significant displacement in the joint position, once the impact of the applied torque had potentially harmed the joint, was the collision detection mechanism able to detect a collision. The real-time results shown in Figure 6.3 follow the same pattern as Figure 6.6, where the collision detection mechanism (Equation 4.11a) detected the neighboring joint ID 29 with a noticeable delay, in addition to a faulty detection of a non-collided joint (joint ID 11). Thus m = 50 is not an acceptable candidate based on the results shown in Figures 6.3 and 6.6.

Based on the results shown for the collision detection mechanism using the different values of m, the storing value of m = 20 provides fast and accurate collision detection in comparison with both bigger and smaller values of m. Thus, after numerous experimental tests (not shown in this thesis document), it was determined that a value of m = 20 alongside the window size Ws = 9 is very effective, and such parameter values were therefore utilized in the subsequent tests to demonstrate the validity of the proposed work in real-life collision detection scenarios using the humanoid robot Taiko.

6.4 Humanoid’s Real-time Collision Detection Results

In this section, the results of using the proposed methodology with the identified parameter values for detecting collisions applied to the life-size (1.375 meters tall) humanoid Taiko are presented. For the experimental tests, all collisions (impacts) on the humanoid robot were performed with the Stanley 51-104 rubber mallet (Figure 5.5). A series of experimental tests was executed on each of the three types of servomotors used in the robot (i.e., 20, 100, and 200 W) while the robot was in one of the following two states: idle or moving. The collisions applied to the humanoid by hand (swinging the mallet) targeted each joint with various intensities (light, medium, and hard), where each fuzzy force intensity corresponds to an estimated impact force as shown in Table 6.1. The collision forces were chosen so that the servomotors could detect changes in their $\tau_{ci}^m$ readings without damaging the equipment.

Table 6.1: Estimated impact force applied to the humanoid robot.

Force intensity    Range of values [N] and [kg]     Color of force vector
Light              [1, 5] N or [0.1, 0.5] kg        Green
Medium             [5, 10] N or [0.5, 1.0] kg       Yellow
Hard               [10, 15] N or [1.0, 1.5] kg      Red

6.4.1 Collisions on Low-Power Joints.

To test the proposed collision detection approach on the low-power joints (i.e., those actuated by 20 W servomotors), such as the neck and hands of the robot (see Figure 5.4), light intensity forces (see Table 6.1) were applied. Table 6.2 shows illustrative examples of the tests conducted on the right wrist (ID 13) and neck (ID 29) joints of the robot, where only one impact (collision) was applied to the robot. The place of impact is shown in Table 6.2 via a green force vector arrow.

Table 6.2: Collision detection on the humanoid’s 20 W servomotor joints.

The upper half of the table shows the results of the collision detection mechanism during a collision experiment on joint ID 13 while the robot was idle (i.e., standing still) throughout the experiment. The targeted joint was hit with a light intensity force because the torque threshold of the 20 W servomotors is low in comparison with the other groups of servomotors (which can resist stronger disturbances). As shown in the last column of Table 6.2, the collision detection mechanism was able to accurately detect the collision location (joint ID 13) and the effects on the joints neighboring the collision location (joints 9, 5, and 11) relatively quickly (at 0.356, 0.424, 0.34, and 0.432 seconds, respectively, after the first changes in the joints' condition were initiated). The time when the collision was identified at each of the affected joints is shown in the last column of Table 6.2. Joints 30, 4, and 29 (joints not in the vicinity of the impacted joint), however, were also identified by the collision detection mechanism as affected joints outside the time span of the collision (it took longer for the collision to affect such joints).

The lower half of Table 6.2 shows the results of the same experiment performed on a different joint of the same group of 20 W servomotors while the robot was moving its head from side to side. The targeted joint during this experiment (joint ID 29) is responsible for performing this movement. Joint ID 29 and its neighboring joint, ID 28, were identified as the location of the collision for repeated consecutive iterations (4th column, rows 9 and 10). As can be seen from Table 6.2, during the subsequent iterations the propagation of the torque impact affected the farther neighboring joints, which the collision detection mechanism also identified as having received an impact. For example, joint ID 27, which is the torso servomotor of the robot, was detected almost immediately after the initial detection, and the joints close to the end effectors were detected about 0.056 seconds later. Even though such high sensitivity is not directly aimed for in the proposed methodology, it can later facilitate better impact detection across the overall robot body.

6.4.2 Collisions on 100 W Joints.

Similar to the collision detection process described for the 20 W motors, diverse tests were performed on the 100 W motors of the humanoid. Table 6.3 shows two of the real-time results of the collision detection mechanism when impacts were applied to the 100 W servomotor joints. As before, in these tests only one impact was exerted on the humanoid while the robot was either idle or moving in diverse ways.

Table 6.3: Real-time collision detection of Taiko’s 100 W servomotor joints.

The upper part of Table 6.3 illustrates the results of the proposed mechanism on joint ID 4 (i.e., one of the left-side shoulder motors) while the robot was in an idle state. The targeted joint for this illustrative experiment was hit with medium intensity, as this group of servomotors (100 W) has a higher torque threshold than the 20 W motor group. The table shows the time taken to detect the collision (after the collision occurred) as well as the collision's location, where the impacted joint, ID 4, and the neighboring arm joints (IDs 6 and 12) all detected a collision within a short time span of one another. However, joint ID 12 detected three collisions (see the last column, fourth row, of Table 6.3), which is understandable given the higher sensitivity of the 100 W servomotors. Even though joints ID 24, 9, and 19 are not close to the location of the collision, they also detected a collision in the later iterations (cycles) of the proposed algorithm as a result of the propagated torque, vibrations, and movement caused by the collision throughout the robot's body. As can be seen from the first row and the two last columns of Table 6.3, joint ID 29 was falsely identified again, which shows the very high sensitivity and high number of outliers for this servomotor.

For the second illustrative experiment with the 100 W servomotor joints presented in Table 6.3 (2nd row), the robot was set in motion by moving both arms (mimicking a human waving motion when saying hello or goodbye). In this test joint ID 8 was impacted with the Stanley mallet with a medium intensity force. The proposed mechanism was able to detect the collision on joint ID 8 right after the collision, as well as on the neighboring joints (i.e., IDs 30, 2, 6, and 4) in subsequent cycles of the proposed algorithm. Similar to the previous tests, when a medium or large collision force is applied to the humanoid, it is observed that the servomotors on the opposite (right) arm of the humanoid also detect the collision. This detection takes place approximately 0.12 seconds after the left arm joints detect the collision. As before, it is believed that this is a result of the motion of the robot caused by the collision, which propagates throughout the robot's body.

6.4.3 Collisions on 200 W Joints.

In contrast to the light and medium collision forces applied to the 20 W and 100 W motor joints respectively, hard collisions were applied to the 200 W servomotors comprising the humanoid's body. As before, for these tests only one collision was applied to the robot. Table 6.4 shows example results of the collision detection mechanism for the 200 W group of servomotor joints.

Table 6.4: Real-time collision detection of Taiko’s 200 W servomotor joints.

The upper row of the table shows the collision detection results for a hard collision applied to joint ID 25 (the ankle joint) while the robot was in an idle state. The joint was targeted with high intensity since the 200 W group of servomotors is less sensitive to torque, position, and velocity changes than the other two servomotor groups. The results show that even though the impacted joint and its neighboring joints (i.e., IDs 23, 21, and 19) detected the collision, the other joints that detected the collision are not close to the impacted location, and some of them (e.g., IDs 11, 13, and 23) erroneously detected a collision before the actual collision took place. This is due to the lower sensitivity of the 200 W servomotors in terms of sensing changes in torque and position; hence, the more sensitive group of 20 W servomotors erroneously detected a collision prior to the impacted 200 W motors. Joints 4, 6, 18, and 16 also falsely detected collisions prior to the impact, which requires further investigation to correct.

The lower half of Table 6.4 illustrates the results of the collision detection mechanism when the humanoid was performing a kicking maneuver with its right leg while slightly shifting its left leg to keep its balance (Center of Mass). For this illustrative test, the location where the hard collision took place was joint ID 21. The obtained results again demonstrate higher sensitivity in the less powerful servomotor joints (joint IDs 16 and 27, which are 100 W servomotors, and joint IDs 10 and 29, which are 20 W motors). For such joints the collision detection algorithm detected a collision prior to the timing of the actual collision by somewhat more than 0.008 seconds (e.g., ID 27), a clear failure of the proposed algorithm. At this time it is unknown why this occurred; possibly the robot was inadvertently touched prior to being hit with the mallet. Joints 17, 21, and 15, however, did detect the actual collision in sequence, starting with joint ID 17 and finishing with joint ID 15, within close time spans of one another. Overall, the 200 W group of servomotors demonstrated accurate results, as the collisions were detected, but some aspects still need to be investigated.

6.4.4 Complex Robot Motions and Multiple Collision Tests.

The proposed methodology was also tested by applying multiple successive collisions during both the idle and moving states. The purpose of this type of test was to evaluate the performance of the collision detection algorithm under more aggressive cases. Table 6.5 shows the results of a test in which the robot was hit three times with varied intensities (i.e., hard, medium, and light). The impacted joints during this experiment were joint IDs 13, 16, and 25, which are 20 W, 100 W, and 200 W motors, respectively. These joints were hit with the mallet in the sequence: light hit (ID 13), medium hit (ID 16), and hard hit (ID 25). The time interval between each of the three collisions was approximately 2 seconds.

Table 6.5: Real-time collision detection of Taiko’s servomotor joints in idle state.

The first collision was accurately detected by joint ID 13 (right wrist) and also detected by joint ID 5 (right elbow rotation) after 0.02 and 0.28 seconds, respectively (i.e., in the following detection cycles). Joint ID 29 (neck) also detected the 1st collision at a later time (0.108 seconds after the collision occurred). This late detection is shown via the yellow filled cell in Table 6.5. For the 2nd collision, on joint ID 16 (left leg rotation), the results show that the collision was accurately detected by joint ID 16 and by joint ID 18 (0.196 seconds after the collision). However, the results also show that the left hand's servomotors (i.e., joint IDs 14 and 12) detected the 2nd collision approximately 0.05 seconds before it was detected by the impacted joint (this premature detection is shown via the blue filled cells in Table 6.5). For the 3rd and last collision, which took place 2 seconds after the 2nd collision, the joints neighboring the impacted joint (right ankle), i.e., IDs 21 (right knee) and 23 (2nd ankle joint), detected a collision prior to the actual collision. It is thought that such false detections were due to the effects of the 1st and 2nd collisions still propagating through the robot's body. Thus, the joints highlighted in yellow are false detections or possible results of the previous torque propagation.

In contrast with the previous test, where each of the three consecutive collisions was applied to a different group of joints, a test was conducted in which the consecutive collisions were applied to the same (or a close) group of motors. The experiment consisted of hitting the humanoid Taiko with the mallet on joint ID 28 (neck), followed by a light hit on joint ID 13 (right wrist), and subsequently a medium collision on joint ID 8 (left elbow). The time between the 1st and 2nd collisions was 3.5 seconds, while the time between the 2nd and 3rd collisions was less than 2 seconds. During this test the robot executed simultaneous clapping and head-nodding movements (moving both its arms and head). Table 6.6 shows the results of this test.

Table 6.6: Real-time collision detection of Taiko’s servomotor joints in moving state.

The impacted joint for the first collision detected the collision two times, at 0.032 and 0.144 seconds after the collision occurred. The joint neighboring joint ID 28 also detected the 1st collision 0.168 seconds after the collision occurred. For the second collision, the impacted joint did not detect the collision; however, its neighboring joints (joint IDs 31, 5, and 9) did. For the third and last collision, on joint ID 8, the proposed mechanism detected the collision 0.124 seconds after it took place. The neighboring joints with IDs 6 and 2 also detected the collision; however, their detection was late by approximately 0.02 seconds. Unlike the previous experimental results, this last experiment produced a high number of false detections, which are highlighted in yellow in Table 6.6.

6.5 Discussion

The results presented in this chapter provide a number of examples illustrating the capabilities of the proposed methodology. The developed filtering technique for the current torque readings of the servomotors was tested alongside the reliability of the detection method. The presented experimental results show the ability of the proposed solution to successfully detect collisions on the humanoid robot's body. They also demonstrate the ability of the collision detection mechanism to find the location of the collisions applied to the robot's body via the time taken by each joint to detect the collision. The validity of the work was tested through several experimental tests on a real life-size humanoid robot. With the help of the proposed work, humanoid robots and humans will be able to engage in close proximity HRI maneuvers in a safe manner. Moreover, humanoids will benefit from safe operation, being able to prevent severe damage and injuries to their surroundings and themselves.

Chapter 7

Conclusions

The tasks for which humanoid robots can be used are growing each year, from applications such as helping in medical centers to assisting astronauts in space stations with repairs and burdensome mechanical work. Although significant research has been conducted to enable humanoids to execute these tasks and to operate amongst humans, they are not yet ready to share the same spaces with humans and to collaborate with them. The field of HRI focuses on the challenges of making robots ready for close proximity with human activities. One of the most important aspects of HRI is safe HRI for humanoids, which assures the safety of these robots and the objects and people around them. Two areas of safe HRI are collision avoidance and collision detection for humanoids, which are highly reliant on visual perception via sensors and extensive equipment. However, visual perception and the equipment that enables it have shortcomings, which is why in recent years scientists and researchers have been paying more attention to complementary methods. Hence, tactile, touch, and force sensors that mimic the tactile sensibility of humans have been developed in recent years and have made great contributions to the field of safe HRI and collision detection. However, these sensors are highly expensive, fragile, and complex to install and maintain. Therefore, they are not the best candidates to detect collisions and prevent damage to robotic equipment and its surroundings, especially in cases of a failure in visual perception. Such constraints and complexities limit the applications of humanoid robots and prevent us from fully benefiting from their potential.

Thus, this thesis presents a solution that enables humanoids to operate in close proximity with humans and engage in safe HRI. A couple of requirements need to be met to solve the above challenges. The solution needs to be applicable to all kinds of humanoids (i.e., various control mechanisms, sizes, etc.), detect collisions at a pace comparable to tactile sensors, and ensure high accuracy in terms of identifying the location and time of the collisions. In the proposed solution, these requirements were met by avoiding the use of tactile sensors and developing a collision detection algorithm that analyzes the feedback and signals received from the robot's joint motors. The proposed solution takes 0.016 seconds to analyze such signals and, based on the results demonstrated in Chapter 6, is able to identify the joints impacted by collisions as fast as 0.008 seconds after the collision. Moreover, the location of the collision on the humanoid body, in addition to the other impacted areas, could be found within less than 0.5 seconds. Using the proposed approach, humanoid robots can detect and locate collisions on their body, which enables them to work in close proximity with humans and to engage in HRI activities in a safe manner even in the event of failures in visual perception collision avoidance and detection mechanisms. These contributions will be shared with the scientific community through the publication and presentation of a number of papers at the upcoming conferences:

• IEEE International Conference on Multisensor Fusion and Integration, Germany, September 2020

• IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), USA, October 2020


7.1 Future Work

Although the results shown in Chapter 6 demonstrate that the proposed collision detection framework can successfully identify and locate collision impacts on humanoid bodies with high accuracy and a fast detection time, future work to improve the proposed approach remains. This future work will mainly be concerned with:

• The development of a more accurate and precise collision detection mechanism using Artificial Neural Networks (ANN) and other machine learning techniques. Currently this task is not practical since the training data available for this purpose is limited.

• The development of a mechanism that can locate impacted joints that are not in the vicinity of an applied collision (in addition to the neighboring joints identified in this thesis' work) through the employment of robot kinematics and ANN methods. Similar to the previously planned work, this task requires abundant training data, which is currently limited.

Even though the proposed work achieved real-time performance, future work will be required to decrease the computation time and effort needed to implement the proposed method on humanoid robots. It should be noted that the proposed method was implemented in both MATLAB and ROS (C++) across 4 different computers, which significantly increased the computation time due to the wireless and wired connections required for data transfer between the utilized computers. Based on this, one can expect a reduction in computation time when using a lower level programming language such as C/C++ and solely employing ROS for the implementation. Therefore, reducing the number of utilized computers and using the Linux operating system with ROS can potentially yield faster results.
