AUTONOMOUS INTELLIGENT ROBOTIC MANIPULATOR FOR ON-ORBIT SERVICING

BENOIT P. LAROUCHE

A DISSERTATION SUBMITTED TO THE FACULTY OF GRADUATE STUDIES IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

GRADUATE PROGRAM IN EARTH AND SPACE SCIENCE
YORK UNIVERSITY
TORONTO, ONTARIO

AUGUST 2012

AUTONOMOUS INTELLIGENT ROBOTIC MANIPULATOR FOR ON-ORBIT SERVICING

by Benoit P. Larouche

By virtue of submitting this document electronically, the author certifies that this is a true electronic equivalent of the copy of the dissertation approved by York University for the award of the degree. No alteration of the content has occurred and if there are any minor variations in formatting, they are as a result of the conversion to Adobe Acrobat format (or similar software application).

Examination Committee Members:

1. Prof. Gordon Shepherd

2. Prof. Jinjun Shan

3. Prof. Anne Moore

4. Prof. Alfred Ng

5. Prof. GuangJun Liu

Abstract

The doctoral research is to develop an autonomous intelligent robotic manipulator technology for on-orbit servicing (OOS). More specifically, the research is focused on one of the most critical tasks in OOS: the capture of a non-cooperative object whilst minimizing impact forces and accelerations. The objective of the research is the development of a vision-based control theory, and the implementation and testing of the developed theory by designing and constructing a custom non-redundant holonomic robotic manipulator. The research validated the newly developed control theory and its ability to (i) capture a moving target autonomously and (ii) minimize unfavourable contact dynamics during the most critical parts of the capture operations between the capture satellite and a non-cooperative/tumbling object. A custom robotic manipulator functional prototype has been designed, assembled, constructed, and programmed from concept to completion in order to provide full customizability and controllability in both the hardware and the software. Based on the test platform, a thorough experimental investigation has been conducted to validate the newly developed control methodologies that govern the behaviour of the robotic manipulator (RM) in an autonomous capture. The capture itself is effected on non-cooperative targets in a zero-gravity simulated environment. The RM employs a vision system, force sensors, and encoders in order to sense its environment. The control is effected through position and pseudo-torque inputs to three stepper motors and three servo motors. The controller is a modified hybrid force/neural network impedance controller based on N. Hogan's original work. The experimental results demonstrate that the set objectives of this thesis have been successfully achieved.

Dedication

I would like to dedicate this thesis to my family, my friends, and my colleagues.

"Face new challenges, seize new opportunities, test your resources against the unknown and in the process, discover your own unique potential."

- John Amatt

Acknowledgements

This dissertation would not have been possible without the guidance and help of several individuals who in one way or another contributed and extended their valuable assistance in the preparation, conduct, and completion of this study. First and foremost, I would like to thank my wife, Andrea. Without your support, your encouragement, and your understanding, none of this would have been possible. It has been a long and winding road which I'm happy to say we've travelled together. To my family, thanks for keeping me grounded, focused, and sane. Although you may never read this thesis, know that you inspired it and are partly responsible for it. I would like to extend my utmost gratitude to Dr. George Z.H. Zhu, my supervisor and mentor, whose sincerity, encouragement, late nights, and tireless efforts helped shape my focus and inspired me as I hurdled the obstacles and barriers in the completion of this work. To my co-conspirator and office mate, Danut Tabacaru. Thank you for fuelling my work with premium coffee, fielding my frustrations with humour, and keeping my sanity mostly intact with your company. It would have been a much different experience had you not been around. To my software guru Christopher Prevoe. Without his voice of reason and experience, there would have been no version control, not nearly enough comments, and a lot less interesting discussions occurring over lunch. To my friends and colleagues, you are the reason the past four years have been a series of challenges and successes rather than trials and frustrations. Thanks for the games, the late nights, and the constant access to the helpline.

Contents

Abstract iv

Acknowledgements vi

Table of Contents vii

List of Tables xii

List of Figures xiii

Preface xviii

Abbreviations xix
0.1 Notation ...... xix
0.2 Convention ...... xix
0.3 List of Symbols ...... xx

1 Introduction and Justification 1
1.1 Introduction ...... 1
1.2 Robotics in Space ...... 1
1.3 Justification of the Study ...... 5
1.3.1 Challenges of On-Orbit Servicing Robotic Manipulator ...... 5
1.3.2 Limitations of Existing Treatments ...... 7
1.4 Research Objectives ...... 7
1.5 Research Methodology ...... 8
1.6 Layout of Thesis ...... 10

2 Literature Review 12
2.1 Robotic Design ...... 12
2.2 Robotic Control Theory ...... 13
2.2.1 Vision-based Control ...... 14
2.2.2 Impedance Control ...... 16
2.2.3 Speed and Acceleration Control ...... 18
2.2.4 Input Shaping Control for Flexible Joint ...... 18
2.3 Autonomy ...... 19
2.3.1 Learning Neural Network ...... 20
2.3.2 Genetic Algorithm ...... 21
2.4 Sensors ...... 22
2.4.1 Force Sensing ...... 22
2.4.2 ...... 22

3 Theoretical Development of Control Law 24
3.1 Control Problem Definition ...... 24
3.2 Robotic Manipulator Model ...... 26
3.2.1 Kinematic Model of Robotic Manipulator ...... 26
3.2.2 Dynamic Model of Robotic Manipulator ...... 29
3.2.3 Denavit-Hartenberg Transform ...... 31
3.3 Camera Model for Vision-Based 3D Control ...... 33
3.4 Vision-Based Control Strategy ...... 36
3.4.1 Position-Based Visual Servoing ...... 36
3.4.2 Vision-Based Impedance Control ...... 39
3.4.3 Visual Servoing Enhanced by Kalman Filter ...... 41
3.5 Speed and Acceleration Control of Stepper Motors ...... 45
3.6 Flexible Joint Control using Input Mode Shaping ...... 47
3.7 Pseudo-Force Control ...... 49
3.8 Digital Dynamic Damping ...... 50
3.9 Grasping Algorithm ...... 50

4 Computer Validation and Controller Tuning 52
4.1 MatLab Simulink Environment ...... 52
4.2 Robotic Manipulator Model ...... 53
4.2.1 2-Dimensional Robotic Manipulator Simulator ...... 53
4.3 Impedance Control Simulation ...... 56
4.3.1 Test Scenarios ...... 57
4.4 Kalman Filter Simulation ...... 65
4.4.1 Tuning Parameters ...... 68
4.4.2 Initialization ...... 69
4.4.3 Results ...... 70

5 Photogrammetry 72

6 Software 76
6.1 Introduction ...... 76
6.2 Robotic Manipulator On-Board Software ...... 77
6.3 Main Program, Photogrammetry, Filtering, and Tracker ...... 80
6.4 Inverse Kinematics ...... 82
6.5 Communication Protocol ...... 84
6.6 Kalman Filter and Algorithm ...... 85
6.7 Damping, Grasper, and Controller ...... 86
6.7.1 Dynamic Digital Damping ...... 86
6.7.2 Grasping ...... 88
6.7.3 Controller ...... 88

7 Experimental Set-up 89
7.1 Target Structure ...... 89
7.2 Evaluative Components ...... 90
7.2.1 Strain Gage ...... 90
7.2.2 Inertial Measurement Unit ...... 90
7.3 Dynamic Path Planning of Target ...... 91
7.4 Target Paths ...... 93
7.4.1 Static ...... 93
7.4.2 Target Paths ...... 97
7.4.3 Challenges at Low Velocity ...... 97
7.4.4 Dynamic High Speed ...... 97
7.4.5 Challenges at High Velocity ...... 99
7.4.6 Comparison of Captures with and without Kalman Filtering ...... 99

8 Experimental Validations 102
8.1 Note on Capture Graphs ...... 102
8.2 Static Capture ...... 102
8.2.1 Initial Static Capture ...... 102
8.2.2 Static Capture - Filtered ...... 104
8.3 Low Velocity Captures ...... 117
8.3.1 Initial Capture Attempts at Low Velocity without Kalman Filtering ...... 117
8.3.2 Capture at Low Velocity with Kalman Filtering ...... 123
8.3.3 Difficulties in Low Velocity Captures ...... 125
8.4 High Velocity Capture ...... 128
8.4.1 Initial Dynamic High Velocity Capture ...... 128
8.5 Controlled Dynamic High Velocity Capture ...... 129
8.5.1 Difficulties in High Velocity Captures ...... 130
8.5.2 Summary of High Velocity Capture Analysis ...... 130

9 Comparison of Post-Capture Controllers 141
9.1 Vision-Based Control ...... 141
9.1.1 Vision Control Law ...... 141
9.1.2 Comparison of Results ...... 142
9.2 Pseudo-Force Control ...... 147
9.2.1 Pseudo-Force Control Law ...... 147
9.2.2 Comparison of Results ...... 147
9.3 Impedance Control ...... 152
9.3.1 Impedance Control Law ...... 152
9.3.2 Comparison of Results ...... 153

10 Conclusions 158
10.1 Validation ...... 158
10.2 Findings ...... 158
10.3 Caveats ...... 159
10.4 Summary of Contributions ...... 159
10.4.1 Neural Network Phase Transition Algorithm ...... 159
10.4.2 Vision-Based Compliant Capture Control ...... 159
10.4.3 Vision-Based Force Estimation ...... 160
10.4.4 Smooth Stepper Motor Control ...... 160
10.4.5 Digital Dynamic Damping ...... 160
10.4.6 Vision System and Scalable Target Design ...... 160
10.4.7 Kalman Intercept Prediction ...... 160
10.5 Future Work ...... 161

A Appendix - Mechanical Design 163
A.1 Introduction ...... 163
A.2 Actuator ...... 163
A.3 Kinematics ...... 164
A.4 Robotic Manipulator ...... 165
A.4.1 Torso Column ...... 166
A.4.2 Joint Design ...... 167
A.4.3 Wrist ...... 167
A.4.4 Base ...... 169
A.4.5 Construction & Assembly ...... 169

B Appendix - Electrical Design 174
B.1 Introduction ...... 174
B.2 Stepper Motor System ...... 174
B.3 Servo Motor System ...... 179

C Appendix - Hardware 182
C.1 Electronic Hardware ...... 182
C.2 Power Hardware ...... 185
C.3 Physical Hardware ...... 186
C.4 Target Hardware ...... 188

D Appendix - System Dynamics Matrices 191

E Appendix - Dynamics Equations 194
E.1 Dynamics of an Open-Chain Manipulator ...... 194
E.1.1 General Lagrangian Dynamic Equation Development ...... 195
E.2 Kinematics of Robotic Manipulator ...... 197
E.3 Dynamics of Robotic Manipulator ...... 202
E.3.1 Lagrangian Dynamics ...... 202
E.3.2 Iterative Newton-Euler Closed Loop Dynamic Formulation ...... 204

F Appendix - Detailed Look at Kalman Filter Cycle 207

Bibliography 210

List of Tables

1.1 Summary of Autonomous Space Robotic Missions ...... 3

3.1 Robotic Manipulator Identification Card ...... 27
3.2 Robotic Actuation Card ...... 29
3.3 Kalman Filter State Space ...... 42
3.4 Kalman Filter Cycle Summary Table ...... 43

4.1 Kalman Filter Standard Deviation and Variance ...... 70

5.1 Test results of photogrammetry algorithm ...... 75

6.1 Communication Table ...... 84
6.2 Kalman State Space and Initial Values ...... 86
6.3 Kalman State Transition Probability and Initial Values ...... 87
6.4 Kalman System Model Error and Measurement Error Matrices ...... 87

10.1 Comparison of Capture Operations ...... 159

A.1 Stepper Motor Specifications ...... 164
A.2 Identification Card for Robotic Manipulator ...... 165

B.1 Power Requirement Table ...... 175

E.1 Robotic Manipulator Identification Card ...... 198

List of Figures

1.1 On-Orbit Survey Model [1] ...... 4
1.2 Research Methodology Theory Focus ...... 9
1.3 Research Methodology Validation Focus ...... 11

2.1 Different Robotic Configurations ...... 13
2.2 Bio-mimetic Pieper Configuration [2] ...... 14
2.3 Neural Network Capture Structure ...... 21

3.1 Schematic of an Eye-in-Hand, Bio-Mimetic Pieper Type of Robotic Manipulator in a Work Space ...... 25
3.2 Schematic of Open Chained Manipulator Model in Joint Space ...... 27
3.3 Denavit-Hartenberg Frame Definitions ...... 28
3.4 Photogrammetric Model of Camera ...... 34
3.5 Target Coordinates in Meters ...... 35
3.6 Photogrammetric Target: (a) the Real Targets, (b) Seen Through the Vision Program ...... 36
3.7 Robotic Manipulator Control Flowchart ...... 37
3.8 Kalman Filter Cycle ...... 44
3.9 Speed and Acceleration Control Diagram ...... 46

4.1 2D Robotic Manipulator Dynamics Set Up ...... 53
4.2 Impedance Controller in Simulink Environment ...... 57
4.3 Simulation Setup ...... 58
4.4 X & Y Motion of End-Effector (Meters): Collision with Wall ...... 61
4.5 X & Y Motion of End-Effector (Meters): Collision with Wall at Angle ...... 62
4.6 X & Y Motion of End-Effector (Meters): Trace Wall Path ...... 63
4.7 X & Y Motion of End-Effector (Meters): Trace Wall Path with Increased Damping ...... 64
4.8 True Path of the Kalman Filter ...... 65
4.9 Noisy Path of the Kalman Filter ...... 66
4.10 Kalman Filter Output ...... 71

5.1 Photogrammetric Target: (a) Figure, (b) Target as Seen Through the Vision Program ...... 72

6.1 Flowchart: Main Software ...... 78
6.2 Flowchart: On-board Software ...... 79
6.3 Flowchart: On-board Software Detailed ...... 80
6.4 Inverse Kinematics: Initial Joint ...... 83
6.5 Inverse Kinematics: Multiple Solutions for Joint Redundancy ...... 83
6.6 Inverse Kinematics: Multiple Solutions for Joint Redundancy at Wrist ...... 84
6.7 Communication Buffer: Signal for Robotic Drivers ...... 85

7.1 Physical Target ...... 90
7.2 Target and Grasping Bar with Integrated Sensors ...... 91
7.3 Target Set Up ...... 92
7.4 Target Motor Driver ...... 93
7.5 Transformations for System ...... 94
7.6 Target Offset ...... 95
7.7 Top View of Joint Alignment for Yaw ...... 95
7.8 Top View of Compensated Joint Alignment for Yaw ...... 96
7.9 Yaw Offset Through Experimentation ...... 96
7.10 Oscillation Difficulty ...... 98
7.11 Oscillation Difficulty Fixed ...... 98
7.12 Example of High Velocity Lag ...... 99
7.13 Example of Capture without Kalman Filtering ...... 100
7.14 Example of Capture with Kalman Filtering ...... 100

8.1 Strain Gage Reading for Static Capture ...... 103
8.2 Vision Input for Static Capture ...... 104
8.3 Capture Attempt of a Static Target ...... 105
8.4 Butterworth Magnitude (Red) and Phase (Blue) vs. Frequency ...... 106
8.5 Butterworth Impulse Response ...... 106
8.6 Butterworth Step Response ...... 107
8.7 Joint Tracking for Static Capture ...... 108
8.8 Kalman Filter Input and Output ...... 108
8.9 Strain Gage Readings ...... 109
8.10 Vision Information ...... 109
8.11 Capture Attempt of a Static Target IMU Acceleration, Velocity, and Position Readings ...... 110
8.12 IMU Gyro and Angle Readings ...... 111
8.13 Joint Tracking for High Yaw Static Capture ...... 112

8.14 Kalman Filter Input and Output for High Yaw Static Capture ...... 113
8.15 Kalman Filter Input and Output Angles for High Yaw Static Capture ...... 113
8.16 Strain Gage Readings for High Yaw Static Capture ...... 114
8.17 Vision Information for High Yaw Static Capture ...... 114
8.18 Capture Attempt of a Static Target IMU Acceleration, Velocity, and Position Readings for High Yaw Static Capture ...... 115
8.19 IMU Gyro and Angle Readings for High Yaw Static Capture ...... 116
8.20 Joint Tracking for Low Velocity Capture ...... 118
8.21 Kalman Filter Input and Output for Low Velocity Capture ...... 118
8.22 Kalman Filter Input and Output Angles for Low Velocity Capture ...... 119
8.23 Strain Gage Readings for Low Velocity Capture ...... 119
8.24 Vision Information for Low Velocity Capture ...... 120
8.25 Capture Attempt of a Static Target IMU Acceleration, Velocity, and Position Readings for Low Velocity Capture ...... 121
8.26 IMU Gyro and Angle Readings for Low Velocity Capture ...... 122
8.27 Joint Tracking for Static Capture at Low Velocity ...... 124
8.28 Kalman Filter Input and Output at Low Velocity ...... 124
8.29 Kalman Filter Input and Output Angles at Low Velocity ...... 125
8.30 Strain Gage Readings at Low Velocity ...... 126
8.31 Vision Information at Low Velocity ...... 126
8.32 Capture Attempt of a Static Target IMU Acceleration, Velocity, and Position Readings at Low Velocity ...... 131
8.33 IMU Gyro and Angle Readings at Low Velocity ...... 132
8.34 Joint Tracking for Static Capture for Initial Dynamic High Velocity Capture ...... 132
8.35 Kalman Filter Input and Output for Initial Dynamic High Velocity Capture ...... 133
8.36 Kalman Filter Input and Output Angles for Initial Dynamic High Velocity Capture ...... 133
8.37 Strain Gage Readings for Initial Dynamic High Velocity Capture ...... 134
8.38 Vision Information for Initial Dynamic High Velocity Capture ...... 134
8.39 Capture Attempt of a Static Target IMU Acceleration, Velocity, and Position Readings for Initial Dynamic High Velocity Capture ...... 135
8.40 IMU Gyro and Angle Readings for Initial Dynamic High Velocity Capture ...... 136
8.41 Joint Tracking for Static Capture for Dynamic High Velocity Capture ...... 136
8.42 Kalman Filter Input and Output for Dynamic High Velocity Capture ...... 137
8.43 Kalman Filter Input and Output Angles for Dynamic High Velocity Capture ...... 137

8.44 Strain Gage Readings for Dynamic High Velocity Capture ...... 138
8.45 Vision Information for Dynamic High Velocity Capture ...... 138
8.46 Target IMU Acceleration, Velocity, and Position Readings for Dynamic High Velocity Capture ...... 139
8.47 IMU Gyro and Angle Readings for Dynamic High Velocity Capture ...... 140

9.1 High Velocity Vision Controlled Capture - Position Information ...... 142
9.2 High Velocity Vision Controlled Capture - Orientation Information ...... 143
9.3 High Velocity Vision Controlled Capture - Strain Information ...... 144
9.4 Capture of a High Velocity Target: IMU Acceleration, Velocity, and Position Readings ...... 145
9.5 Capture of a High Velocity Target: IMU Gyro and Angle Readings ...... 146
9.6 High Velocity Force Controlled Capture - Position Information ...... 148
9.7 High Velocity Force Controlled Capture - Orientation Information ...... 148
9.8 High Velocity Force Controlled Capture - Strain Information ...... 149
9.9 Capture of a High Velocity Target: IMU Acceleration, Velocity, and Position Readings ...... 150
9.10 Capture of a High Velocity Target: IMU Gyro and Angle Readings ...... 151
9.11 High Velocity Impedance Controlled Capture - Position Information ...... 153
9.12 High Velocity Impedance Controlled Capture - Orientation Information ...... 154
9.13 High Velocity Impedance Controlled Capture - Strain Information ...... 155
9.14 Capture of a High Velocity Target: IMU Acceleration, Velocity, and Position Readings ...... 156
9.15 Capture of a High Velocity Target: IMU Gyro and Angle Readings ...... 157

A.1 Various Actuators and Modifiers ...... 164
A.2 Robotic Manipulator ...... 165
A.3 Central Tube ...... 166
A.4 Shoulder and Elbow Joint ...... 168
A.5 Wrist Assembly ...... 168
A.6 Grasper ...... 169
A.7 Base Assembly ...... 170
A.8 Link 1 ...... 171
A.9 Link 2 ...... 172
A.10 Link 3 ...... 173

B.1 Schematic Layout of the Stepper Motor Driver ...... 176
B.2 Schematic Layout of the H-Bridge Driver ...... 177
B.3 Stepper Motor: How a Stepper Functions ...... 177

B.4 Stepper Motor: Half Stepping ...... 178
B.5 Stepper Motor: Multi-pole ...... 178
B.6 Stepper Motor: 6 Wire ...... 179
B.7 Schematic Layout for the Servo Motor Driver ...... 180
B.8 Servo Pulse: Example of a Signal to a Servo Motor ...... 180
B.9 Servo Pulse: Example of Signals ...... 181

C.1 Atmega Microchip and Target Driver System ...... 182
C.2 Little Step U and SSC-32 ...... 183
C.3 USB to RS232 Converter and USB Hub ...... 184
C.4 Programmer and Stepper Driver ...... 184
C.5 Webcam ...... 185
C.6 Low and High Voltage Power Supplies ...... 186
C.7 Computer Power Supply and Emergency Stop ...... 187
C.8 DB9 Connector and Flexible Shaft Coupling ...... 187
C.9 Electromagnetic Brakes and Rotary Damper ...... 188
C.10 Strain Gage and Data Acquisition Unit ...... 189
C.11 XBEE Wireless and IMU ...... 190

E.1 Open Chained Manipulator ...... 194
E.2 Denavit-Hartenberg Frame Definitions ...... 197
E.3 Robotic Manipulator ...... 198
E.4 Manipulator Mass Distribution ...... 202

Preface

Robotic autonomy in space will provide humanity's next stepping stone in science and exploration. Human exploration in space holds a unique place and will always be required in order to connect us. However, robotics can be used to enhance, expand, and energize the sector. Robotics thrives where humans struggle, in tasks such as repetitive, hazardous, high precision, and long duration solo missions. Space is the next logical focus point for mankind, providing a wealth of resources, energy, and expansion. Tapping into this potential is currently limited by cost and technology, which are somewhat intertwined. Autonomy can bridge the current gap in access to this resource, and it is through the pursuit and overcoming of small steps that this can be achieved. On-orbit servicing is a dual purpose bridge that can serve as the stepping stone to more versatile autonomy. On-orbit servicing is a viable economic model for a near term business, providing extendibility to current space assets and freeing up funding for expansion and exploration. It is the hope of this research to provide an examination of a specific portion of an autonomous operation in the shape of a free-flying capture of a non-cooperative satellite simulate.

Abbreviations

0.1 Notation

2D - Two-Dimensional
2.5D - Two-and-a-Half Dimensional
3D - Three-Dimensional
AIS - Adaptive Input Shaping
ATV - Autonomous Transfer Vehicle
DART - Demonstration of Autonomous Rendezvous Technology
DLR - Germany's National Research Center for Aeronautics and Space
ESA - European Space Agency
IMU - Inertial Measurement Unit
MDA - MacDonald Dettwiler and Associates Ltd.
M-i-t-l - Man-in-the-loop or Human-in-the-loop
NASA - National Aeronautics and Space Administration
OOS - On-Orbit Servicing
PID - Proportional Integral Derivative
PD - Proportional Derivative
RM - Robotic Manipulator
SRMS - Shuttle Remote Manipulator System
USB - Universal Serial Bus

0.2 Convention

The convention for tensor and vector subscripts and superscripts is employed as follows: A left superscript denotes the time of the configuration in which the quantity occurs. A left subscript can have two different meanings. If the quantity considered is a derivative, the left subscript denotes the time of the configuration in which the coordinate is measured with respect to which it is differentiated. Otherwise, the left subscript denotes the time of the configuration in which the quantity is measured. Right lower case subscripts denote the components of a tensor or vector. Differentiation is denoted by a right lower case subscript following a comma, with the subscript indicating the coordinate with respect to which it is differentiated.
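As an illustration only (the symbols here are generic, not tied to any particular equation in the thesis), the convention reads as follows:

```latex
% Illustrative uses of the notation convention (generic symbols):
%   left superscript  = time of the configuration in which the quantity occurs
%   left subscript    = time of the configuration of measurement (or, for a
%                       derivative, of the coordinate differentiated against)
%   right subscript i = i-th component; ",j" = differentiation w.r.t. x_j
\[
{}^{t}u_i, \qquad {}^{t+\Delta t}_{\;\,0}u_i, \qquad
{}^{t}u_{i,j} = \frac{\partial\;{}^{t}u_i}{\partial\;{}^{0}x_j}
\]
```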

0.3 List of Symbols

Following is a list of symbols used throughout the thesis.

Symbol - Description
C_i - Frame of reference i.
T_ij - Transformation matrix from frame j to i.
θ_i - Joint angle i.
x_e - Coordinate frame e.
α_(i-1) - Denavit-Hartenberg angle between Z axes.
a_(i-1) - Denavit-Hartenberg distance between Z axes.
d_i - Denavit-Hartenberg distance between X axes.
J(θ) - Jacobian.
J̄ - Pseudo-Jacobian.
T - Kinetic Energy.
U - Potential Energy.
m_i - Mass of link i.
v_i - Velocity of center of mass of link i.
ω_i - Rotational velocity of link i.
I_i - Inertia of link i.
h_i - Height of link i.
g - Gravitational Acceleration.
L - Lagrangian equation.
I(θ) - Inertia Matrix in Joint Space.
C(θ, θ̇) - Coriolis and Centrifugal Matrix in Joint Space.
N(θ) - Gravitational Matrix in Joint Space.
τ - Torque input.
F_ext - External Force Matrix.
M(X) - Inertia Matrix in Cartesian Space.
D(X) - Damping Matrix in Cartesian Space.
K(X) - Stiffness Matrix in Cartesian Space.
N(X) - Gravitational Matrix in Cartesian Space.
x_c, z_c - Projected coordinates onto camera frame.
X_tc, Y_tc, Z_tc - Target coordinates in camera frame.
X_t, Y_t, Z_t - Target coordinates in target frame.
V_ij - Rotational transformation between target and camera frame.
X_0, Y_0, Z_0 - Translational transformation between target and camera frame.
Ω - Camera Yaw.
Φ - Camera Pitch.
Θ - Camera Roll.
e - Error vector.
P_k - Error Covariance Matrix.
F_k - Predictive Model Matrix.
H_k - Measurement Model Matrix.
K_k - Kalman Gain Matrix.
R_k - Measurement Covariance Matrix.
Q_k - System Model Error Matrix.
X_k - State Space.
ω - Natural Frequency.
ζ - Damping Ratio.
v(t) - Vibrational Response.
A - Amplitude of response.
x_DD - Digital Dynamic Damping.
ΔG - Grasping Potential.
M_i - Grasping Offset.
δ - Beam Deflection.
E - Young's Modulus.

1 Introduction and Justification

Summary: In this chapter, we define the problem, justify the undertaking of the study and outline the method of approach adopted in achieving the set objectives. Furthermore, we provide a summary of the layout of the thesis.

1.1 Introduction

The pursuit of autonomy in space robotics has been in effect since the first man-made probe left the gravity well of Earth on the 4th of October, 1957 [3]. The use of autonomous robotics is considered by many to be the holy grail of space exploration and servicing; unleashing the potential to perform dangerous and repetitive tasks, the chance to have persistent omnipresence in space, to repair or refuel previously extortionate assets, and to realize as-yet undreamed of undertakings. Autonomy in robotics provides the ability to perform decision-making tasks without relying on a human-in-the-loop. One of the main disadvantages of operations in space is the communication delay due to vast distances. For example, a Mars rover experiences an average communication delay of 3 minutes at the shortest and 22 minutes at the longest, the latter occurring at the largest possible superior conjunction, estimated from the aphelion of Mars and the aphelion of Earth [4]. This renders tele-operation impractical if not completely impossible. In situ decision making and control can provide a huge improvement in productivity and avoid costly delays that could result in missed opportunities, or even worse, collisions.
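These delay figures follow directly from one-way light time; a quick back-of-the-envelope check (a sketch using approximate published orbital radii, not values from the thesis) reproduces them:

```python
# One-way light-time to Mars at the geometric extremes (approximate).
C = 299_792_458          # speed of light, m/s
AU = 1.495978707e11      # astronomical unit, m

r_earth_aphelion = 1.017 * AU
r_mars_perihelion = 1.381 * AU
r_mars_aphelion = 1.666 * AU

# Closest approach (opposition-like geometry): distances subtract.
d_min = r_mars_perihelion - r_earth_aphelion
# Farthest (superior conjunction): Sun between the planets, distances add.
d_max = r_mars_aphelion + r_earth_aphelion

print(f"min one-way delay ~ {d_min / C / 60:.1f} min")   # ~3 min
print(f"max one-way delay ~ {d_max / C / 60:.1f} min")   # ~22 min
```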

1.2 Robotics in Space

Robotics employed in space applications is not a novel concept, with robots such as the CANADARM (1981), the Sojourner Rover (1997), the twin Mars rovers Spirit and Opportunity (2003), and the Phoenix Mars lander (2008) being excellent examples of the work performed in the past. However, there is still a large portion of a potential field that has yet to be fully exploited in space: autonomous robotics [5]. A high percentage of robotics in space currently employs human-in-the-loop controls [4, 6], which lay the majority of the decision-making and guidance decisions in human hands. The use of autonomous robots in space missions would greatly reduce costs and risks, allowing the extension of life and configurability of current space assets [7]. However, the potential use of autonomous robotics in space is a vastly unexplored area, with few venturing the risk and costs needed to develop the required technologies. Examples of forays into the field can be seen in current space missions such as the European Space Agency's (ESA) Autonomous Transfer Vehicle (ATV) currently used to replenish the ISS [8], Orbital Express' mission from MDA and DARPA [9], JERICO from ESA [10], and NASA's DART program for the Demonstration of Autonomous Rendezvous Technology [11]. Table 1.1 summarizes autonomous space robotic mission activities in the past three decades. One important and growing area for the application of space robotics is the on-orbit servicing (OOS) of failed or failing spacecraft. Typical OOS missions involve tasks like inspecting, repairing, refueling, and/or reconfiguring or upgrading existing space assets [12, 13, 14]. Figure 1.1 illustrates the servicing baseline for satellites, where failures (red) typically occur, and where on-orbit servicing (yellow) can be applied. Recently, OOS has extended its reach into the area of space debris mitigation [15]. Currently, OOS is an expensive endeavour where purpose-built missions are launched with trained astronauts and tools to service one specific space asset. For instance, NASA has recently explored (February 2012) the technical feasibility of refuelling a satellite (NOAA weather satellite) in space using the Dexter robotic manipulator manufactured by MDA. The mission would be costly, dangerous, and focus on the refueling of only one space asset. The generalization of OOS procedures using an autonomous robotic manipulator would greatly reduce the costs and risks, and provide life-extendibility and upgradability to assets for a cost orders of magnitude less than launching a new asset or a human-driven mission. A typical OOS mission involves the approach and capture of the target spacecraft to be serviced by the robotic manipulator on the servicing spacecraft. The most dangerous phase for any autonomous capture operation is the initial contact between the target and the manipulator. The anticipated risks include damage from collision, separation between the servicing and target spacecraft from improper initial contact, a missed grab, or simply software mistakes that can lead to many unforeseen scenarios. Therefore, the robotic manipulator on the servicing spacecraft should include machine vision, sensor mechanisms, and autonomous

Table 1.1: Summary of Autonomous Space Robotic Missions

Name             Date  Agency    Auton.  M-i-t-l  Comments
ETS-VII          1997  NASDA     Yes     No       First mission to conduct autonomous rendezvous docking operations successfully.
Deep Space One   1998  NASA      No      No       Test-bed for new technologies, e.g. autonomous control system.
JERICO           1998  ESA/ASI   Yes     Yes      Ground operation and simulation prior to command initiation.
Mars Rover       2003  NASA      Yes     Yes      Followed commands and reported success.
DART             2005  NASA      Yes     No       Mission aborted due to propellant depletion.
Orbital Express  2007  DARPA     Yes     Yes      Human verification at each step. Aborted during first approach.
ATV              2008  ESA       Yes     Yes      Human verification at each step.


Figure 1.1: On-Orbit Survey Model [1]

control algorithms to accomplish autonomous rendezvous and grapple of a variety of interfaces traceable to future spacecraft servicing operations. The doctoral research presented forthwith will develop a control theory employed to track and capture a non-cooperative dynamic target and bring it to a relative halt within its operational space. The theory will be validated through a custom-built robotic manipulator and sensor suite, assembled and quantitatively analysed with the use of a space-simulate target and the necessary sensors.

1.3 Justification of the Study

1.3.1 Challenges of On-Orbit Servicing Robotic Manipulator

In a complex system such as the proposed one, many difficulties will arise. The standard anticipated complications will come from the integration of complex systems, debugging and mapping the interactions of the subsystems, and troubleshooting any errors that occur. The technical challenges which arise from this research are varied, detail oriented, and complex. The first will be the theoretical development of the control theory and associated models. Secondly, the integration of the several subsystems and ensuring reliable performance will prove a consistent and persistent problem. In order to validate the experiment, the following will be required: vision system, mobile target, target sensors, robotic manipulator, manipulator software, control software, target software, power subsystem for all hardware, autonomy algorithm, and communication program. All of these need to interact flawlessly with each other in a real-time environment in order to accomplish the given task. Furthermore, in terms of challenges, the real-time nature of this system, seeing as it will be operated in a real-world environment, places strict hardware and software requirements and demands that the control loop rate be maximized in order to ensure timely execution. The development of the custom controller provided a few unique challenges, mainly due to the nature of the hardware selection. To date, only a few advanced robotic laboratories, such as Germany's DLR lab, have examined hardware-testable autonomous capture operations in a simulated space environment, and these have the habit of being expensive to set up. We are trying to accomplish a validating procedure for a similar environment, and keep it at a relatively low cost. Much research has been performed in the area of space robotics, such as dynamic tracking [16] and capture [11] of cooperative or non-cooperative target spacecraft autonomously, autonomy control theories [9], flexible joint control [17] to eliminate the vibration of robotic manipulators, and the integration of the aforementioned systems [11].

The dynamic tracking of a target spacecraft requires a reliable and generic vision system to provide a bio-mimetic equivalent sensing ability. This general and flexible sensing scheme shall allow for a wide range of tracking and target variability as well as providing the necessary information. However, the challenges associated with the vision system are numerous. First is the field noise level, making it difficult to distinguish between desired targets and background information [18]. Second is the tracking of features and ensuring that the information provided from the initial lock is employed in order to reduce cost and increase speed. Third is the possibility of the vision system inverting the picture due to the two-dimensional nature of the information. Additionally, the standard algorithms require either an iterative convergence algorithm or a subtractive background algorithm applied to the signal in order to determine the orientation and position of the object. Both approaches can become computationally expensive or cumbersome and may result in signal lag or frequent loss of target locks. Predicting the path of the target becomes essential in order to align the gripper and the grasping bar of the target. While the system is in motion, both the arm and the target are moving relative to each other as well as relative to the universal frame of reference. Tracking and predicting the intercept orientation as well as location is very complicated when attempting to do the operation in real time. The autonomy control theory is required to capture the non-cooperative target spacecraft autonomously. It involves the autonomous decision making and executes the capture in a controlled manner. The challenges which arose in autonomy involved deciding on the autonomy algorithm [9], establishing the layers, and establishing the node network. In order to function properly, an appropriate metric needs to be selected [6, 19] which links the decision-making procedure and the stages of the operation succinctly. Lastly, establishing the network in a reliable and repeatable fashion, while maintaining the framework of a real-time system [20], rounds out the challenges for the autonomy. Finally, flexibility exists in the real joints of space robotics due to joint coupling for shaft misalignments, lightweight structure, and joint elasticity. Past mission experience showed that it caused residual vibration of robotic arms after executing control commands if not controlled properly [21]. The critical issue in flexible joint control is the sacrifice of precision and the complexity in the dynamic behaviour of the structure [22]. Much work has been done in this area in terms of force control, force compensation, dynamic positioning, and speed response [23, 24, 25]. However, the precision control of a flexible joint with variable natural frequency is still an engineering problem needing to be solved [26]. The proposed functional prototype system that will be discussed is a test-bed that has been designed, built, and programmed from the ground up. It is a 6

degree-of-freedom bio-mimetic robotic manipulator equipped with vision and intelligence. The goal is to demonstrate autonomous capture of a non-cooperative target and to provide a dynamic and responsive arrest. In order to accomplish this feat, several tools and methodologies were employed, including photogrammetry, Kalman filtering, dynamic modelling and simulation, complex hardware and software interactions, and a quantitative test set-up.
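The intercept-prediction role played by the Kalman filtering mentioned above can be sketched in a few lines. This is a minimal constant-velocity illustration with made-up numbers, not the filter implemented later in the thesis:

```python
import numpy as np

# Constant-velocity prediction of a target's future position -- a minimal
# sketch of the kind of propagation a tracking Kalman filter performs.
# State x = [px, py, vx, vy]; dt and the noise levels are illustrative.
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

def predict(x, P, Q):
    """One Kalman prediction step: propagate state and covariance."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

x = np.array([0.5, 0.2, 0.05, 0.0])   # position (m) and velocity (m/s)
P = np.eye(4) * 0.01                  # state covariance
Q = np.eye(4) * 1e-4                  # process-noise covariance

# Propagate ten steps ahead to estimate the intercept point one second out.
for _ in range(10):
    x, P = predict(x, P, Q)
print("predicted position in 1 s:", x[:2])
```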

1.3.2 Limitations of Existing Treatments

Current applications and research in the field attempt to treat specific limitations in robotics and joint control. Several limitations exist in terms of both the individual components and the full system. The first limitation occurs when examining the controller. Compliant adaptive control is a popular field of study [27, 28, 29, 30]; however, those authors make assumptions that are not applicable to our field of application. These include (i) non-real-time operation, (ii) specific sensor dependence, and (iii) rigid joint requirements. Additionally, existing research often relies on simulations in order to validate its findings [11, 16, 31, 32]. Experimentation provides a deeper understanding, and often complications, that expose weaknesses or gaps in knowledge in the field.

1.4 Research Objectives

It is vitally important that when overcoming the difficulties inherent in developing the capture theory, the solution should provide as much freedom and dexterity as possible. The proposed study is to develop viable vision-based autonomous robotic technology which combines all aspects of the autonomous capture operation of a space asset and to demonstrate the newly developed technology by designing and constructing a functional working prototype. The objectives of the research are as follows:

1. Control Theory - To develop an innovative control theory which fuses together compliant behaviour, force control, tracking, and prediction in order to effect a capture operation on a non-cooperative target operating in a zero-gravity simulated environment.

2. Autonomy - Provide basic decision-making capabilities and intelligence to the manipulator, particularly in order to transition between operational phases of the capture operation without requiring human-in-the-loop operators.

3. Validation - Validate the crafted theory by designing, constructing, and programming a fully operational robotic manipulator and quantitative target in order to perform zero-gravity simulated captures and evaluate the forces and accelerations being imparted from the gripper to the target.

4. Evolution - Within hardware limitations, evolve the control and autonomy development in order to improve upon the initial results from the purely theoretical development.

The development of these objectives will lead to a fully operational capture system with an adaptable behaviour able to react to a developing scenario in order to effect a capture. The testing will begin with relatively static operations and increase the speed difference in order to test the capabilities of the tracking system and predictions.

1.5 Research Methodology

The approach taken for this project is to begin with a full-field literature review focusing on space robotics, compliant control, autonomy, and visual servoing. This will be done in order to establish the forefront of technology and research in these fields and lay out a research plan for the undertaken study. The research approach involves two sections, one for the theory and one for the validation procedure. The theory research methodology will be as follows.

Control Autonomy Visual Robotic Theory Servoing Model Kinematic Neural Network Photogrammetry Kinematic Control Model Inverse Dynam ic Genetic Kinematics Dynamic Control Algorithm Model Denavit-Hartenberg Compliant Grasping Control Pose-Estimation Strategy Visual Kalman Alignment Control Filtering Compensation Force Control

Figure 1.2: Research Methodology Theory Focus

9 servoing research. Pose estimation as well as visual Kalman filtering will help in determining and stabilizing the information. Pinal theoretical research will focus on robotic modelling, actuating, and grasp­ ing. Although highly particular to the type of actuator and robotic design, the research into this field will illuminate opportunities and strategies that will facili­ tate the operation. In order to validate the developed theory, a fully functional robotic manipulator needed to be designed, constructed, and programmed. Figure 1.3 illustrates the process by which this was accomplished. This process will be detailed in chapter A which deals with the construction and in chapter B which deals with actuation.

1.6 Layout of Thesis

This thesis contains ten chapters and six appendices. Following this introductory chapter, Chapter 2 provides a critical review of relevant work in four main areas: (1) control theory and development, (2) autonomy and decision making, (3) sensors and visual servoing, and (4) robotic manipulator design and control. Chapter 3 presents the theoretical development of the control law, including the robotic manipulator model, the camera model, the vision-based control strategy, and the supporting motor and joint control schemes. In Chapter 4, we provide a detailed account of the computer validation and tuning of the controller and robotic manipulator model. Chapter 5 is devoted to photogrammetry, the basis of the main sensor employed in this research. Chapter 6 illustrates the software which governs the behaviour of the system. Chapter 7 details the experimental set-up, including the target and its evaluative sensors. Chapter 8 examines the experimental validation results for the basic capture scenarios. Chapter 9 compares the post-capture controller behaviours. Finally, in Chapter 10, we conclude the work, identify the original contributions of the thesis, and outline the directions for future work. The mechanical design, electrical design, and hardware of the test bed are detailed in Appendices A, B, and C, respectively.

[Figure 1.3 flowchart: Design Stage (robotic arm, electronics, grasper, microcontroller, controller, and power systems) → Construction Stage → Programming and Integration → Experimental Validation and Testing (full system integration, design test scenario, test, adjustments, conclusions).]

Figure 1.3: Research Methodology Validation Focus

2 Literature Review

Summary: This literature review covers four main topics. The first deals with robotic design, which underpins the validation platform in this thesis work. The second gives a detailed account of the robotic control theories and strategies that bridge the gap between inputs and outputs in robotics and are critical to robotic operation. The third and fourth deal with robotic autonomy in space and the sensing technologies commonly encountered in robotics.

2.1 Robotic Design

The focus on application will greatly affect the choice and design of the robotic platform [33]. The Shuttle Remote Manipulator System (SRMS or Canadarm) has six joints that correspond roughly to the joints of the human arm, with shoulder yaw and pitch joints; an elbow pitch joint; and wrist pitch, yaw, and roll joints. The end effector is the unit at the end of the wrist that actually grabs, or grapples, the payload [34]. The two lightweight boom segments are called the upper and lower arms. The upper boom connects the shoulder and elbow joints, and the lower boom connects the elbow and wrist joints. The design corresponds to a very specific set of requirements and abilities that enhances the ability of this platform to perform its tasks. The basics of platform design revolve around two types of actuators: linear and angular. Combining the two in various orders and degrees of freedom provides a large amount of freedom in the design of the platform (Fig. 2.1). However, even similar configurations, combined in different orientations, can yield different abilities in a robotic manipulator. A common choice in adaptable robotics is a Pieper configuration, also known as a human analog or bio-mimetic form [35]. The joint configuration illustrated in Fig. 2.2 is analogous to a human's body from torso to hand. It provides several advantages in terms of freedom of motion and orientations by separating the XYZ positioning of the hand and the orientation of the wrist. Additionally, this allows the controller to separate its goals into two: an XYZ position goal based on the first three degrees of freedom of the arm, and an orientation goal based on the final two degrees of freedom, thanks to the intersection of the actuation axes.

[Figure 2.1 panels: (a) R-R-T Assembly, (c) T-T-T Assembly, (d) R-R-R Assembly]

Figure 2.1: Different Robotic Configurations, where (a) represents a rotational-rotational-translational configuration, and so forth for (b), (c), and (d) [2]

The length of the arm is set in order to conform to the required workspace. Limitations include the torque of the motors versus the weight of the materials, which is a trade-off study between stiffness, strength, flexibility, and lightness [36]. Typically, the setting point on this design choice is the task. Working in a one-g environment with a goal of manipulating a target mass at a maximum arm length provides a required torque for the motors, a stiffness and strength requirement for the arm, and a power requirement for the manipulator [37]. In summary, knowing precisely what task is to be performed allows for a better design of the arm and selection of the components.
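As a worked illustration of the one-g sizing argument above (the masses and lengths here are placeholders, not the design values from Appendix A):

```python
# Worst-case static torque at the shoulder joint for a fully extended arm
# in a one-g environment -- illustrative numbers, not thesis design values.
g = 9.81          # m/s^2
L = 0.60          # fully extended arm length, m
m_arm = 2.0       # arm mass, kg (treated as lumped at mid-span)
m_payload = 0.5   # target/payload mass at the end effector, kg

# Arm weight acts at L/2, payload weight at L.
tau = m_arm * g * (L / 2) + m_payload * g * L
print(f"required holding torque ~ {tau:.2f} N·m")   # ~8.8 N·m
```

Any motor (with gearing) selected for the shoulder joint must exceed this holding torque with margin for acceleration and friction, which is what drives the stiffness-versus-lightness trade-off mentioned above.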

2.2 Robotic Control Theory

Control bridges the gap between inputs and outputs of a system. Once the decision is made, the controller determines how it is to be achieved. The most common failure is not mechanical or electrical failure, but rather the failure of the controller. Strategies in control theory vary greatly and their effectiveness depends greatly on the application [38]. For complex tasks such as the capture of non-cooperative targets employing robotic manipulators, robustness and stability become important issues in order to reliably achieve a given task [39]. Several strategies have come to the forefront to control specific problems in the dynamics of robotic manipulators [28] and flexible joints [40]. References [41, 42] look at the problem of controlling the dynamics of a system, a much more complicated problem than the kinematics of the system. In the following, we will establish the current approaches and challenges associated with the aspects of control theory involved in this research.

Figure 2.2: Bio-mimetic Pieper Configuration [2]

2.2.1 Vision-based Control

Robots operating in an unknown dynamic environment require reliable sensory information or fast and reliable communication between the robot's central control and the sensory data retrieval unit. The vision system mimics the human sense of sight and allows non-contact and flexible measurements of the environment. Vision-based feedback control, a combination of computer vision and feedback control, can increase the overall accuracy of autonomous robotic systems and enable a wide variety of sensing and tracking systems. Currently, there are two types of widely employed configurations in vision-based robot control: eye-in-hand and eye-to-hand [43]. The former refers to a camera mounted on the robot's end-effector while the latter to a fixed camera continuously observing the robot's end-effector within its work space. Obviously, the eye-in-hand has a limited but precise view of the scene and its control accuracy increases as the camera approaches the target. The position and orientation of the camera is, therefore, directly affected by the robot's movement. Alternatively, the eye-to-hand has a less precise but global view of the work space, which makes it easier to avoid the loss of target. While each configuration has its own advantages and drawbacks, both are used in real applications. Eye-in-hand systems can be widely found in research laboratories and are being used in tele-operated space robotic manipulators and inspection systems in hazardous environments such as disaster scenes and nuclear facilities. Eye-to-hand systems are used in vision-based pick-and-place at manufacturing lines where robots are assisted by networked cameras. Depending on the type of error feedback used in the control law, vision-based robot control can be further classified into four categories: position-based, image-based, hybrid, and motion-based visual servoing. In position-based visual servoing, the control law directly uses the error between the desired and actual positions of the camera in the 3-Dimensional (3D) work space [44]. Therefore, it is also called 3D visual servoing. The main advantage of position-based visual servoing is that it can control the relative camera position with respect to the target directly in the work space to perform tasks such as grasping objects and avoiding obstacles. The major disadvantage of this approach is that the feedback of the 3D position error of the target must be estimated. If the camera is poorly calibrated and the target's 3D model is erroneous, or if there are high measurement noises in the signal, the current and desired camera positions will not be accurately estimated. In addition, position-based visual servoing may fail due to the target being out of the camera's field of view. In image-based visual servoing, the control error is calculated directly between the desired and actual positions on the 2-Dimensional (2D) image plane of the camera [45]. Thus, it is referred to as 2D visual servoing. In general, image-based visual servoing is free from errors in the target model and more robust to camera calibration errors and measurement noise in images than position-based methods [46]. However, its convergence is theoretically ensured only around the desired position (whose domain seems to be impossible to determine analytically). Another disadvantage of image-based visual servoing compared to the position-based one is the lack of depth knowledge of the 3D target, and additional care is needed to provide a reasonable estimation of the depth.
Hybrid visual servoing, referred to as 2.5D visual servoing in the literature, takes advantage of both 3D and 2D visual servoing by calculating the errors partially in the 3D work space and partially on the 2D image plane [47]. It does not

need any 3D model of the target and ensures the convergence of the control law in the whole work space. Finally, motion-based visual servoing calculates the difference between the optical flow measured in the image plane and the desired optical flow as the error in the control law [48]. It can be used in applications such as contour tracking [49], camera self-orientation, and docking. The major disadvantage of motion-based visual servoing is the high servoing rate imposed on the controller. Considering the advantages of the different vision-based control technologies and the nature of on-orbit servicing, which requires an autonomous capture of a non-cooperative spacecraft by a robot manipulator, it is natural for this thesis work to choose a robot manipulator system of eye-in-hand configuration controlled by position-based visual servoing, enhanced by the extended Kalman filter to reduce noise in images. This is also reflected in the design of current space robots in service; for instance, Marshall et al. with intuitive human guidance of robots [50], Yongchun et al. in mobile robotics [51], and Park et al. with field-of-view restraint robotics [52]. Position-based visual servoing requires the estimation of the target's position and orientation relative to the end-effector. The widely used approach in estimating the position and orientation of the target is photogrammetry with a perspective camera model [53]. The camera model can be either calibrated or non-calibrated [54]. The calibrated camera model is simple and straightforward, while much research is actively conducted in dealing with non-calibrated camera models [55]. Since the current work focuses on the robot control instead of the camera model, a calibrated camera model is adopted in the thesis. The photogrammetry requires knowledge of the target's shape in advance and takes an initial guess as to the transformation of the camera to the target plane. Employing the least-squares method, the initial guess is corrected until the corrective term becomes insignificant. The transformation output, containing the information relating the target and the global framework, is then fed through a series of Kalman filters in order to reduce noise and jerks in the estimation. Finally, the main and common limitation of all visual servoing techniques is the image processing of natural images to extract robust features for visual servoing [56]. The current work employs the existing code library OpenCV to facilitate the programming for the vision.
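Since the thesis adopts a calibrated camera and the OpenCV library, the photogrammetric pose-estimation loop described above can be sketched with OpenCV's standard solvePnP routine. The target geometry, image measurements, and intrinsics below are placeholders, not the thesis's calibration values:

```python
import numpy as np
import cv2

# Minimal position-based pose estimation with OpenCV, in the spirit of the
# photogrammetry described above: known target geometry + calibrated camera.
object_points = np.array([[0, 0, 0], [0.1, 0, 0],
                          [0.1, 0.1, 0], [0, 0.1, 0]], dtype=np.float32)
image_points = np.array([[320, 240], [400, 242],
                         [398, 320], [318, 318]], dtype=np.float32)
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0, 0, 1]], dtype=np.float32)   # calibrated intrinsics
dist = np.zeros(5)                            # assume no lens distortion

# solvePnP iteratively refines an initial pose guess by least squares,
# minimizing reprojection error -- the corrective loop described above.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation, target frame -> camera frame
    print("camera-frame target position (m):", tvec.ravel())
```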

2.2.2 Impedance Control

One of the fundamental requirements for the success of a robotic capture task is the capability to handle interaction between manipulator and environment. The quantity that describes the state of interaction more effectively is the contact force

at the manipulator's end-effector [56] and the motion of the robotic manipulator and the manipulated object. A high contact force, resulting from the relative motion between the manipulator and the manipulated object, is generally undesirable since it will stress and damage the structures of the two contacting objects. Hence, an effective control strategy is needed that is able to control the contact force as well as the motion of the robotic manipulator simultaneously. The framework of impedance control is known as one of the basic strategies for multiple-input control [57]. The impedance control was developed by Hogan in 1987 [58] specifically to deal with a robotic manipulator interacting with an environment through multiple inputs. Impedance control is based on the concept that the controller should be used to regulate the dynamic behaviour between the robot manipulator motion and the force exerted on the environment rather than considering the motion and force control problems separately. It fits the control requirement in the autonomous capture of a non-cooperative spacecraft, where the robotic manipulator is required not only to capture the spacecraft but also to control the grasping force to avoid damaging the spacecraft.

Several variations have been developed in the intervening years that have enhanced specific behaviours in the controller. Synergies between impedance and other control theories have been explored in terms of fuzzy logic [59], learning or adaptive impedance control [60, 61], force or collision control [62, 63], and even more complex control hybrids [64]. The robustness of task execution can be improved by several means, in particular the control strategy. It is well known that impedance control can be used to account for position uncertainties [65]. The compliance properties imposed on the control system prevent excessively large forces from being exerted, provided that the controller parameters are chosen appropriately.

Compliance is really the key feature that impedance control brings to robotic behaviour [57], specifically when interacting with an unknown external force or environment. Compliance is the inverse of mechanical stiffness, otherwise known as the ability to transfer an input force or displacement into low-rigidity motion. However, the impedance accuracy in control comes at the cost of robustness [66], and a trade-off analysis between accuracy and robustness needs to be carefully conducted to achieve optimal control of the multiple objectives involved in impedance control. A recent approach [67] to this issue is to introduce the desired impedance through the internal model control structure while compensating the non-linear dynamics of robot manipulators by the time-delay estimation technique.

2.2.3 Speed and Acceleration Control

Continuity in robotic motion is a necessary requirement for smooth operation. Therefore, any robot motion must be specified by three components: the desired position, the desired velocity, and the desired acceleration [39]. This can be achieved by controlling the speed and acceleration of robotic actuators such as the stepper motors at the joints employed in our robot manipulator. This is necessary to execute the control commands output from the impedance controller smoothly, especially when transforming the control strategy between the joint and work spaces.

In order to generate the profiles for smooth motion as well as the desired dynamic behaviour, several schemes exist: generic speed schemes [68], high-speed schemes [69], and acceleration feedback schemes [70]. The most basic yet effective scheme is the driven position and velocity profile. It involves generating the desired acceleration profile and obtaining the velocity and position profiles by integrating it forward [71].
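As a concrete illustration of this scheme, the sketch below (not from the thesis; the bang-coast-bang profile shape, function name, and parameter values are assumptions) generates a desired acceleration profile and integrates it forward to obtain the velocity and position profiles.

```python
import numpy as np

def trapezoidal_profile(distance, v_max, a_max, dt=0.001):
    """Generate acceleration, velocity, and position profiles for a
    point-to-point move by integrating a bang-coast-bang acceleration."""
    t_acc = v_max / a_max                 # time to reach cruise speed
    d_acc = 0.5 * a_max * t_acc**2        # distance covered while accelerating
    if 2 * d_acc > distance:              # triangular profile: never reaches v_max
        t_acc = np.sqrt(distance / a_max)
        d_acc = 0.5 * a_max * t_acc**2
    t_coast = (distance - 2 * d_acc) / (a_max * t_acc)
    t_total = 2 * t_acc + t_coast
    t = np.arange(0.0, t_total, dt)
    acc = np.where(t < t_acc, a_max, np.where(t < t_acc + t_coast, 0.0, -a_max))
    vel = np.cumsum(acc) * dt             # velocity integrated from acceleration
    pos = np.cumsum(vel) * dt             # position integrated from velocity
    return t, acc, vel, pos

# Cruise speed chosen to match the 1.0 cm/s approach limit used later on.
t, a, v, x = trapezoidal_profile(distance=0.5, v_max=0.01, a_max=0.005)
```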

2.2.4 Input Shaping Control for Flexible Joint

For efficiency reasons, many systems that once consisted of heavy and stiff materials are now being constructed from lighter materials. These lighter-weight materials lessen the mass and can increase the speed of a system, but can also introduce flexibility into the robot joints. As a result, the end-effector of the robot manipulator vibrates after moving to the desired position. In order to achieve high performance, many control strategies have been developed to suppress the residual vibration. One common approach is feedback control with the notions of inverse dynamics for rigid robots [72]. However, this approach is applicable only if the link acceleration and jerk are available for feedback. To overcome this limitation, a feed-forward control strategy known as input shaping is proposed to control the residual vibration, provided the dynamic characteristics of the system, such as the modal frequencies and damping ratios, are known.

Input shaping is a control technique for reducing vibrations in computer-controlled machines. The method works by creating a command signal that cancels its own vibration. That is, vibration caused by the first part of the command signal is cancelled by vibration caused by the second part of the command. Input shaping is implemented by convolving a sequence of impulses, an input shaper, with any desired command. The shaped command that results from the convolution is then used to drive the flexible system. If the impulses in the shaper are chosen correctly, then the system will respond without vibration to any unshaped command. The amplitudes and time locations of the impulses are obtained from the system's

natural frequencies and damping ratios. Shaping can be made very robust to errors in the system parameters [73]. The typical desired output from input shaping is zero-vibration motion from point to point. However, dynamic motion and response complicate the shaping of the signals, require a very detailed and precise kinematic and dynamic model of the system, and become invalid when in contact with an unknown external force. This is a major problem for our purpose: when the grasper makes contact with the target, the model employed to establish the input shaper's convolution will become invalid and result in a potentially disastrous command. A potential solution to make the shaper less sensitive to variations is to add more impulses, resulting in a longer shaper [74]. Adaptive input shaping (AIS) is the idea of taking a short input shaper that is less robust and tuning it to the system frequency [75]. The added cost of AIS is the computational complexity of tuning the shaper to the correct frequency.

A few researchers have previously developed adaptive input shaping methods. Tzes and Yurkovich [76] use a fast Fourier transform on a portion of the residual vibration to improve the estimate of the flexible system frequency. Bodson [77] uses an indirect method to identify the system parameters and tune the shaper. Park and Chang [78] develop the Learning Input Shaping Technique, which applies a simple learning rule that uses the phase and magnitude of the residual vibration to obtain a better frequency estimate. This method, however, is limited to repetitive manoeuvres. Rhim and Book [79] applied the Optimal Arbitrary Time-Delay Filter, which allows negative impulses but can be less robust than traditional shapers. The methods of [76, 78, 79] can be applied to any reference input, but adaptation can only take place between manoeuvres. Bodson's method continuously adapts to the system parameters, and he demonstrates this method on a position reference profile. Though it can be extended to velocity and acceleration profiles, it is not clear whether the steady-state position will be the same unless the shaper is only updated between manoeuvres. As the capture of the target is not a repetitive task, these shaping techniques all leave requirements unfulfilled. The goal of our research in input shaping is to take advantage of the autonomy structure in our robotic system to vary the shaper in accordance with the stage of the capture. This will allow for the design of specific input shapers for specific purposes and overcome several of the limitations inherent in the control strategy.

2.3 Autonomy

Autonomy is the process by which the robot, based on its inputs, controller, and knowledge, makes a decision in order to achieve a goal [9]. The reasons why autonomy

in robotic exploration and servicing is attractive stem from the growth in productivity and return on investment for a given mission as it increases the operational time [6]. The complexity of the decision-making process is such that a generic autonomous decision-making protocol is nearly impossible to create. Instead, standard practice dictates that decision-making algorithms are custom designed for the task and system [19, 20]. However, their structure can be established along the guidelines of some generic systems [80]. Far from being a complete review of all autonomy systems, the following is an examination of popular algorithms that have become common in robotics.

2.3.1 Learning Neural Network

Learning neural networks, or adaptive neural networks, are based on bio-mimetic computing systems [37]. They simulate the neuron structure of a biological central nervous system, which employs a series of simple structures to perform collective and parallel processing that achieves the results of a much more complex system without knowing the system equations. The learning, or adaptive, aspect of the network is the ability to shift the emphasis or weights of the goals that alter the process. These attributes can be optimized for different goals, giving flexibility to a single structure. The standard structure is organized in three layers: the input layer, the output layer, and a typically hidden layer. Although the interactions in the hidden layer can become quite complex, each node (or neuron) is a simple processing unit. The interaction between the nodes is what creates the network. The word network in the term neural network arises because the function f(x) is defined as a composition of other functions gi(x), which can further be defined as a composition of other functions. This can be conveniently represented as a network structure, with arrows depicting the dependencies between variables (seen in Fig. 2.3). A widely used type of composition is the non-linear weighted sum [81].

The advantage and disadvantage of neural networks are the same: unpredictability. The ability of the structure to achieve complex results from basic functions can create the appearance of intelligent behaviour. However, when undesired results arise, resolving the issues becomes a complicated procedure. In order to combat this deficiency, the neural network employed for the robotic manipulator will be limited in depth and complexity to two layers and three stages. In addition, neural networks require a large diversity of training for real-world application.

We are employing neural networks in order to achieve a smooth, reliable, and autonomous capture of a non-cooperative target. Due to the nature of the scenario, a flexible solution is required that is capable of making decisions such as desired

alignment, approach vector, and transitions between the stages of capture: approach and align, capture, and deceleration. A basic neural network allows the creation of a simple set of rules designed to safely and reliably capture the target.

Figure 2.3: Neural Network Capture Structure (the node labels include position, velocity, acceleration, force, stiffness, damping, and inertia across the approach, capture, and deceleration stages)

2.3.2 Genetic Algorithm

A genetic algorithm optimizes a solution that has been defined within a domain as a population [82]. The algorithm evolves through generations of solutions, the first of which is typically generated as a random population. In each generation, the population of solutions is evaluated; the leaders are selected and used to create the next generation through recombination and mutation. Commonly, the algorithm is terminated when either a maximum number of generations has been produced or a satisfactory fitness level has been reached by the population. An advantage of employing a genetic algorithm to optimize a solution occurs when the optimal solution is not clearly known or defined. The system evolves through continuously improving populations, and termination criteria can be defined through the comparison of effort against gain. One of the difficulties lies in ensuring the solution does not converge to a local minimum while ignoring the global optimum. This problem may be alleviated by using different fitness functions, increasing the rate of mutation, or using selection techniques that maintain a diverse population of solutions. However, the No Free Lunch theorem [83] states that there is no general solution to this problem. It is important to ensure, when using genetic algorithms, that the solution space is well understood and defined, which for the capture of a non-cooperative target cannot always be the case. Therefore, the genetic algorithm will be avoided as a possible solution to the autonomy problem.
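For illustration, a minimal genetic algorithm capturing the generate-evaluate-select-recombine-mutate cycle described above might look as follows; the fitness function, operators, and parameters are placeholders rather than anything used in the thesis.

```python
import random

def genetic_minimize(fitness, bounds, pop_size=30, generations=100,
                     mutation_rate=0.1, elite=4):
    """Minimal genetic algorithm: evolve a population of candidate
    solutions by selection, recombination (crossover), and mutation."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # evaluate and rank the population
        parents = pop[:pop_size // 2]              # the leaders are selected
        children = pop[:elite]                     # elitism keeps the best as-is
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]              # one-point crossover
            for i, (lo, hi) in enumerate(bounds):  # random mutation keeps diversity
                if random.random() < mutation_rate:
                    child[i] = random.uniform(lo, hi)
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Example: minimize a simple quadratic with a known optimum at (1, -2).
best = genetic_minimize(lambda p: (p[0] - 1)**2 + (p[1] + 2)**2,
                        bounds=[(-5, 5), (-5, 5)])
```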

2.4 Sensors

Sensor design and selection play a large role in the control of the robotic manipulator. Sensors are broken up into two separate fields: proprioceptive and exteroceptive sensors. The first is a classification of all sensors that sense the internal status of the robotic manipulator, while the latter covers sensors that sense the environment considered external to the system. Both types are necessary in order to provide the control inputs needed to respond properly.

2.4.1 Force Sensing

Force sensing is the ability to detect dynamic or static loads on the joints or end-effector of the robotic manipulator. Several control strategies rely heavily on accurate directional force measurements [84, 85]. Reference [86] provides a review of the development of tactile sensors for robotic hands and demonstrates that it is a very active research area. Some recent sensors have even achieved dynamic response [29] and spatial coverage [87] comparable to the human fingertip.

Standard approaches to force sensing typically employ pressure sensors or deflection sensors. However, Refs. [88] and [89] both examine the use of detailed models and highly precise vision systems in order to estimate joint torques and forces. Based on the same principle as the deflection sensors, the vision system detects misalignments in the system and compares them to an internal model. The deflections are then correlated to the force required to cause them.

2.4.2 Visual Servoing

Position-based visual servo defines its reference input as the relative position and orientation between the object and the robot hand [90]. The relative position and orientation are recovered from the images obtained by the hand-mounted camera. Photogrammetric measurement uses object size and shape to reconstruct the object position and orientation. Usually, the object feature points are extracted from the image, and the feature point positions in the image plane are used for computing the object position. When the number of feature points is 3 or 4, the object position and orientation can be reconstructed geometrically [91, 92]. However, the solution is vulnerable to image noise and the accuracy can suffer. Thus, many feature points are used to obtain a least-squares solution when real-time performance is not required. In visual servo, however, where the real-time requirement is fundamental, a smaller number of feature points is used and the continuity of their time series is exploited.

One of the methods examined employs a converging algorithm [18, 90] that uses an initial guess as to the relative position of the target with respect to the camera. It functions by calculating the collinearity and corrective terms and verifying whether the error is significant. When the corrective term drops to an insignificant level, the solution is found.

3 Theoretical Development of Control Law

Summary: This chapter provides a detailed account of the development of the fundamental principles, the configuration of the system, the kinematic and dynamic models of the system and camera, and the control laws which are employed in order to obtain the final product.

3.1 Control Problem Definition

Consider an eye-in-hand, bio-mimetic Pieper type of robotic manipulator as shown in Fig. 3.1, in which a camera is mounted next to the end-effector to monitor a target in 3D space. Assume that the target may move in the space and that its position and velocity are not known. Furthermore, we assume that the camera is a pin-hole camera with perspective projection and that the intrinsic parameters of the camera and the extrinsic parameters, i.e. the transform matrix between the camera and the end-effector, are known. To simplify the discussion, we will assume that the target always remains in the field of view of the camera. As outlined in the previous chapter, the issue of concern in this thesis is how to control a servicing space robotic manipulator to approach and capture a non-cooperative satellite autonomously. Therefore, the control problem can be defined as the following two problems:

1. Problem 1 - Given the desired position of the target in space, statically or dynamically, design a proper joint input for the manipulator with position-based visual servo such that the end-effector can move to the desired grasp position and orientation smoothly, assuming that only the target's 3D position and velocity are unknown and the intrinsic and extrinsic parameters of the camera are known.

2. Problem 2 - Once the end-effector is ready to capture the target, design a proper joint input for the manipulator to grasp the stationary or moving target with position-based visual servo and impedance control to minimize the contact force.

Figure 3.1: Schematic of an Eye-in-Hand, Bio-Mimetic Pieper Type of Robotic Manipulator in a Work Space

Figure 3.1 illustrates the orientation and location of each of the reference coordinate frames employed in the control of the robotic manipulator. Frame {C0} is the universal frame which is anchored to the world frame. Frame {C1} is anchored to the first link of the robotic manipulator and actuates with the first stepper motor.

Frames {C2} and {C3} are similarly anchored to links 2 and 3, respectively. Frames {C4} and {C5} intersect at the same location and represent the wrist actuations for the robotic manipulator. Frame {Cc} is the photogrammetric frame of reference and is anchored to the camera plane. Frame {C6}, not depicted, is the gripper frame which actuates the capture.

3.2 Robotic Manipulator Model

As shown in Fig. 3.1, the robotic system considered in the thesis is a bio-mimetic Pieper robot manipulator with a camera in its hand. The basic mathematical description of this system consists of the robot and the camera model. Generally, the end-effector position of the robotic manipulator can be represented in two different coordinate frames. One is the proximal space, joint space or joint angle space, which is defined by the joint positions and the joint angles of the robotic manipulator. The other is the distal space, often referred to as task space, Cartesian space, Euclidean space or World coordinate system. The Cartesian space is better suited for task definitions since it follows our natural understanding of positions in three-dimensional space, while the joint space is more suitable for control because a robotic manipulator is controlled in joint angle space (by applying torques at joints). Thus, we will define the control tasks in the work space and execute the control tasks in the joint space. The transformation between these two coordinate frames will be performed by the homogeneous or Denavit-Hartenberg transformation.

3.2.1 Kinematic Model of Robotic Manipulator

The robotic manipulator in Fig. 3.1 can be idealized mathematically as an open-chained manipulator model in the joint space as shown in Fig. 3.2. Let $\{X_e\} \in \mathbb{R}^3$ be the end-effector position vector in the work space, and $\{\theta\} = (\theta_1, \theta_2, \ldots, \theta_5) \in \mathbb{R}^5$ be the set of corresponding joint angles defined in the joint space. As the sixth degree of freedom, the gripper, does not affect the kinematics of the system, it is dropped during the kinematic modelling process. The kinematics of the robot manipulator gives the relationship between the joint positions and the corresponding position of the end-effector in the work space, such that,

$$\{X_e\} = [T_{10}(\theta_1)][T_{21}(\theta_2)][T_{32}(\theta_3)][T_{43}(\theta_4)][T_{54}(\theta_5)]\{X_5\} \qquad (3.1)$$

where $[T_{ij}]$ are homogeneous or Denavit-Hartenberg transformation matrices:

$$[T_{10}] = \begin{bmatrix} \cos\theta_1 & -\sin\theta_1 & 0 & 0 \\ \sin\theta_1 & \cos\theta_1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad
[T_{21}] = \begin{bmatrix} \cos\theta_2 & -\sin\theta_2 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ \sin\theta_2 & \cos\theta_2 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$[T_{32}] = \begin{bmatrix} \cos\theta_3 & -\sin\theta_3 & 0 & L_1 \\ \sin\theta_3 & \cos\theta_3 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad
[T_{43}] = \begin{bmatrix} \cos\theta_4 & -\sin\theta_4 & 0 & 0 \\ 0 & 0 & -1 & L_2 \\ \sin\theta_4 & \cos\theta_4 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$[T_{54}] = \begin{bmatrix} \cos\theta_5 & -\sin\theta_5 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -\sin\theta_5 & \cos\theta_5 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

and $\{X_5\} = \{x, y, z, 1\}^T$ is the homogeneous coordinate vector of the end-effector in the local coordinate system as shown in Fig. 3.2.


Figure 3.2: Schematic of Open Chained Manipulator Model in Joint Space

Figure 3.3 defines the Denavit-Hartenberg reference frames and identifies the links and their properties. Table 3.1 contains the identification card for the robotic manipulator.

Table 3.1: Robotic Manipulator Identification Card

Joint | α_{i−1} | a_{i−1} | d_i | θ_i
0 → 1 | 0 deg | 0 | 0 | θ_1
1 → 2 | +90 deg | 0 | 0 | θ_2
2 → 3 | 0 deg | L_1 | 0 | θ_3
3 → 4 | −90 deg | 0 | L_2 | θ_4
4 → 5 | +90 deg | 0 | 0 | θ_5

Taking the time derivative of Eq. (3.1) leads to the relationship between the joint velocity and the end-effector velocity in the work space.


Figure 3.3: Denavit-Hartenberg Frame Definitions

$$\{\dot{X}_e\} = J(\theta)\{\dot\theta\} \qquad (3.2)$$

where $J(\theta)$ is the so-called Jacobian matrix, such that,

$$J(\theta) = \begin{bmatrix} \dfrac{\partial x}{\partial \theta_1} & \cdots & \dfrac{\partial x}{\partial \theta_5} \\ \dfrac{\partial y}{\partial \theta_1} & \cdots & \dfrac{\partial y}{\partial \theta_5} \\ \dfrac{\partial z}{\partial \theta_1} & \cdots & \dfrac{\partial z}{\partial \theta_5} \end{bmatrix} \qquad (3.3)$$

Furthermore, the transformation of the end-effector acceleration between the work space and the joint space can be derived as

$$\{\ddot{X}_e\} = J(\theta)\{\ddot\theta\} + \dot{J}(\theta)\{\dot\theta\} \qquad (3.4)$$

For manipulator path planning, the inverse kinematics gives the joint angles $\{\theta\}$ required to reach the specified end-effector position $X_e$ such that,

$$\{\theta\} = f(X_e) \qquad (3.5)$$

In general this solution is non-unique [93], and additional treatments are required to eliminate the ambiguity of Eq. (3.5). One common technique is the so-called Resolved Motion Rate Control (RMRC) [94]. It makes use of the fact that the forward kinematics can be linearly approximated using the Jacobian, which relates

changes in joint angle space to changes in work space over a sufficiently small time step, such that,

$$\Delta\{X_e\} = J(\theta)\,\Delta\{\theta\} \qquad (3.6)$$

By calculating the inverse of the Jacobian, the inverse kinematics can be directly calculated for a small time step as follows:

$$\Delta\{\theta\} = J^{-1}(\theta)\,\Delta\{X_e\} \qquad (3.7)$$

In cases where the Jacobian matrix is non-square (rectangular), the inverse can be replaced by a pseudo-inverse $J^{+}$ [95], such that,

$$J^{+}(\theta) = \left[J^T(\theta)\,J(\theta)\right]^{-1} J^T(\theta) \qquad (3.8)$$

$$\Delta\{\theta\} = J^{+}(\theta)\,\Delta\{X_e\} \qquad (3.9)$$

However, the Jacobian matrix $J$ is not invertible in many cases, especially for redundant systems. There are many alternative approaches that address the inverse kinematics problem, such as Jacobian transpose methods [96], quasi-Newton and conjugate gradient methods [97, 98], and Levenberg-Marquardt damped least-squares methods [99, 100]. In the current work, we adopted the pseudo-inverse method and mapped all of the work-space interior singularities. In order to eliminate the singularity problem, we prevent the robotic manipulator from reaching such positions by hard-coding the system's bounds. Thereby, the system is limited in its workspace in the following manner.

Table 3.2: Robotic Actuation Card

Joint | Minimum | Maximum | Note
θ_1 | −90° | 90° | Singularity
θ_2 | 0° | 90° | Physical limitation and singularity
θ_3 | −60° | 90° | Hardware and singularity
θ_4 | −90° | 90° | Hardware
θ_5 | −90° | 90° | Hardware
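A minimal sketch of one RMRC iteration with the pseudo-inverse of Eqs. (3.8)-(3.9) and hard-coded joint bounds in the spirit of Table 3.2 is given below; the toy planar arm, its link lengths, and the limit values are illustrative assumptions, not the thesis manipulator.

```python
import numpy as np

def numerical_jacobian(fk, theta, eps=1e-6):
    """Finite-difference Jacobian of a forward-kinematics function fk:
    J[i, j] = d(X_e[i]) / d(theta[j]), cf. Eq. (3.3)."""
    x0 = fk(theta)
    J = np.zeros((x0.size, theta.size))
    for j in range(theta.size):
        dq = np.zeros_like(theta)
        dq[j] = eps
        J[:, j] = (fk(theta + dq) - x0) / eps
    return J

def rmrc_step(fk, theta, dx, limits):
    """One Resolved Motion Rate Control step: map a small work-space
    displacement dx to joint increments via the pseudo-inverse, Eq. (3.9)."""
    J = numerical_jacobian(fk, theta)
    J_pinv = np.linalg.pinv(J)            # [J^T J]^-1 J^T for full-rank J
    theta_new = theta + J_pinv @ dx
    lo, hi = limits
    return np.clip(theta_new, lo, hi)     # hard-coded bounds avoid singularities

# Toy planar 2-link example (L1 = L2 = 0.3 m), not the thesis arm.
L1 = L2 = 0.3
fk = lambda q: np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                         L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])
theta = rmrc_step(fk, np.array([0.3, 0.5]), np.array([0.001, -0.002]),
                  limits=(np.radians([-90, 0]), np.radians([90, 90])))
```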

3.2.2 Dynamic Model of Robotic Manipulator

The dynamics of the serial 5-link rigid robot manipulator considered can be derived using Lagrangian mechanics. In the absence of any intentional elastic energy storage capability and of friction or other disturbances, the potential energy, $U$, and kinetic energy, $T$, of the robotic manipulator can be expressed as,

$$T = \sum_{i=1}^{5} T_i \qquad (3.10)$$

The kinetic energy of link 1 is

$$T_1 = \frac{1}{2} I_1 \dot\theta_1^2 \qquad (3.11)$$

The kinetic energy of link 2

$$T_2 = \frac{1}{2} m_2 v_{c2}^2 + \frac{1}{2} I_2 \dot\theta_2^2 \qquad (3.12)$$

The kinetic energy of link 3

$$T_3 = \frac{1}{2} m_3 v_{c3}^2 + \frac{1}{2} I_3 \left(\dot\theta_2 + \dot\theta_3\right)^2 + m_3 L_1 L_2 \sin(\theta_3)\left(\dot\theta_2 + \dot\theta_3\right)\dot\theta_2 \qquad (3.13)$$

where

$$I = \int \rho(r)\, r^2\, dV(r)$$

$$U = \sum_{i=1}^{5} m_i g h_i = \frac{1}{2} m_2 L_1 g \sin\theta_2 + m_3 L_1 g \sin\theta_2 + m_4 g \left(L_1 \sin\theta_2 - \frac{L_2}{2}\cos(\theta_2 + \theta_3)\right) + m_5 g \left(L_1 \sin\theta_2 - L_2 \cos(\theta_2 + \theta_3)\right) \qquad (3.14)$$

where $g$ is the gravitational acceleration, $h_i$ is the height of the centre of mass of each link above a reference position, and $m_{ci}$ and $I_{ci}$ are the lumped mass and moment of inertia representations of the joints, the gearing, the linkages, the brakes, and the motors, as shown in Fig. 3.2. The $v_{ci}$ is the velocity of the lumped mass, which can be expressed as a function of the joint velocities and can be found in Appendix B. Then, we can define the Lagrangian to be used in Lagrange's equations as,

$$L = T - U \qquad (3.15)$$

Substituting Eqs. (3.10) to (3.14) into Eq. (3.15) and applying Lagrange's equation, the dynamics of a serial multi-link rigid robotic manipulator can be written in the joint space as,

$$I(\theta)\{\ddot\theta\} + C(\theta, \dot\theta)\{\dot\theta\} + N(\theta) = J^T(\theta)\{F_{ext}\} + \{\tau\} \qquad (3.16)$$

where $I(\theta)$ is the $5 \times 5$ symmetric positive definite inertia matrix of the manipulator, $C(\theta, \dot\theta)$ is the $5 \times 1$ vector of centripetal and Coriolis torques, $N(\theta)$ is the $5 \times 1$ vector of gravitational torques, $\{F_{ext}\}$ is the $3 \times 1$ vector of external loads applied on the end-effector, and $\{\tau\}$ is the $5 \times 1$ vector of applied actuating torques at the joints, respectively. Here, we assume the Jacobian matrix $J$ is invertible. The robotic dynamic equation in the joint space can also be transformed into the work space by pre-multiplying both sides of Eq. (3.16) by the inverse transpose of the Jacobian matrix, $J^{-T}$, such that,

$$M\{\ddot{X}_e\} + D\{\dot{X}_e\} + \{G_x\} = \{F_{ext}\} + J^{-T}(\theta)\{\tau\} \qquad (3.17)$$

where $X_e$ is the position vector of the end-effector in the work space, and

$$M = J^{-T}(\theta)\, I(\theta)\, J^{-1}(\theta) \qquad (3.18)$$

$$D = J^{-T}(\theta)\left[C(\theta, \dot\theta) - I(\theta)\, J^{-1}(\theta)\, \dot{J}(\theta)\right] J^{-1}(\theta) \qquad (3.19)$$

$$\{G_x\} = J^{-T}(\theta)\, N(\theta) \qquad (3.20)$$

3.2.3 Denavit-Hartenberg Transform

In this convention, coordinate frames are attached to the joints between two links such that one transformation is associated with the joint, [Z], and the second is associated with the link [X]. The coordinate transformations along a serial robot consisting of n links form the kinematics equations of the robot,

$$[T] = [Z_1][X_1][Z_2][X_2] \cdots [X_{n-1}][Z_n] \qquad (3.21)$$

where $[T]$ is the transformation locating the end-link.

In order to determine the coordinate transformations $[Z]$ and $[X]$, the joints connecting the links are modelled as either hinged or sliding joints (rotary or linear), each of which has a unique line $S$ in space that forms the joint axis and defines the relative movement of the two links. A typical serial robot is characterized by a sequence of six lines $S_i$, $i = 1, \ldots, 6$, one for each joint in the robot. For each pair of consecutive lines $S_i$ and $S_{i+1}$, the joint transformation is

$$[Z_i] = \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 & 0 \\ \sin\theta_i & \cos\theta_i & 0 & 0 \\ 0 & 0 & 1 & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3.22)$$

where $\theta_i$ is the rotation around, and $d_i$ is the slide along, the Z-axis. Either of the parameters can be constant depending on the structure of the robot. Under this convention, the dimensions of each link in the serial chain are defined by the screw displacement around the common normal $a_{i,i+1}$ from the joint $S_i$ to $S_{i+1}$, which is given by:

$$[X_i] = \begin{bmatrix} 1 & 0 & 0 & a_{i,i+1} \\ 0 & \cos\alpha_{i,i+1} & -\sin\alpha_{i,i+1} & 0 \\ 0 & \sin\alpha_{i,i+1} & \cos\alpha_{i,i+1} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3.23)$$

where $\alpha_{i,i+1}$ and $a_{i,i+1}$ define the physical dimensions of the link in terms of the angle measured around, and the distance measured along, the X-axis. In summary, the reference frames are laid out as follows:

• the Z-axis is in the direction of the joint axis

• the X-axis is parallel to the common normal: $X_n = Z_n \times Z_{n+1}$

• If there is no unique common normal (parallel Z-axes), then $d$ (below) is a free parameter. The direction of $X_n$ is from $Z_{n-1}$ to $Z_n$.

• the Y-axis follows from the X- and Z-axes by choosing it to form a right-handed coordinate system.

32 The transformation is then described by the following four parameters known as DH parameters:

• $d$: offset along the previous Z-axis to the common normal.

• $\theta$: angle about the previous Z-axis, from the old X-axis to the new X-axis.

• $r$: length of the common normal (also denoted $a$; if using this notation, do not confuse with $\alpha$). Assuming a revolute joint, this is the radius about the previous Z-axis.

• $\alpha$: angle about the common normal, from the old Z-axis to the new Z-axis.
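The sketch below assembles the $[Z_i]$ and $[X_i]$ transforms of Eqs. (3.22)-(3.23) and chains them as in Eq. (3.21); the example joint values are placeholders.

```python
import numpy as np

def Z(theta, d):
    """Joint transform [Z]: rotation theta about and slide d along the
    Z-axis, cf. Eq. (3.22)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, d],
                     [0,  0, 0, 1]])

def X(alpha, a):
    """Link transform [X]: rotation alpha about and offset a along the
    X-axis, cf. Eq. (3.23)."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1, 0,  0, a],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1]])

def chain(dh_rows):
    """Compose [T] = [Z1][X1][Z2][X2]...[Zn] from (theta, d, alpha, a)
    rows, cf. Eq. (3.21)."""
    T = np.eye(4)
    for theta, d, alpha, a in dh_rows:
        T = T @ Z(theta, d) @ X(alpha, a)
    return T

# Illustrative two-joint chain; the numeric values are placeholders.
T = chain([(np.radians(30), 0.0, np.radians(90), 0.1),
           (np.radians(-45), 0.0, 0.0, 0.2)])
```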

3.3 Camera Model for Vision-Based 3D Control

As outlined above, we consider a vision system mounted on the robotic link close to the end-effector. As shown in Fig. 3.1, the camera coordinate system, $\{X_c\}$, moves with the coordinate system $X_3$ fixed to the link $L_3$. The origin of the camera coordinate frame is located at the centre of the image plane with the Y-axis pointing to the target, the X-axis aligning with $Z_3$, and the Z-axis aligning with $X_3$, respectively. The coordinate transformation from the camera frame $X_c$ to the work space $(X_e, Y_e, Z_e)$ can be expressed as,

$$\{X_0\} = [T_{10}][T_{21}][T_{32}][T_a][T_b][T_c]\{X_c\} \qquad (3.24)$$

where $[T_{ij}]$ are the homogeneous Denavit-Hartenberg rotational transformation matrices:

$$[T_a] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad
[T_b] = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad
[T_c] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & y_{c3} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3.25)$$

The camera model is assumed to be a perspective projection with a known focal length $f$, as shown in Fig. 3.4. Assume a visual cue $P$ on a 3D target in space has been projected onto the 2D image plane of the camera. The coordinates of the projected point $P'$ can be established by the collinearity equations, such that,

$$x_c = -f\,\frac{X_{tc}}{Y_{tc}}\,; \qquad z_c = -f\,\frac{Z_{tc}}{Y_{tc}} \qquad (3.26)$$

where $(x_c, z_c)$ are the coordinates of the projected cue $P'$ on the camera's image plane and $(X_{tc}, Y_{tc}, Z_{tc})$ are the coordinates of the cue with respect to the camera coordinate system, respectively.

Figure 3.4: Photogrammetric Model of Camera

The coordinates of the visual cue (Xtc, Ytc, Ztc) can be further decomposed into two parts:

$$\begin{Bmatrix} X_{tc} \\ Y_{tc} \\ Z_{tc} \end{Bmatrix} = [T_{tc}] \begin{Bmatrix} X_t \\ Y_t \\ Z_t \end{Bmatrix} + \begin{Bmatrix} X_0 \\ Y_0 \\ Z_0 \end{Bmatrix}, \qquad [T_{tc}] = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \qquad (3.27)$$

where $[T_{tc}]$ is the rotation matrix from the body-fixed coordinate system of the target to the camera coordinate system, $(X_t, Y_t, Z_t)$ are the coordinates of the cue in the target coordinate system, and $(X_0, Y_0, Z_0)$ is the origin of the target coordinate system in the camera coordinate system, respectively. Furthermore, the rotation matrix $[T_{tc}]$ is a function of the Euler angles: pitch ($\Omega$, rotation about the X-axis), roll ($\Phi$, rotation about the Y-axis) and yaw ($\Theta$, rotation about the Z-axis). The transformation order from the target coordinate system to the camera coordinate system is defined as pitch, roll, and yaw. Substituting Eq. (3.27) into Eq. (3.26) leads to

$$x_c = -f\,\frac{r_{11} X_t + r_{12} Y_t + r_{13} Z_t + T_x}{r_{21} X_t + r_{22} Y_t + r_{23} Z_t + T_y} \qquad (3.28)$$

$$z_c = -f\,\frac{r_{31} X_t + r_{32} Y_t + r_{33} Z_t + T_z}{r_{21} X_t + r_{22} Y_t + r_{23} Z_t + T_y} \qquad (3.29)$$

where the $r_{ij}$ are functions of the Euler angles of the target with respect to the camera's image plane and can be found in Appendix B. These equations are highly non-linear and involve six unknowns: the three rotational angles and the three Cartesian coordinates, which are the orientation and position of the target relative to the camera coordinate system. Theoretically, a minimum of three points is required in order to solve for the six unknowns in Eqs. (3.28) and (3.29). In practice, four cues are used to provide system redundancy so that the tracking process can tolerate losing one target point [18]. In order to apply the photogrammetry model to the position-based visual servo, a set of pre-defined visual cues on the target, see Fig. 3.6, is used to extract the three-dimensional position and orientation of the target by photogrammetry.

Figure 3.5: Target Coordinates in Meters (cue locations include (0.003, 0, 0.109), (0.003, 0, 0.016), and (0.079, 0, 0.016))

Using Taylor's theorem and ignoring the second-order and other higher-order terms, we obtain the linearized form of Eq. (3.28) and Eq. (3.29) as:

$$dx_a + J_0 = b_{11}\,dX_0 + b_{12}\,dY_0 + b_{13}\,dZ_0 + b_{14}\,d\Omega + b_{15}\,d\Phi + b_{16}\,d\Theta \qquad (3.30)$$

$$dz_a + K_0 = b_{21}\,dX_0 + b_{22}\,dY_0 + b_{23}\,dZ_0 + b_{24}\,d\Omega + b_{25}\,d\Phi + b_{26}\,d\Theta \qquad (3.31)$$

Figure 3.6: Photogrammetric Target: (a) the Real Target, (b) as Seen Through the Vision Program

where the coefficients $b_{ij}$ are the partial derivatives from Taylor's theorem. Eqs. (3.30) and (3.31) are solved iteratively until the correction $(dX_0, dY_0, dZ_0, d\Omega, d\Phi, d\Theta)$ to the initial guess of $X_0, Y_0, Z_0, \Omega, \Phi, \Theta$ is less than the prescribed tolerance [92]. Once the position of the target in the camera coordinate system is determined, it must be transformed into the work space by Eq. (3.24) to be used by the controller.
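The iterative least-squares correction can be sketched as a generic Gauss-Newton loop, shown below; the pose parameterization, rotation order, focal length, and the fourth cue location are assumptions made for illustration, and numerical partials stand in for the analytic coefficients $b_{ij}$.

```python
import numpy as np

def project(pose, cues, f=0.006):
    """Collinearity projection of 3D cues for a camera looking along +Y,
    cf. Eqs. (3.26)-(3.29). pose = (X0, Y0, Z0, pitch, roll, yaw)."""
    X0, Y0, Z0, om, ph, th = pose
    cx, cy, cz = np.cos([om, ph, th])
    sx, sy, sz = np.sin([om, ph, th])
    # Rotation applied as pitch(X) -> roll(Y) -> yaw(Z); an assumed order.
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    P = cues @ (Rz @ Ry @ Rx).T + np.array([X0, Y0, Z0])
    return np.column_stack((-f * P[:, 0] / P[:, 1],
                            -f * P[:, 2] / P[:, 1])).ravel()

def solve_pose(obs, cues, pose0, iters=20, tol=1e-10, eps=1e-7):
    """Gauss-Newton refinement: correct an initial pose guess until the
    least-squares correction term becomes insignificant."""
    pose = np.asarray(pose0, float)
    for _ in range(iters):
        r = obs - project(pose, cues)           # residual in the image plane
        J = np.zeros((r.size, 6))
        for j in range(6):                      # numerical partials b_ij
            dp = np.zeros(6); dp[j] = eps
            J[:, j] = (project(pose + dp, cues) - project(pose, cues)) / eps
        d, *_ = np.linalg.lstsq(J, r, rcond=None)
        pose += d
        if np.linalg.norm(d) < tol:             # correction is insignificant
            break
    return pose

# Four cues as on the thesis target; the fourth corner is an assumption.
cues = np.array([[0.003, 0, 0.109], [0.003, 0, 0.016],
                 [0.079, 0, 0.016], [0.079, 0, 0.109]])
true = np.array([0.02, 0.5, -0.01, 0.05, -0.03, 0.08])
obs = project(true, cues)
est = solve_pose(obs, cues, pose0=np.array([0, 0.4, 0, 0, 0, 0]))
```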

3.4 Vision-Based Control Strategy

As outlined in Section 3.1, the control strategy discussed in the current study covers two phases: the tracking-and-approaching phase and the post-grasping stabilization phase. Accordingly, two separate controllers are designed for the two phases. Each controller may contain several sub-controllers to execute the subtasks within each phase. Figure 3.7 shows the control strategy for the given problem. The vision-based control system can be separated into two loops, the internal and external loops, to simplify the development of the control law. The external control loop determines the control task based on a dynamic look-and-move visual servo control law. The internal control loop solves the robotic joint control problem and has an open control architecture. As mentioned before, we will adopt the impedance control theory to control the robotic manipulator to achieve the control tasks determined by the external control loop.

3.4.1 Position-Based Visual Servoing

Figure 3.7: Robotic Manipulator Control Flowchart

In the position-based visual servoing, the feedback state errors are the relative position error, $\{e\}$, and the orientation error, $\{\phi\}$, of the target with respect to the end-effector in the work space; they are evaluated directly by the vision system, such that,

$$\{e\} = \{X_t\} - \{X_e\} \qquad (3.32)$$

$$\{\phi\} = \{\Phi_t\} - \{\Phi_e\} \qquad (3.33)$$

$$\Delta\{\theta\} = J^{-1}\{e\} \qquad (3.34)$$

The control objective of position-based visual servoing is to ensure that the relative position and orientation error vectors in Eqs. (3.32) and (3.33) approach zero along a given trajectory. Obviously, the position error is large if the target is initially far away from the end-effector. Accordingly, the calculated error vector of the joints is not accurate in the joint space. However, this inaccuracy will be reduced as the end-effector approaches the target. Therefore, it will not affect the control objective if one chooses to design a control law in the joint space based on the feedback state error calculated by Eq. (3.34). Assuming there is no obstacle between the end-effector and the target, the simplest trajectory would be the straight trajectory from the end-effector to the target. However, this trajectory may not be the most effective one because the actual control is applied to the joint angles through the joint motors. Therefore, the simplest trajectory is a shortest-distance path based on the state error calculated by Eq. (3.34) in the joint space, with a maximum approaching-velocity limit applied to the end-effector in the work space. This will result in a trajectory in the work space slightly different from the straight path between the end-effector and the target. The approaching velocity of the end-effector can be calculated using the forward kinematics of the robotic manipulator expressed in Eq. (3.2) once the joint velocities are set by the control law. In the current study, the maximum velocity limit is assumed to be 1.0 cm/s. In reality, it is a design parameter that is determined by mission objectives. In order to ensure that the target remains within the field of view of the camera, as well as to provide the best chance for a solid capture, the camera's pitch and yaw angles should be controlled so that the camera is always pointing towards the target. Based on the above control strategy for the position-based visual servoing, the dynamic look-and-move visual control procedure can be described as follows:

1 Input the feedback state errors ($e$ and $\phi$) of the end-effector in the work space at time $t$ from the vision system;

2 Evaluate the Jacobian matrix $J(\theta)$ and its inverse $J^{-1}(\theta)$;

3 Estimate the state error of the joints in the joint space, $\Delta\{\theta\} \approx J^{-1}e$;

4 Modify the state error of joints by considering

(3.35)

and

$$\Delta\{\theta\} = \Delta\{\theta\} + T\{\phi\} \qquad (3.36)$$

where $T$ is the transformation matrix.

5 Determine the desired joint velocities by

$$\{\dot\theta\} = \begin{cases} \Delta\{\theta\}/\Delta t \\ J^{-1}(\theta)\{\dot{X}_{e,max}\} \\ \{\dot\theta\}_0 \end{cases} \qquad (3.37)$$

Here $\Delta t$ is the time remaining to capture at that moment and $\{\dot\theta\}_0$ is the pre-determined joint velocity vector.

6 Execute the control command as outlined in the following section;

7 If $e = 0$ and $\phi = 0$, stop and enter the next control phase, capture. Otherwise, advance the time from $t$ to $t + \Delta t$ and go to Step 1.
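The procedure above maps naturally onto a control-loop skeleton such as the following sketch, in which the vision, Jacobian, and motor-command functions are caller-supplied stubs rather than the thesis implementation.

```python
import numpy as np

def look_and_move(vision, jacobian, send_joint_rates, dt=0.05,
                  v_max=0.01, tol=1e-4):
    """Skeleton of the dynamic look-and-move procedure (Steps 1-7).
    `vision`, `jacobian`, and `send_joint_rates` are caller-supplied stubs."""
    while True:
        e, phi = vision()                          # Step 1: state errors from vision
        if np.linalg.norm(e) < tol and np.linalg.norm(phi) < tol:
            return "capture"                       # Step 7: enter the capture phase
        J = jacobian()                             # Step 2: Jacobian and its inverse
        J_inv = np.linalg.pinv(J)
        dtheta = J_inv @ e                         # Step 3: joint-space error
        theta_dot = dtheta / dt                    # Step 5: desired joint velocities
        # Enforce the work-space approach-velocity limit (1.0 cm/s in the thesis).
        v = np.linalg.norm(J @ theta_dot)
        if v > v_max:
            theta_dot *= v_max / v
        send_joint_rates(theta_dot)                # Step 6: execute the command
```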

3.4.2 Vision-Based Impedance Control

The impedance control theory has been widely adopted for robotic manipulators operating in unknown environments. The aim of an impedance controller is to achieve the position and/or force control in a desired dynamic manner through a properly defined mechanical impedance of the robotic manipulator in the work space, such that,

$$M_0\{\ddot e\} + D_0\{\dot e\} + K_0\{e\} = \{F_{ext}\} \qquad (3.38)$$

where $M_0$, $D_0$, and $K_0$ are positive definite matrices representing the inertia, damping, and stiffness of the constraints experienced by the robot, $e$ is the relative position error defined in Eq. (3.32), and $\{F_{ext}\}$ is the external force acting on the end-effector defined in Eq. (3.16), respectively. The selection of the desired inertia, damping, and stiffness matrices of the constraints controls the dynamic response of the robotic manipulator to the external force as well as to position tracking errors. The constraints may be real, such as when the end-effector is in contact with work pieces, walls, and other physical limits. In this case, the parameters of the inertia, damping, and stiffness matrices depend on the physical properties of these constraints. In the case of tracking/following a desired trajectory or approaching a desired target, the constraints will be virtual, and the selection of the desired inertia, damping, and stiffness matrices depends on the control objectives. To design a proper impedance controller, let us take into account the kinematic and dynamic characteristics of the robotic manipulator. As an idealized

example, if the desired motion should be smooth in the work space, it is reasonable to assume the acceleration of the desired motion to be zero, such that,

$$\{\ddot e\} = \{\ddot X_e\} - \{\ddot X_d\} = \{\ddot X_e\} \qquad (3.39)$$

Substituting Eq. (3.39) into Eq. (3.38) leads to the joint acceleration due to the external force and the desired impedance.

$$\{\ddot\theta\} = J^{-1}(\theta)\, M_0^{-1}\left[\{F_{ext}\} - D_0\{\dot e\} - K_0\{e\}\right] - J^{-1}(\theta)\,\dot J(\theta)\{\dot\theta\} \qquad (3.40)$$

Correspondingly, the torque required to control the robotic manipulator can be calculated from Eq. (3.16), such that,

$$\{\tau\} = I(\theta)\left\{J^{-1}(\theta)\, M_0^{-1}\left[\{F_{ext}\} - D_0\{\dot e\} - K_0\{e\}\right] - J^{-1}(\theta)\,\dot J(\theta)\{\dot\theta\}\right\} + C(\theta,\dot\theta)\{\dot\theta\} + N(\theta) - J^{T}(\theta)\{F_{ext}\} \qquad (3.41)$$

Equations (3.40) and (3.41) are the desired joint acceleration and torque in the joint space in the presence of the external loads exerted on the end-effector, the position tracking errors input from the vision system, and the desired impedance. The rate of the position tracking errors can be calculated from the time history of the position tracking errors. In the tracking and approaching phase of the robotic manipulator, there is no external force exerted on the end-effector, and therefore the force term $\{F_{ext}\}$ is set to zero in Eq. (3.40). Accordingly, the tracking error $e$ and its rate $\{\dot e\}$ will be set to zero after the end-effector has grasped the target, and only the force term $\{F_{ext}\}$ is fed back into the controller. The proposed robotic manipulator employs stepper motors at the joints. Therefore, we will use Eq. (3.40) to control these stepper motors directly. The selection of the matrices that represent the inertia, damping, and stiffness of the constraints determines the dynamic behaviour of the robotic manipulator. It will be carried out by calculating the physical properties of the system. The behaviour can then be adjusted through the following method. The desired dynamic equation is the following:

$$[M_d]\{\ddot e\} + [C_d]\{\dot e\} = 0 \qquad (3.42)$$

Scaling the values of $M_d$ and $C_d$ will modify the behaviour of the system, where $M_d$ is synonymous with inertia and $C_d$ with damping. Notice the absence of the gravitational term, which can be compensated for in the controller

rather than in the desired dynamics equation. For the desired results, the values are scaled linearly rather than dimensionally, in order to homogenize the behaviour of the system in all axes. The tuning equation now becomes,

$$K_m[M_d]\{\ddot e\} + K_c[C_d]\{\dot e\} + [G] = 0 \qquad (3.43)$$

where $K_m$ and $K_c$ are the respective scaling factors.
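A minimal numerical sketch of the impedance law of Eq. (3.40) is given below; the toy Jacobian and the impedance matrix values are illustrative choices, not the tuned values used on the prototype.

```python
import numpy as np

def impedance_joint_accel(J, J_dot, theta_dot, e, e_dot, F_ext,
                          M0, D0, K0):
    """Joint acceleration command from the impedance law, cf. Eq. (3.40):
    theta_dd = J^-1 M0^-1 (F_ext - D0 e_dot - K0 e) - J^-1 J_dot theta_dot."""
    J_inv = np.linalg.pinv(J)
    wrench = F_ext - D0 @ e_dot - K0 @ e          # desired work-space response
    return J_inv @ np.linalg.solve(M0, wrench) - J_inv @ J_dot @ theta_dot

# Toy 3-DOF numbers; the impedance matrices below are illustrative choices.
n = 3
M0 = 2.0 * np.eye(n)                              # desired inertia
D0 = 8.0 * np.eye(n)                              # desired damping
K0 = 50.0 * np.eye(n)                             # desired stiffness
J = np.eye(n) + 0.1 * np.random.default_rng(0).random((n, n))
theta_dd = impedance_joint_accel(J, np.zeros((n, n)), np.zeros(n),
                                 e=np.array([0.01, -0.02, 0.0]),
                                 e_dot=np.zeros(n),
                                 F_ext=np.zeros(n),   # zero during free approach
                                 M0=M0, D0=D0, K0=K0)
```

During the approach phase, F_ext is zero and the controller shapes the tracking response; after grasp, the error terms are zeroed and only the force term drives the command, mirroring the switching described above.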

3.4.3 Visual Servoing Enhanced by Kalman Filter

The position-based visual servoing requires calculating the error between the desired and actual positions of the camera in the 3D work space in real time. The pose estimation is carried out by photogrammetry with a least-squares-based solution. However, this approach is not robust and demonstrates high susceptibility to noise in the image coordinates [89], leading to poor individual pose estimates. This is because the dynamics of the target motion with respect to the camera cannot be accurately predicted in a dynamic environment. As a result, the position-based visual servoing will lead to excessive or uncontrollable oscillations of the robotic manipulator and, thus, requires temporal filtering. In this thesis, we adopted the Kalman filter to address the robustness issue in visual servoing. Due to the non-linearity of photogrammetry, a Kalman filter (KF) is used to filter the image position and orientation in order to compute the current information as well as the predicted information in the time domain when the capture will take place. The following outlines the basic KF algorithm customized for the vision system. Rudolf E. Kalman published his famous paper in 1960, describing a recursive solution to the discrete linear filtering problem. Since that time, the filter has been adapted and advanced, becoming extensively used in autonomous navigation. The Kalman filter is a recursive data processing algorithm generating an optimal estimate of desired quantities given a set of measurements. Optimal is emphasized because it denotes two requirements: a linear system and white Gaussian errors.

3.4.3.1 Discrete Kalman Filter

The ongoing Kalman filter is a cycle that predicts and corrects in an iterative loop.

$$\text{Prediction (time update)} \;\rightleftharpoons\; \text{Correction (measurement update)} \qquad (3.44)$$

The filter functions by iterating through two steps: prediction and update. Precise initial values for both the state and the error covariance are required in order to reduce the settling time of the filter.

Table 3.3: Kalman Filter State Space

Symbol | KF Symbol | Variable
X | x_1 | x position
Ẋ | x_2 | x velocity
Ẍ | x_3 | x acceleration
Y | x_4 | y position
Ẏ | x_5 | y velocity
Ÿ | x_6 | y acceleration
Z | x_7 | z position
Ż | x_8 | z velocity
Z̈ | x_9 | z acceleration
Θ | x_10 | roll angle
Θ̇ | x_11 | roll rate
Θ̈ | x_12 | roll acceleration
Φ | x_13 | pitch angle
Φ̇ | x_14 | pitch rate
Φ̈ | x_15 | pitch acceleration
Ω | x_16 | yaw angle
Ω̇ | x_17 | yaw rate
Ω̈ | x_18 | yaw acceleration

Once initialized, the system continuously iterates between the two steps mentioned above. During the prediction step, the KF computes the new state $x_{k+1}$ based on the previous state $x_k$ as well as the predictive model $F_k$. For our purpose, the predictive model is a PVA model, which stands for Position, Velocity, Acceleration. Additionally, during this step, the KF updates the error covariance matrix $P_{k+1}$ based on the predictive model and a tuning parameter $Q_k$, known as the system model error.

During the update step, the KF obtains the measurements $(y_1, y_2, \ldots, y_k)$ and updates the measurement output model $H_k$. The Kalman gain $K_k$ is then calculated using the measurements, the measurement output model, and the measurement covariance matrix $R_k$, and is employed to update the current state along with the sensor measurements. Lastly, the new error covariance matrix $P_{k+1}$ is calculated. The filter can be seen to function by varying the Kalman gain $K_k$ in order to either "trust" the actual measurements or the prediction. In another manner of speaking, as the measurement error covariance $R_k$ approaches zero, the actual measurement, $y_k$, is "trusted" more and more. On the other hand, as the system

Table 3.4: Kalman Filter Cycle Summary

Initialization:
$\hat x_0 = E[x_0]$, $P_0 = E[(x_0 - \hat x_0)(x_0 - \hat x_0)^T]$

State propagation:
$\hat x_{k+1}^- = F_k \hat x_k^+$, $P_{k+1}^- = F_k P_k^+ F_k^T + Q_k$

Observation propagation:
$\hat y_{k+1}^- = H_{k+1} \hat x_{k+1}^-$, $P_{k+1}^{yy} = H_{k+1} P_{k+1}^- H_{k+1}^T + R_{k+1}$, $P_{k+1}^{xy} = P_{k+1}^- H_{k+1}^T$

Update:
$K_{k+1} = P_{k+1}^{xy} (P_{k+1}^{yy})^{-1}$, $\hat x_{k+1}^+ = \hat x_{k+1}^- + K_{k+1}(y_{k+1} - \hat y_{k+1}^-)$, $P_{k+1}^+ = P_{k+1}^- - K_{k+1} P_{k+1}^{yy} K_{k+1}^T$

where $(-)$ denotes a propagated value and $(+)$ denotes an updated value; $Q_k$ = system model error matrix, $R_k$ = measurement error matrix.

model error, $Q_k$, is reduced, the measurements are trusted less and less, while the prediction, $H_k \hat x_k$, is trusted more and more. This calculation can be seen in Fig. 3.8 under the right-hand measurement update block, point 2. For more details on the development of the Kalman filter, see Appendix C.

Figure 3.8: Kalman Filter Cycle

As previously mentioned, we employ a PVA KF in order to track and predict the motion of the target through the universal frame of reference and to predict the intercept position of the robotic manipulator. The inputs to the system, i.e. the measurements, are the photogrammetry outputs, converted to the universal frame by employing the inverse kinematics of the robotic manipulator from the base to the camera frame. The main idea behind a PVA filter is Newtonian motion. Therefore, the motion of the target is modelled as follows.

$$X^{+} = X + \dot X \cdot dt + \frac{1}{2}\ddot X \cdot dt^2 \qquad (3.45)$$

The error is modelled as the next progressive term of the Taylor expansion, which is representative of the jerk of the system.

$$\epsilon = \frac{1}{6}\,\dddot X \cdot dt^3 \qquad (3.46)$$

As the model represents a specific scenario in our case, that of a drifting non-cooperative satellite in space, this simplified assumption for the motion is in fact quite accurate. Our state matrix tracks the rotational and translational degrees of freedom as well as their velocities and accelerations. This totals 18 states, which can be seen in Table 3.3. For full details on the construction and tuning of the KF, see Section 4.4.
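The cycle of Table 3.4 specialized to the PVA model of Eq. (3.45) can be sketched for a single axis as follows; the process and measurement noise levels are tuning placeholders, and the thesis filter stacks 18 such states rather than the 3 shown.

```python
import numpy as np

def pva_filter(measurements, dt=0.05, q=1e-3, r=1e-4):
    """Minimal one-axis Position-Velocity-Acceleration Kalman filter,
    following the propagate/update cycle of Table 3.4."""
    F = np.array([[1, dt, 0.5 * dt**2],     # Newtonian motion model, Eq. (3.45)
                  [0, 1,  dt],
                  [0, 0,  1]])
    H = np.array([[1.0, 0.0, 0.0]])         # only position is measured
    Q = q * np.eye(3)                       # system model error (jerk-driven)
    R = np.array([[r]])                     # measurement error covariance
    x = np.zeros(3)                         # initial state estimate
    P = np.eye(3)                           # initial error covariance
    out = []
    for y in measurements:
        x = F @ x                           # state propagation
        P = F @ P @ F.T + Q                 # covariance propagation
        S = H @ P @ H.T + R                 # innovation covariance (P_yy)
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (np.array([y]) - H @ x)
        P = P - K @ S @ K.T                 # a posteriori covariance
        out.append(x.copy())
    return np.asarray(out)

# Track a drifting target: constant-velocity truth plus measurement noise.
rng = np.random.default_rng(1)
truth = 0.002 * np.arange(200) * 0.05
states = pva_filter(truth + 0.005 * rng.standard_normal(200))
```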

3.5 Speed and Acceleration Control of Stepper Motors

The proposed functional prototype of the robotic manipulator built in the thesis study employs stepper motors at the major joints due to their high holding torque. However, the use of stepper motors brings a series of complications in terms of motion control strategies. Stepper motors move a finite and defined step each time they are triggered. This discrete actuation results in non-smooth rotation of the joints, although its impact can be reduced by using sub-steps. Thus, special control is required to ensure smooth motion.

The proposed controller employs a standard delay of the on-board microcontroller as a basis for the highest speed. The minimum delay between iterations of the software is the highest possible speed. Adjusting the delay while the motors are in operation provides a change in the rate of the actuations, which equates to a change in acceleration. The conceptual design of the speed and acceleration controller is a bucket-fill algorithm, and the control flowchart can be seen in Fig. 3.9. Although the micro-controller is interrupt based, the time variations between iterations are minor in comparison to the time delay of the actuation and allow for smooth operation of the stepper motors. The speed control portion is a simple delay-counter set-up that employs the clock on the micro-controller in order to measure the iteration time. Variations in speed and acceleration are only applied after an actuation occurs, allowing synchronization with the changes. In order to effect acceleration control, the system first must establish a behavioural profile to track.

$$\ddot x_{ref} = \ddot x_{des} + K_p(x_{des} - x_{act}) + K_d(\dot x_{des} - \dot x_{act}) \qquad (3.47)$$

where $[x_{act}, \dot x_{act}, \ddot x_{act}]^T$ and $[x_{des}, \dot x_{des}, \ddot x_{des}]^T$ are the vectors of the actual and desired positions, velocities, and accelerations of the end-effector, respectively.

Figure 3.9: Speed and Acceleration Control Diagram

The position error of the end-effector can be formulated as:

$$e(t) = x_{des}(t) - x_{act}(t) \qquad (3.48)$$

To show the influence of the input acceleration on the tracking error, Eq. (3.48) is differentiated to obtain:

$$\dot e(t) = \dot x_{des}(t) - \dot x_{act}(t) \qquad (3.49)$$

$$\ddot e(t) = \ddot x_{des}(t) - \ddot x_{act}(t) \qquad (3.50)$$

Then, the system equation can be written as:

$$\ddot e + K_d \dot e + K_p e = 0 \qquad (3.51)$$

which is a standard form of the second-order characteristic polynomial. The desired performance in each component will be gradually achieved by selecting appropriate $K_p$ and $K_d$.
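The delay-based ("bucket fill") idea can be sketched as follows; the multiplicative ramp rule and the numeric delays are assumptions for illustration rather than the micro-controller implementation.

```python
import numpy as np

def stepper_schedule(delay_start, delay_min, ramp, n_steps):
    """Sketch of delay-based stepper control: the inter-step delay sets the
    speed (shortest delay = highest speed); the delay is only adjusted after
    a step fires, synchronizing acceleration changes with actuations."""
    times = []
    t, delay = 0.0, delay_start
    for _ in range(n_steps):
        t += delay                          # wait one delay period, then step
        times.append(t)
        # Acceleration: shrink the delay by a fixed ratio after each step,
        # clamped at the micro-controller's minimum loop delay.
        delay = max(delay_min, delay * ramp)
    return np.asarray(times)

# Ramp from 20 ms/step toward a 2 ms/step ceiling speed (placeholder values).
step_times = stepper_schedule(delay_start=0.020, delay_min=0.002,
                              ramp=0.95, n_steps=200)
rates = 1.0 / np.diff(step_times)           # instantaneous step rate (steps/s)
```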

3.6 Flexible Joint Control using Input Mode Shaping

The joint $\theta_1$ of the proposed functional prototype of the robotic manipulator built in the thesis study employs a flexible joint to address the misalignment between the stepper motor and the robot. As a result, it creates residual vibration of the robotic arm after actuation, which takes a very long period to settle down [74]. For our application, the system has only one natural frequency, although it varies as the robotic manipulator extends its links. The natural frequency and damping ratio can easily be determined by the vision system [75]. Thus, we will adopt input shaping control to suppress the residual vibration. Assume the system has a natural frequency, $\omega$, and damping ratio, $\zeta$; then the residual vibrational response from a sequence of impulses of joint actuations can be described by:

$$V(\omega, \zeta) = \sqrt{C(\omega,\zeta)^2 + S(\omega,\zeta)^2} \qquad (3.52)$$

where,

$$C(\omega, \zeta) = \sum_{i=1}^{n} A_i e^{\zeta\omega t_i} \cos(\omega_d t_i) \qquad (3.53)$$

$$S(\omega, \zeta) = \sum_{i=1}^{n} A_i e^{\zeta\omega t_i} \sin(\omega_d t_i) \qquad (3.54)$$

where $A_i$ and $t_i$ are the amplitudes and time locations of the impulses, $n$ is the number of impulses in the impulse sequence, and $\omega_d = \omega\sqrt{1 - \zeta^2}$. By setting Eq. (3.52) equal to zero, we can solve for the impulse amplitudes and time locations that would lead to zero residual vibration. However, we must place a few more restrictions on the impulses, or the solution will converge to zero-valued or infinitely valued impulses [76]. To avoid the trivial solution of all zero-valued impulses and to obtain a normalized result, we require the impulses to sum to one:

$$\sum_{i=1}^{n} A_i = 1, \qquad A_i > 0 \qquad (3.55)$$

The selection of the impulse number, $n$, depends on the control requirement. A larger number of impulses will result in better vibration suppression but a longer delay in the system response. Therefore, we will use two impulses in the input shaping control to achieve a balance between performance and system response. The problem we want to solve can now be stated explicitly: find a sequence of impulses that makes Eq. (3.52) equal to zero, while also satisfying both aspects of Eq. (3.55). For a two-impulse sequence, the problem has four unknowns - the two

impulse amplitudes ($A_1$, $A_2$) and the two impulse time locations ($t_1$, $t_2$). Without loss of generality, we can set the time location of the first impulse equal to zero, $t_1 = 0$. The problem is now reduced to finding three unknowns ($A_1$, $A_2$, $t_2$). In order for Eq. (3.52) to equal zero, the expressions in Eqs. (3.53) and (3.54) must both equal zero independently because they are squared in Eq. (3.52). Therefore, the impulses must satisfy:

$$0 = A_1 + A_2 e^{\zeta\omega t_2}\cos(\omega_d t_2) \qquad (3.56)$$

$$0 = A_2 e^{\zeta\omega t_2}\sin(\omega_d t_2) \qquad (3.57)$$

Equation (3.57) can be satisfied in a non-trivial manner when the sine term equals zero. This occurs when:

$$\omega_d t_2 = n\pi \;\Rightarrow\; t_2 = \frac{n\pi}{\omega_d} = \frac{n T_d}{2} \qquad \text{for } n = 1, 2, \ldots \qquad (3.58)$$

where $T_d$ is the damped period of vibration. This result tells us that there is an infinite number of possible values for the location of the second impulse; they occur at multiples of the half period of vibration. To cancel the vibration in the shortest amount of time, choose the smallest value for $t_2$:

$$t_2 = \frac{T_d}{2} \qquad (3.59)$$

For this simple case, the amplitude constraint given in Eq. (3.55) reduces to:

$$A_1 + A_2 = 1 \qquad (3.60)$$

Using the expression for the damped natural frequency and substituting Eqs. (3.59) and (3.60) into Eq. (3.56) gives:

$$0 = A_1 - (1 - A_1)\, e^{\zeta\pi/\sqrt{1-\zeta^2}} \qquad (3.61)$$

Rearranging Eq. (3.61) and solving for $A_1$ gives:

$$A_1 = \frac{1}{1 + e^{-\zeta\pi/\sqrt{1-\zeta^2}}} \qquad (3.62)$$

Defining $K = \exp\left(\dfrac{-\zeta\pi}{\sqrt{1-\zeta^2}}\right)$, the sequence of two impulses that leads to zero residual vibration can now be summarized as:

1 K ’ Ai a 2 1 + K 1 + K *2 0 0.5Td _

3.7 Pseudo-Force Control

In order to effect proper control while in contact with the target, force feedback becomes necessary. However, three-dimensional force sensors are costly and bulky, two qualities that count heavily against their selection for this particular application. Therefore, a pseudo-force approach is suggested. The pseudo-force sensing employs detailed knowledge of the properties of both the robotic manipulator and the target, and uses the accuracy of the near-target vision system in order to visually calculate the force being applied to the target. In concept, it works backwards from traditional stress/strain problems: the deflection is given and the force responsible for it is calculated. The contact force is established by examining the reference trajectory and the desired trajectory of the vision system and attributing the misalignment between the grasper, the relaxed target grasping point, and the vision system to a force.

$$\{F\} = [M_d](\ddot X_{des} - \ddot X_{act}) + [B_d](\dot X_{des} - \dot X_{act}) + [K_d](X_{des} - X_{act}) \qquad (3.64)$$

where $M_d$, $B_d$, and $K_d$ are the combined mass, damping, and stiffness matrices for the target, target grasper, and gripper system. The force calculated from this term is then converted into a robotic manipulator offset through a simple potential energy model in order to zero out its term in the following manner.

$$\{\Delta X\} = [K_{sys}]^{-1}\{F\} \qquad (3.65)$$

where $[K_{sys}]$ is a pseudo-spring constant which represents the energy storage potential of the gripper bar and robotic manipulator system.

3.8 Digital Dynamic Damping

A digital dynamic damping algorithm was implemented in order to compare, contrast, and complement the physical viscous rotary damper, which is employed to compensate for the flexible shaft coupling. The digital damper functions by creating a variably sized sliding-average array which contains the desired positions of the torso stepper motor. Due to the high iteration rate of the computer algorithm, the inherent lag of the system is reduced in severity. The algorithm functions as follows:

$$X_\phi = \frac{\sum_{i=0}^{n} X_i}{n} \qquad (3.66)$$

where $X_\phi$ is the digitally damped value of the torso stepper, $\sum_{i=0}^{n} X_i$ represents a one-dimensional matrix of size $n$, and $n$ is an adjustable variable used to scale the level of damping. Initially the value of $n$ is quite small in order to allow for rapid adjustments in the approach phase. As the distance is reduced, the value of $n$ increases in order to stiffen the performance of the system. The adjustment of the matrix size is controlled through the neural network which controls the phases of the capture.
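A sketch of the variable-window sliding average of Eq. (3.66) follows; the window-growth rule tied to target distance is an assumption standing in for the neural network adjustment described above.

```python
from collections import deque

class DigitalDamper:
    """Variable-size sliding average of the torso stepper's desired
    position, cf. Eq. (3.66). A larger window n means heavier damping."""
    def __init__(self, n=5, n_max=200):
        self.n, self.n_max = n, n_max
        self.buf = deque(maxlen=n)

    def set_window(self, n):
        """Adjust n (e.g. grow it as the target distance shrinks); in the
        thesis, the phase-tracking neural network makes this adjustment."""
        self.n = max(1, min(int(n), self.n_max))
        self.buf = deque(self.buf, maxlen=self.n)

    def update(self, x_desired):
        """Push a new desired position and return the damped value X_phi."""
        self.buf.append(x_desired)
        return sum(self.buf) / len(self.buf)

# Usage: small window early (fast response), larger window near the target.
damper = DigitalDamper(n=5)
for distance, x in [(1.0, 0.30), (0.5, 0.31), (0.1, 0.33)]:
    damper.set_window(5 + 100 * (1.0 - distance))   # assumed growth rule
    x_damped = damper.update(x)
```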

3.9 Grasping Algorithm

The grasping algorithm is used in order to determine whether or not to close the gripper. A decision-making process is required in order to determine whether the grasping bar is within the grasp of the gripper and whether the alignment is sufficient to effect a successful capture. The algorithm functions in the following manner:

$$\Delta G = \frac{\sum_{i}^{n} M_i}{n} \qquad (3.67)$$

where $M_i$ is known as the grasping offset. The grasping offset is the metric by which the total misalignment of the system is measured. The sliding average used for the grasping algorithm is the same as that of the digital dynamic damping, except with an initializing condition. For the first 500 iterations, the matrix is seeded with readings from the vision system, which allows the value to stabilize at a non-trivial solution. Once the first 500 iterations are complete, the algorithm performs as designed.

Initially, $M_i$ was defined as a rigid value such as:

$$M_i = \sqrt{\Delta X_i^2 + \Delta Y_i^2 + \Delta Z_i^2} + \sqrt{\Delta\theta_i^2 + \Delta\Phi_i^2 + \Delta\omega_i^2} \qquad (3.68)$$

However, providing equal weight to the translational and orientation misalignments proved to be impractical and unreliable. A vertical misalignment can be corrected late into the grasping procedure due to the physical design of the system, while a horizontal misalignment will result in a collision and requires a reset of the system to correct. Therefore, in order to provide the necessary flexibility to the system, the decision-making procedure employs a neural network. The first of three layers of nodes comprises: the non-critical translational distances, the weighted critical horizontal distance, the orientations, the yaw misalignment, and the vertical alignments. The translational misalignments are separated due to the higher weight needed to ensure that the horizontal misalignment is minimized. This is critical in ensuring a successful capture. The second layer of the neural network weighs the misalignments. The third layer sums up the error and measures it against a pre-set value. If the misalignments are small enough, meaning that the grasping bar is within range of the gripper, then the system will initiate a grab. The weighing of the misalignments employs a simplified kinematic model which weighs the values of the first layer depending on the current position between the camera and the target. The weighing system is the following:

$$ M_1 = \sqrt{\Delta X^2 + \Delta Y^2 + \Delta Z^2} \qquad (3.69) $$

$$ M_2 = k\sqrt{\Delta Y^2} \qquad (3.70) $$

$$ M_3 = \sqrt{\theta^2 + \Phi^2 + \omega^2} \qquad (3.71) $$

$$ M_4 = Y\tan(\omega) \qquad (3.72) $$

The last layer also contains a monitoring system that iterates for a maximum of 100 cycles. The system monitors for a large change in alignment and will release the target should this occur. This ensures that if, once the grab is initiated, the target is missed, the system resets and tries again.
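The layered decision can be illustrated with fixed weights standing in for the trained neural-network layers described above; the weights, threshold, and names below are assumptions for the sketch, not values from the thesis.

#include <cmath>

struct Misalignment {
    double dX, dY, dZ;          // translational offsets
    double theta, phi, omega;   // orientation offsets (yaw = omega)
};

// Evaluate Eqs. (3.69)-(3.72) and test the weighted sum against a bound.
bool withinGrasp(const Misalignment& m, double rangeY) {
    double M1 = std::sqrt(m.dX * m.dX + m.dY * m.dY + m.dZ * m.dZ);  // (3.69)
    double M2 = std::sqrt(m.dY * m.dY);                // (3.70), critical horizontal
    double M3 = std::sqrt(m.theta * m.theta + m.phi * m.phi +
                          m.omega * m.omega);          // (3.71)
    double M4 = rangeY * std::tan(m.omega);            // (3.72)

    // Stand-in weights for the second network layer; the horizontal term
    // carries the largest weight, as the text requires (k folded into w2).
    const double w1 = 1.0, w2 = 4.0, w3 = 0.5, w4 = 2.0;
    const double bound = 0.02;  // illustrative preset value, in metres
    return w1 * M1 + w2 * M2 + w3 * M3 + w4 * std::fabs(M4) < bound;
}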

4 Computer Validation and Controller Tuning

Summary: Simulation is a large part of verifying and tuning controllers. This chapter covers the simulation of the robotic manipulator: we simulated the possible behaviours and results of the differing control strategies and methods. The creation of the simulation environment first involved the design of a simulated robotic manipulator that takes input commands similar to those of the real robotic manipulator and behaves similarly in reaction and force. The second part involved the creation of a test scenario capable of evaluating the behaviour of individual control strategies and determining, in a quantitative manner, the best method to control the arm. The goal of the simulation environment was first to confirm that the methods would function as expected, but equally important was employing the simulation set-ups for tuning the control parameters.

4.1 MatLab Simulink Environment

The selected simulation environment is MatLab with the Simulink environment. MatLab allows for rapid coding of the control strategy and provides debugging, plotting, and simulation tools. MatLab is similar to a coding environment and shares a large portion of its syntax with coding languages such as C. The main reason for employing MatLab over a pure coding language is the availability of the libraries and analysis tools developed purely for this purpose. Visualization tools provide rapid insight and modelling capabilities that are useful when developing new control strategies in order to determine behaviour and tuning changes. Simulink is a visually based simulator that provides a simple and quick layout for rather complicated control systems. It allows in-depth analysis and comparison of any part of the system and can accommodate custom sections of MatLab code in the control layout. The visual basis of the toolbox creates a flowchart-like control layout, allowing digital probing and reading of sections which could be physically impossible to obtain.

4.2 Robotic Manipulator Model

The robotic manipulator model is employed in the majority of the simulations and was the first simulated model created. This first simulation model simplified the design of the arm and eliminated the torso joint, effectively transforming the model into a two-dimensional manipulator. The following calculations were used to derive the model.

4.2.1 2-Dimensional Robotic Manipulator Simulator

In simulating the behaviour of the robotic manipulator, we limited the operational space to two dimensions. The following is the development of the dynamics equation for a two-link, two-dimensional robotic manipulator. The configuration can be seen in Fig. 4.1.

Figure 4.1: 2D Robotic Manipulator Dynamics Set Up

The geometric relationship between the universal coordinate system and the end-effector coordinate system is the following.

$$ X_e = L_1\cos\theta_1 + L_2\sin(\theta_1+\theta_2) \qquad (4.1) $$

$$ Y_e = L_1\sin\theta_1 + L_2\cos(\theta_1+\theta_2) \qquad (4.2) $$

Taking the derivative, we can calculate the velocity relationship.

$$ \dot X_e = -L_1\sin\theta_1\,\dot\theta_1 + L_2\cos(\theta_1+\theta_2)(\dot\theta_1+\dot\theta_2) \qquad (4.3) $$

$$ \dot Y_e = L_1\cos\theta_1\,\dot\theta_1 - L_2\sin(\theta_1+\theta_2)(\dot\theta_1+\dot\theta_2) \qquad (4.4) $$

Written in a more compact matrix form,

$$ \begin{Bmatrix} \dot X_e \\ \dot Y_e \end{Bmatrix} = [J(\theta_1,\theta_2)]\begin{Bmatrix} \dot\theta_1 \\ \dot\theta_2 \end{Bmatrix} \qquad (4.5) $$

where

$$ [J(\theta_1,\theta_2)] = \begin{bmatrix} -L_1\sin\theta_1 + L_2\cos(\theta_1+\theta_2) & L_2\cos(\theta_1+\theta_2) \\ L_1\cos\theta_1 - L_2\sin(\theta_1+\theta_2) & -L_2\sin(\theta_1+\theta_2) \end{bmatrix} \qquad (4.6) $$

Taking the derivative of Eq. (4.3) and Eq. (4.4), we get the following acceleration relationships.

$$ \ddot X_e = -L_1\sin\theta_1\,\ddot\theta_1 - L_1\cos\theta_1\,\dot\theta_1^2 + L_2\cos(\theta_1+\theta_2)(\ddot\theta_1+\ddot\theta_2) - L_2\sin(\theta_1+\theta_2)(\dot\theta_1+\dot\theta_2)^2 \qquad (4.7) $$

$$ \ddot Y_e = L_1\cos\theta_1\,\ddot\theta_1 - L_1\sin\theta_1\,\dot\theta_1^2 - L_2\sin(\theta_1+\theta_2)(\ddot\theta_1+\ddot\theta_2) - L_2\cos(\theta_1+\theta_2)(\dot\theta_1+\dot\theta_2)^2 \qquad (4.8) $$

In matrix format,

$$ \begin{Bmatrix} \ddot X_e \\ \ddot Y_e \end{Bmatrix} = [J]\begin{Bmatrix} \ddot\theta_1 \\ \ddot\theta_2 \end{Bmatrix} + [\dot J]\begin{Bmatrix} \dot\theta_1 \\ \dot\theta_2 \end{Bmatrix} \qquad (4.9) $$
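Before moving to the dynamics, the kinematic relations above lend themselves to a direct implementation. The following is a minimal C++ sketch of Eqs. (4.1), (4.2), and (4.6) under the sign conventions adopted above; it is illustrative, not the thesis simulation code (which is built in MatLab/Simulink).

#include <cmath>

struct Planar2R {
    double L1, L2;  // link lengths, in metres

    // Forward kinematics, Eqs. (4.1) and (4.2).
    void forward(double th1, double th2, double& xe, double& ye) const {
        xe = L1 * std::cos(th1) + L2 * std::sin(th1 + th2);
        ye = L1 * std::sin(th1) + L2 * std::cos(th1 + th2);
    }

    // Jacobian of Eq. (4.6): maps joint rates to end-effector velocity.
    void jacobian(double th1, double th2, double J[2][2]) const {
        double c12 = std::cos(th1 + th2), s12 = std::sin(th1 + th2);
        J[0][0] = -L1 * std::sin(th1) + L2 * c12;  J[0][1] =  L2 * c12;
        J[1][0] =  L1 * std::cos(th1) - L2 * s12;  J[1][1] = -L2 * s12;
    }
};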

The Lagrangian function, from which we can derive the dynamic behaviour of the system, is the following.

$$ L = T - U = T_1 + T_2 - U \qquad (4.10) $$

where

$$ U = \frac{1}{2}m_1 g L_1\sin\theta_1 + m_2 g\left[L_1\sin\theta_1 - \frac{L_2}{2}\cos(\theta_1+\theta_2)\right] \qquad (4.11) $$

$$ T_1 = \frac{1}{6}m_1 L_1^2\,\dot\theta_1^2 \qquad (4.12) $$

$$ T_2 = \frac{1}{2}m_2 v_{c2}^2 + \frac{1}{2}I_{c2}(\dot\theta_1+\dot\theta_2)^2 \qquad (4.13) $$

$$ T_2 = \frac{1}{2}m_2 L_1^2\dot\theta_1^2 + \frac{m_2 L_2^2}{6}(\dot\theta_1+\dot\theta_2)^2 + \frac{m_2 L_1 L_2}{2}\sin\theta_2\,\dot\theta_1(\dot\theta_1+\dot\theta_2) \qquad (4.14) $$

Lagrangian dynamics is defined as

$$ \frac{d}{dt}\left(\frac{\partial L}{\partial\dot\theta_1}\right) - \frac{\partial L}{\partial\theta_1} = 0 \qquad (4.15) $$

$$ \frac{d}{dt}\left(\frac{\partial L}{\partial\dot\theta_2}\right) - \frac{\partial L}{\partial\theta_2} = 0 \qquad (4.16) $$

and so, substituting all the previous equations into the Lagrangian dynamics equations, we obtain the following.

$$ \left(\frac{m_1 L_1^2}{3} + m_2 L_1^2 + \frac{m_2 L_2^2}{3} + m_2 L_1 L_2\sin\theta_2\right)\ddot\theta_1 + \left(\frac{m_2 L_2^2}{3} + \frac{m_2 L_1 L_2}{2}\sin\theta_2\right)\ddot\theta_2 + \frac{m_2 L_1 L_2}{2}\cos\theta_2\,\dot\theta_2(2\dot\theta_1+\dot\theta_2) + \left(\frac{m_1}{2}+m_2\right)g L_1\cos\theta_1 + \frac{m_2 L_2 g}{2}\sin(\theta_1+\theta_2) = 0 \qquad (4.17) $$

$$ \left(\frac{m_2 L_2^2}{3} + \frac{m_2 L_1 L_2}{2}\sin\theta_2\right)\ddot\theta_1 + \frac{m_2 L_2^2}{3}\ddot\theta_2 - \frac{m_2 L_1 L_2}{2}\cos\theta_2\,\dot\theta_1^2 + \frac{m_2 L_2 g}{2}\sin(\theta_1+\theta_2) = 0 \qquad (4.18) $$

Written in a compact matrix format we obtain

$$ [M]\{\ddot\theta\} + [C]\{\dot\theta\} + \{G\} = 0 \qquad (4.19) $$

where

$$ [M] = \begin{bmatrix} \dfrac{m_1 L_1^2}{3} + m_2 L_1^2 + \dfrac{m_2 L_2^2}{3} + m_2 L_1 L_2\sin\theta_2 & \dfrac{m_2 L_2^2}{3} + \dfrac{m_2 L_1 L_2}{2}\sin\theta_2 \\ \dfrac{m_2 L_2^2}{3} + \dfrac{m_2 L_1 L_2}{2}\sin\theta_2 & \dfrac{m_2 L_2^2}{3} \end{bmatrix} \qquad (4.20) $$

$$ [C] = \begin{bmatrix} m_2 L_1 L_2\cos\theta_2\,\dot\theta_2 & \dfrac{m_2 L_1 L_2}{2}\cos\theta_2\,\dot\theta_2 \\ -\dfrac{m_2 L_1 L_2}{2}\cos\theta_2\,\dot\theta_1 & 0 \end{bmatrix} \qquad (4.21) $$

$$ \{G\} = \begin{Bmatrix} \left(\dfrac{m_1}{2}+m_2\right) g L_1\cos\theta_1 + \dfrac{m_2 L_2 g}{2}\sin(\theta_1+\theta_2) \\ \dfrac{m_2 L_2 g}{2}\sin(\theta_1+\theta_2) \end{Bmatrix} \qquad (4.22) $$

This equation is employed in the impedance control model, in the robotic arm control block, in order to simulate the behaviour of the two-dimensional arm resulting from the acceleration and speed inputs.

4.3 Impedance Control Simulation

The impedance control simulation was set up in the Simulink environment and employed the robotic manipulator model derived in the previous section. Figure 4.2 is the Simulink model of the controller, with the sections annotated in order to clarify the layout. Looking closely at the robotic manipulator section, the left-hand blocks produce the actual robotic manipulator motion, while the right-hand block handles the wall contact and force feedback, resulting in an adjusted position of the arm.

Figure 4.2: Impedance Controller in Simulink Environment

The simulation aims to validate the controller design by testing it on two separate scenarios designed to expose the desired behaviours for the capture of non-cooperative targets.

4.3.1 Test Scenarios

The test scenarios used to gauge the behaviour of the impedance controller were created in the Simulink environment. They concern the robotic manipulator and how it interacts with unknown forces. In order to simulate this, the arm is given a command that will bring the end-effector into contact with a wall surface modelled as a spring-damper.

The first testing scenario requires the arm to trace along an unknown wall while maintaining a desired contact force. The wall is sloped and its stiffness varies discontinuously by factors of ten. This scenario demonstrates the ability to maintain contact forces and desired positions when faced with abrupt changes in external forces. The second scenario commands the arm to move at top velocity and collide with a wall at an unknown position with unknown properties. This scenario demonstrates the ability of the arm to react in worst-case collisions, as well as its ability to generate and track a path at its top velocity. The goal is to demonstrate that the controller can handle unknown collisions and external force discontinuities during the course of its operation, allowing it to deal with uncertainties in the operations that could lead to such events. Furthermore, the ability to adjust the gains in order to vary the behaviour of the system is demonstrated. The following section will present the results of the simulations and a brief description of the meaning of these findings. The graphs are all generated using Simulink's probe block, which contains the default settings for outputs. The simulation takes place over a predefined length of time (20 seconds in this case) and the positions are represented in meters.

Figure 4.3: Simulation Setup (wall with stiffness K; actual path vs. desired path)
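The wall in this set-up can be modelled compactly. The sketch below assumes a one-sided spring-damper contact, with force proportional to penetration and penetration rate, and places the factor-of-ten stiffness jump at Y = 0.5 m as in the tracking scenario; the numerical values are illustrative only.

// One-sided spring-damper wall: force is zero out of contact.
double wallForce(double penetration, double penetrationRate,
                 double k, double c) {
    if (penetration <= 0.0) return 0.0;
    return k * penetration + c * penetrationRate;
}

// Discontinuous stiffness profile along the wall (x10 jump at Y = 0.5 m).
double wallStiffness(double y) {
    const double kBase = 1.0e3;  // illustrative stiffness in N/m
    return (y < 0.5) ? kBase : 10.0 * kBase;
}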

4.3.1.1 Wall Impact

The first set of figures, Fig. 4.4, illustrates a collision with the wall. The arm is given a path to follow which unknowingly brings it into a collision with a wall of unknown stiffness. Figure 4.4 (a) shows the simulated path as compared to the desired path, Fig. 4.4 (b). The arm begins its motion from the neutral joint position where θ₁ = θ₂ = 0, which results in an (X, Y) position of (1, −1). Figure 4.4 (c) is a measure of the force encountered by the end-effector during the collision: it is the distance by which the tip encroaches into the surface of the wall. It is expected to be non-zero, as a contact force is desired and necessary during the controlled deceleration of the captured target, and it can be fine-tuned depending on the desired outcome. Of note in this particular result is that the system reacts fairly quickly to the external force and proceeds to an acceptable force level that is constrained by the force controller. The oscillation in the force response comes from the delay between the robotic system sensing the impact and beginning to ease the motion. The second collision test, Fig. 4.5, accelerates the end-effector at an angle to the wall and collides with a much stiffer surface (ten times stiffer than the previous example). The higher velocity and stiffness result in a higher resultant force, as per expectations. The stiffness produces a higher force than the controller is set to handle, and the controller gives preference to the position over the limited force. Figure 4.5 (c) shows the actual force in the X direction compared to the desired (reference) force. The controller attempts to adjust to the initial force; however, the system quickly becomes saturated and the force begins to climb above the reference level (note that the graph extends into the negative). The resultant force could be limited by increasing the desired stiffness of the controller; however, the overshoot would worsen and the position precision would suffer.

4.3.1.2 Wall Tracker

Figure 4.6 illustrates the arm and controller in a scenario where the end-effector traces along the wall without knowing its exact position. At Y = 0.5 the stiffness of the wall increases by a factor of 10, which results in a sudden discontinuity in force and a reduction in penetration distance. The result, seen in Fig. 4.6 (c), is a large oscillation in the force. The controller balances the desired position, the desired force, as well as the reference speed and acceleration. This results in a balanced solution instead of a particular solution suited to one aspect. The final set of figures, seen in Fig. 4.7, repeats the previous simulation with an increase in the damping and stiffness of the controller. The result is immediately apparent in a reduction of the oscillations and of the settling time for the controller. The controller similarly responds to the discontinuity but settles within the time limit of the simulation, which is a crucial feature for the system in regards to capturing a target. Settling the robotic system allows the vision system to continually track and predict the target's motion accurately, and the quick settling time means a grasp command can be initiated sooner than with the previous controller behaviour.

Figure 4.4: X & Y Motion of End-Effector (Meters): Collision with Wall. (a) Actual Path Followed in Meters; (b) Desired Path in Meters; (c) Distance Through Wall in Meters.

Figure 4.5: X & Y Motion of End-Effector (Meters): Collision with Wall at Angle. (a) Actual Path Followed in Meters; (b) Desired Path in Meters; (c) Force in X Direction (Newtons).

Figure 4.6: X & Y Motion of End-Effector (Meters): Trace Wall Path. (a) Actual Path Followed in Meters; (b) Desired Path in Meters; (c) Distance Through Wall in Meters.

Figure 4.7: X & Y Motion of End-Effector (Meters): Trace Wall Path With Increased Damping. (a) Actual Path Followed in Meters; (b) Desired Path in Meters; (c) Force in X Direction (Newtons).

4.4 Kalman Filter Simulation

In order to simulate the Kalman filter described in Section 3.4.3.1 of Chapter 3, we employed a MatLab simulation. The simulation is first seeded with a True path to be followed. The True path represents the real data with zero error; the goal is for the output of the filter to recover this path. The True path is then covered with masking noise representing real-life measurement errors, creating a Noisy path. These two paths can be seen in Fig. 4.8 and Fig. 4.9, representing the true and noisy paths respectively.

Figure 4.8: True Path of the Kalman Filter

It is appropriate at this point to define the state space of this particular Kalman filter. The state-space model consists of a measurement equation relating the observed data to a dimensional state vector, and a transition equation that describes the evolution of the state vector over time.

Figure 4.9: Noisy Path of the Kalman Filter

For this specific model, the measurement model is the following.

$$ \{Y\} = \begin{Bmatrix} X_{position} \\ Y_{position} \\ Z_{position} \\ Roll_{orientation} \\ Pitch_{orientation} \\ Yaw_{orientation} \end{Bmatrix} \qquad (4.23) $$

66 The predictive model is the following.

$$ X_{pred} = \begin{Bmatrix} X_1 + X_2\,dt + \tfrac{1}{2}X_3\,dt^2 \\ X_2 + X_3\,dt \\ X_3 \\ X_4 + X_5\,dt + \tfrac{1}{2}X_6\,dt^2 \\ X_5 + X_6\,dt \\ X_6 \\ X_7 + X_8\,dt + \tfrac{1}{2}X_9\,dt^2 \\ X_8 + X_9\,dt \\ X_9 \\ X_{10} + X_{11}\,dt + \tfrac{1}{2}X_{12}\,dt^2 \\ X_{11} + X_{12}\,dt \\ X_{12} \\ X_{13} + X_{14}\,dt + \tfrac{1}{2}X_{15}\,dt^2 \\ X_{14} + X_{15}\,dt \\ X_{15} \\ X_{16} + X_{17}\,dt + \tfrac{1}{2}X_{18}\,dt^2 \\ X_{17} + X_{18}\,dt \\ X_{18} \end{Bmatrix} \qquad (4.24) $$

where $X_1, X_2, X_3$ correspond to $x, \dot x, \ddot x$; $X_4, X_5, X_6$ correspond to $y, \dot y, \ddot y$; and $X_7, X_8, X_9$ correspond to $z, \dot z, \ddot z$. Similarly, $X_{10}, X_{11}, X_{12}$ correspond to roll and its first two time derivatives, $X_{13}, X_{14}, X_{15}$ to pitch, and $X_{16}, X_{17}, X_{18}$ to yaw. The Jacobian error propagation matrix is block diagonal, with one block per axis:

$$ \Phi = \mathrm{diag}\left([\phi], [\phi], \ldots, [\phi]\right), \qquad [\phi] = \begin{bmatrix} 1 & dt & \tfrac{1}{2}dt^2 \\ 0 & 1 & dt \\ 0 & 0 & 1 \end{bmatrix} \qquad (4.25) $$

The process noise matrix (4.26) is built on the jerk, otherwise known as the time derivative of the acceleration. This is a common selection for position, speed, and acceleration models. The last matrix needed to build the model is the measurement matrix:

$$ H = \begin{bmatrix} 1 & 0 & 0 & & & & \\ & & & 1 & 0 & 0 & \\ & & & & & & \ddots \end{bmatrix} \qquad (4.27) $$

i.e., the $6\times18$ matrix in which row $j$ contains a single 1 in column $3j-2$, selecting the three position and three orientation states from the state vector.
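The predict step of Eq. (4.24) reduces to the same three-line update applied to each of the six (value, rate, acceleration) triplets. The function below is a hedged C++ sketch of that propagation; the thesis simulation itself was written in MatLab.

// Propagate the 18-element state of Eq. (4.24) through one time step dt.
// Layout: {x, xdot, xddot, y, ..., yaw, yawdot, yawddot}.
void predictState(double x[18], double dt) {
    for (int base = 0; base < 18; base += 3) {
        double p = x[base], v = x[base + 1], a = x[base + 2];
        x[base]     = p + v * dt + 0.5 * a * dt * dt;  // position / angle
        x[base + 1] = v + a * dt;                      // rate
        // x[base + 2], the acceleration, carries over unchanged.
    }
}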

4.4.1 Tuning Parameters

Tuning the response of the Kalman filter is done through adjusting two matrices representing the variance of the expected process and measurement noise. Effectively, they are the amount of noise we expect from both the predicted and the measured data: the higher the value, the more noise we expect, and the less weight we give that data. The first matrix is the expected process noise matrix, Q. The matrix is a diagonal square matrix and, for the purpose of this example, is weighted through experimentation. Although not optimal, this approach allows for rapid tuning and behaviour analysis. For our example, the expected variance matrix for the process noise is the following.

$$ Q = \begin{bmatrix} Q_1 & 0 & \cdots & 0 \\ 0 & Q_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & Q_6 \end{bmatrix} \qquad (4.28) $$

Similarly, we need a matrix for the expected variance of the measurement, R. It is also worth noting that the relative weight of the Q and R matrices is important, as it is this ratio that governs the filter's trust in these two sources of information. As with the Q matrix, the R matrix is a purely diagonal matrix with the following form.

$$ R = \begin{bmatrix} R_1 & 0 & \cdots & 0 \\ 0 & R_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & R_6 \end{bmatrix} \qquad (4.29) $$

4.4.2 Initialization

The final parameters needed for the filter are the initialization of both the state, X_initial, and the variance matrix P. The accuracy of both of these values will determine in large part the settling time of the filter, a crucial component in the capture of a non-cooperative target. P contains the variances of the initial values of the state: the lower the variance, the higher the weight given to the state. The variance matrix is modified and adjusted throughout the iterations of the Kalman filter. The variance matrix, P, takes the following form.

$$ P = \begin{bmatrix} P_1 & 0 & \cdots & 0 \\ 0 & P_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & P_{18} \end{bmatrix} \qquad (4.30) $$

The initialization of the state matrix is purely to reduce the settling time. It is simply seeded with an estimate of the settled position, in the following diagonal form.

$$ X_{initial} = \begin{bmatrix} X_1 & 0 & \cdots & 0 \\ 0 & X_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & X_{18} \end{bmatrix} \qquad (4.31) $$

Table 4.1: Kalman Filter Standard Deviation and Variance

Vision   Standard Deviation   Variance
X axis   0.03496              0.00122
Y axis   2.11288              4.46427
Z axis   0.00047              2.21760e-007
Roll     2.09207              4.37675
Pitch    1.20686              1.45652
Yaw      3.69602              13.66058

4.4.3 Results

The resulting behaviour is seen in Fig. 4.10, with both the True and the resulting output superimposed in order to show the accuracy of the filter. The resulting standard deviations and variances are shown in Table 4.1. The two noticeably least accurate states tracked by the Kalman filter are the Y axis and the Yaw angle; therefore a focused look at both is provided in Fig. ??. The top three graphs in Fig. ?? describe the Yaw as it is processed in the filter. The leftmost graph is the true yaw of the target. The top centre graph is the noisy measurement seen at the input of the filter. The top right graph displays the filtered output from the noisy measurements; although the Y-axis deviation is two orders of magnitude higher than that of the other two Cartesian paths, the noise seen on the Y axis is of the same order as the deviations. The Yaw inaccuracies derive from the poor initial conditions of the state, which begins with a large error. For the remainder of the elapsed time, the angle quickly corrects and remains well within parameters and expected results.

Figure 4.10: Kalman Filter Output

5 Photogrammetry

Summary: This chapter explains the photogrammetry algorithm as well as the test results for tracking a mobile target through a vision system whose capture device is a high-definition webcam.

In order to conduct the autonomous capture, a computer vision system is developed to provide the guidance vector for the robotic manipulator to track and capture the object. A camera is mounted on the end of the last arm segment, in the coordinate system of Z₃. This method is identified as the eye-in-hand method. A set of optical targets, see Fig. 3.6, attached to the object is used to extract the three-dimensional position and orientation of the object. Photogrammetry requires a minimum of three optical targets to determine the six DOFs (three translational and three rotational) of an object with respect to the camera. In our approach, four optical targets are used to provide some redundancy and a qualitative lock evaluation.

The coordinates of the optical targets on the image plane of a pinhole camera can be established by the collinearity equations, such as,

$$ x_a = -f\,\frac{X_{ac}}{Y_{ac}}, \qquad z_a = -f\,\frac{Z_{ac}}{Y_{ac}} \qquad (5.1) $$

where $(x_a, z_a)$ are the coordinates of a projected point on the camera's image plane, $(X_{ac}, Y_{ac}, Z_{ac})$ are the spatial coordinates of the point with respect to the camera coordinate system, and $f$ is the focal length of the camera, respectively.

Figure 5.1: Photogrammetric Target: (a) Figure, (b) Target as Seen Through the Vision Program

The transformation equation can be written as,

$$ \begin{Bmatrix} X_{ac} \\ Y_{ac} \\ Z_{ac} \end{Bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \begin{Bmatrix} X_A \\ Y_A \\ Z_A \end{Bmatrix} + \begin{Bmatrix} T_x \\ T_y \\ T_z \end{Bmatrix} \qquad (5.2) $$

where $X_A$, $Y_A$, and $Z_A$ represent the target coordinates. We can further express the equations by substituting Eqs. (5.1) into (5.2) and obtain the following.

$$ x_a = -f\,\frac{r_{11}X_A + r_{12}Y_A + r_{13}Z_A + T_x}{r_{21}X_A + r_{22}Y_A + r_{23}Z_A + T_y} \qquad (5.3) $$

$$ z_a = -f\,\frac{r_{31}X_A + r_{32}Y_A + r_{33}Z_A + T_z}{r_{21}X_A + r_{22}Y_A + r_{23}Z_A + T_y} \qquad (5.4) $$

where the $r_{ij}$ are functions of the Euler angles $(\Omega, \Phi, \Theta)$ of the object with respect to the camera's image plane. These equations are highly non-linear and involve six unknowns: the three rotational angles and the three Cartesian coordinates $(X_O, Y_O, Z_O)$. They are the orientation and position of the object relative to the camera coordinate system. As stated before, in order to solve Eq. (5.4), a minimum of three optical targets is required. In this work, four optical targets are used to provide system redundancy, so that the system can tolerate the loss of one target point during the tracking process.

We can use Taylor's theorem, ignoring the higher-order terms, to obtain the linearised form of Eqs. (5.3) and (5.4). In linearisation, the previous equations are rewritten as follows:

$$ F(x_a, T_x, T_y, T_z, \Omega, \Phi, \Theta) = 0 = q\,x_a + r f \qquad (5.5) $$

$$ G(z_a, T_x, T_y, T_z, \Omega, \Phi, \Theta) = 0 = q\,z_a + s f \qquad (5.6) $$

where:

$$ q = r_{21}X_A + r_{22}Y_A + r_{23}Z_A + T_y $$
$$ r = r_{11}X_A + r_{12}Y_A + r_{13}Z_A + T_x $$
$$ s = r_{31}X_A + r_{32}Y_A + r_{33}Z_A + T_z $$

From Taylor's theorem, Eqs. (5.5) and (5.6) may be expressed as,

$$ 0 \approx F_0 + \left(\frac{\partial F}{\partial x_a}\right)_0 dx_a + \left(\frac{\partial F}{\partial\Omega}\right)_0 d\Omega + \left(\frac{\partial F}{\partial\Phi}\right)_0 d\Phi + \left(\frac{\partial F}{\partial\Theta}\right)_0 d\Theta + \left(\frac{\partial F}{\partial T_x}\right)_0 dT_x + \left(\frac{\partial F}{\partial T_y}\right)_0 dT_y + \left(\frac{\partial F}{\partial T_z}\right)_0 dT_z \qquad (5.7) $$

$$ 0 \approx G_0 + \left(\frac{\partial G}{\partial z_a}\right)_0 dz_a + \left(\frac{\partial G}{\partial\Omega}\right)_0 d\Omega + \left(\frac{\partial G}{\partial\Phi}\right)_0 d\Phi + \left(\frac{\partial G}{\partial\Theta}\right)_0 d\Theta + \left(\frac{\partial G}{\partial T_x}\right)_0 dT_x + \left(\frac{\partial G}{\partial T_y}\right)_0 dT_y + \left(\frac{\partial G}{\partial T_z}\right)_0 dT_z \qquad (5.8) $$

In the previous Eqs. (5.7) and (5.8), $F_0$ and $G_0$ are the functions $F$ and $G$ of Eqs. (5.5) and (5.6) evaluated at the initial approximations for the six unknowns. The partial derivatives are also evaluated at the initial approximations, and the $d$ terms (i.e., $dT_x$) are the unknown corrections to the initial estimates. Since we actively measure $x_a$ and $z_a$ through the camera, $dx_a$ and $dz_a$ represent the residual errors in the measurements; therefore they can be replaced by the more customary notation of $v_{x_a}$ and $v_{z_a}$. It can be shown that $(\partial F/\partial x_a)_0$ and $(\partial G/\partial z_a)_0$ are equal to $q$, and we can further simplify the collinearity equations to,

$$ v_{x_a} + J = b_{11}\,dT_x + b_{12}\,dT_y + b_{13}\,dT_z + b_{14}\,d\Omega + b_{15}\,d\Phi + b_{16}\,d\Theta \qquad (5.9) $$

$$ v_{z_a} + K = b_{21}\,dT_x + b_{22}\,dT_y + b_{23}\,dT_z + b_{24}\,d\Omega + b_{25}\,d\Phi + b_{26}\,d\Theta \qquad (5.10) $$

where, for example,

$$ b_{11} = \frac{1}{q}\left(\frac{\partial F(T_x, T_y, T_z, \Omega, \Phi, \Theta)}{\partial T_x}\right)_0 \qquad (5.11) $$

and the remaining $b$ coefficients are defined analogously.

In matrix form,

$$ {}_m[V]_1 = {}_m[A]_n\,{}_n[X]_1 - {}_m[L]_1 \qquad (5.12) $$

where $m$ is the number of equations, $n$ is the number of unknowns, $[V]$ is the matrix of residual errors in the measured $x_a$ and $z_a$ photo-coordinates, $[A]$ is the matrix of $b$ coefficients, $[X]$ is the matrix of unknown corrections to the initial approximations, and $[L]$ is the matrix of constant terms $J$ and $K$. If the number of equations exceeds the number of unknowns, a least-squares solution may be obtained for the most probable values by using the following equation:

$$ {}_n[X]_1 = \left({}_n[A^T]_m\,{}_m[A]_n\right)^{-1}\,{}_n[A^T]_m\,{}_m[L]_1 \qquad (5.13) $$

The photogrammetry algorithm is implemented using OpenCV (the open-sourced Computer Vision library). The OpenCV program calculates the optical flow of the optical targets and outputs the image coordinates of the targets into our photogrammetry algorithm, which calculates the six degrees of freedom (DOFs) of the target with respect to the camera. In order to improve the stability of the vision system, the algorithm also predicts subsequent positions using a Kalman filter in case the vision system loses tracking of the optical targets.

Testing of the photogrammetry algorithm proceeded with a webcam of one-megapixel resolution, and the testing results are shown in Table 5.1. The accuracy of the algorithm is very high.

Table 5.1: Test Results of Photogrammetry Algorithm

Vision   Accuracy   Error    Precision   Error
X axis   93.12%     ±0.23%   99.16%      ±0.05%
Y axis   87.32%     ±0.75%   97.01%      ±0.04%
Z axis   94.50%     ±0.31%   99.87%      ±0.05%
Roll     99.45%     ±0.44%   99.45%      ±0.08%
Pitch    88.20%     ±0.80%   88.45%      ±0.09%
Yaw      88.13%     ±0.80%   89.21%      ±0.08%

Once the position of the target in the camera coordinate system is determined, it must be transformed into the operational space $(X_O, Y_O, Z_O)$ as input to the controller;

$$ \{X_O\} = [T_{10}][T_{21}][T_{32}][T_a][T_b][T_c]\{X_c\} \qquad (5.14) $$

where,

$$ [T_a] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad [T_b] = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad [T_c] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & y_{c3} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (5.15) $$

6 Software

Summary: This chapter presents the software architecture for the system. The chapter begins by describing the overall interaction and then proceeds to provide a more detailed look at each program, including a description, important flags and settings, and finally a detailed flowchart.

6.1 Introduction

The software for the project totals nearly 10 000 lines of code and is composed of a complex network of inter-operating programs which achieve the desired result. The main program, when executed, calls the remainder of the sub-programs. The only exception is the on-board software, which runs on an Atmel ATmega324P micro-controller. The list of sub-programs includes:

Main Initializes and launches all sub-programs, communicates with the micro-controller, and sets the order of operations.

Kratos Inverse kinematics program which takes a Cartesian position and returns the joint angles needed to achieve it.

Argos Photogrammetry program which takes the four target positions and returns the six-degree-of-freedom structure: x, y, z, roll, pitch, yaw.

Hermes I & II There are two versions of the Hermes program, which control the stepper motors and the servo motors. Hermes I is the stepper motor driver array (one for each motor), consisting of H-bridge stepper drivers allowing for up to 1/32 hard-coded microstepping either forward or backward. Hermes II is the servo driver and consists of a Lynxmotion SSC-32 servo driver board capable of controlling up to 32 servo motors.

The flow of information through the main software is illustrated in Fig. 6.1.

6.2 Robotic Manipulator On-Board Software

The on-board software was developed in Atmel AVR Studio and programmed onto the ATmega324P using an Atmel programmer. The initial program (seen in Fig. 6.2) was very basic and involved a simple communication relay which updated the positions on the micro-controller and reported back the current joint angles. The second iteration (seen in Fig. 6.3) became more robust, possessing more debugging and reliability features. The program, when powered up, initializes the micro-controller, setting ports, baud rates, control registers, disables, and interrupts. The micro-controller in question is described in Appendix C and has 32 pins separated into four ports, A through D. Ports B and C are reserved for the actuation and direction of the stepper motor drivers. The actuation proved to be a bit of a challenge due to path-planning requirements. The stepper motor drivers actuate the motor when the pin voltage falls. When actuating the driver pins sequentially, we limit the possible paths achievable by the arm: examined closely, the resulting motion is in fact step-like, which precludes the possibility of smooth diagonal motion.

Figure 6.1: Flowchart: Main Software (Take Picture → Edge Detect → Corner Detect → Target Selection → Photogrammetry → Seed Vision/Global Kalman Filters → Forward/Inverse Kinematics → Approach/Capture/Halt → Controller → Desired and Current Positions)

Figure 6.2: Flowchart: On-board Software (ICARIUM: Position Communication → Speed Check → Go/No-Go → Update Desired Positions → Update Speed → Adjust Accelerations → Determine Step → Actuate Motors)

In order to resolve the problem, a solution was found that approximates what the author describes as a shotgun-like approach. As previously stated, the actuation of the motor drivers occurs on a voltage drop, and since a voltage increase on a pin faces an inherent capacitance, there is an inherent voltage lag. The approach therefore loads each pin to be fired as a first step. After allowing for the loading lag, the loaded pins are then fired together, allowing for simultaneous motion of any number of joints. This loading and firing sequence is analogous to loading and firing a shotgun.
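A hedged sketch of this load-and-fire sequence on the ATmega324P is shown below, assuming the drivers' step inputs sit on PORTB and trigger on the falling edge; the port assignment, clock rate, and settle time are illustrative rather than taken from the thesis firmware.

#define F_CPU 16000000UL   // assumed clock; adjust to the actual crystal
#include <avr/io.h>
#include <util/delay.h>

// step_mask selects every joint that must advance on this control tick.
void stepJoints(uint8_t step_mask) {
    PORTB |= step_mask;    // "load": raise the selected step pins and
    _delay_us(5);          // wait out the inherent rise lag
    PORTB &= ~step_mask;   // "fire": drop them together so all selected
                           // motors step simultaneously
}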

Figure 6.3: Flowchart: On-board Software Detailed (initialization; command and capture states; buffer checking and command parsing; speed, acceleration, and step updates; message creation and reset)

6.3 Main Program, Photogrammetry, Filtering, and Tracker

The main loop of the software operates the camera vision software, filtering, and tracking. Initial flags need to be set in order to determine the use of the different filters and compensators for each execution of the software. The flags are set manually prior to executing the program and determine which version will be run. These flags include:

digital_damping_flag Digital dynamic damping is the process by which damping of the response is achieved digitally by implementing a dynamically ranged moving average. Details of the dynamic digital damper are provided in Section 6.7.

testing_flag The testing flag designates the logging of information. Logging takes up a high level of computing power and slows down the processing speed of the computer. Therefore, when not needed, this flag allows the disabling of unnecessary logging of information.

display_flag Similar to the previous flag, this one toggles the display feature of the program, limiting the need to expend computing power on displaying text when not in debugging mode.

kalman This flag toggles the use of the main Kalman filter. This Kalman filter tracks the position of the target relative to the global coordinate frame, meaning it contains information from both the vision system and the robotic manipulator. The filter predicts the motion of the target and the intercept point, feeding the information to the manipulator system.

capture_or_not This flag toggles the system between tracking and capture modes. When enabled, the system will perform the entire approach, align, capture, halt procedure. When disabled, the system limits its behaviour to the approach and align portions of the procedure, effectively approaching the target and mirroring its motion.

go_for_capture_flag This flag is the marker point for the transition between the approach-capture and the capture-halt portions of the control. The criteria for toggling this flag are crucial to a smooth operation of the robotic manipulator.

update_flag The update flag is employed during the grasping procedure and controls the information feed for the positioning software in the robotic manipulator. Initially, pre-capture, the vision system controls the information feed for the robotic manipulator. Post-capture, the system predicts the continuation of the previously sensed motion and applies a counter-force to bring the target to a halt.

use_adj The adjustments flag controls the use of the predicted intercept points fed from the Kalman filter. These are controlled through a separate flag due to the need to turn off the predictions post-capture.

control_rm This flag selects among the different control methods for interacting with the non-cooperative target. These control methods can be vision-based, impedance, force, input-shaping, or a hybrid of these.

Following the setting of flags and the initialization of several parameters, variables, and communication protocols, the main loop iterations begin. The first step of any loop iteration is obtaining a new picture. The picture is processed through a contrast filter, enhancing the differences in the picture. High contrast reduces the noise in the visual system by eliminating slight differences in colours and shades. The picture is then processed through edge detection and corner detection. The top 20 corners detected are selected as the prime candidates and submitted to a series of tests. They are grouped into fives and subjected to a relative distance test that verifies grouping distances. If the relative distances meet our requirements, the groups are passed on to the tracker code. The tracker code is a simple Kalman filter that calculates the flow of pixels on the screen to predict their motion, employing simple velocity models. The purpose of this code is to provide a much more robust vision tracking system capable of dealing with minor vision outages and loss of target. The tracker will even go so far as to predict up to five frames of motion when the target lock is lost. This prediction is limited to only five frames in order to prevent the possibility of endless prediction, for which the cumulative error would be enormous. As covered in detail in Chapter 5, the photogrammetry algorithm employs four targets in order to allow for the calculation of six degrees of freedom: roll, pitch, yaw, and the three Cartesian directions. The software functions by using the collinearity equations, an initial guess, and the least-squares method. An initial guess is provided based on a vector of approach towards the target and is fed into a transformation matrix. An error is produced between the guess and the actual projected picture on the image plane and provides a corrective term. The system is iterated until the error becomes insignificant. The output from the photogrammetry software is the transformations in both rotations and translations. The information is passed on through several programs, which will be detailed in the following sections.

6.4 Inverse Kinematics

The inverse kinematics software transforms from the Cartesian to the joint space and presents a few unique challenges. Due to the configuration of the robotic manipulator, the final two actuating axes intersect within the wrist and provide a convenient control point for the arm. The inverse kinematics takes advantage of this configuration and transforms the robotic joints based on this point of control. The first joint is simple enough to solve, particularly when seen from a top-down view such as Fig. 6.4. A simple use of the double arctangent function (atan2) provides the joint angle solution. The solution is verified against the previous solution and kept within one rotation. The shoulder and elbow joints provide the challenge in solving for a consistent solution. Due to the design of the arm, and depending on the desired position, several valid mathematical solutions can exist. For static positioning this is not crucial; however, when the system is dynamic and fluid, consistency of solution is needed. These possible solutions can be seen in Fig. 6.5. Therefore it is necessary to limit the range of motion of certain joints to ensure a consistent solution that allows for a fluid motion of the robotic arm.

Figure 6.4: Inverse Kinematics: Initial Joint

Figure 6.5: Inverse Kinematics: Multiple Solutions for Joint Redundancy

The solutions for the shoulder and elbow joints can be obtained in two ways, through either a mathematical or a geometrical solution. Both proceed in a similar fashion by solving the elbow joint first, then proceeding to solve the shoulder joint. The final two joints provide orientation rather than position control. They can be calculated independently from the remainder of the joints. In order to speed up the computational response time, the last two joints are calculated within the main loop rather than the sub-program. They are calculated as a function of the photogrammetry's rotational transformation.

Figure 6.6: Inverse Kinematics: Multiple Solutions for Joint Redundancy at Wrist
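A minimal planar sketch of the shoulder/elbow solution is given below, using the law of cosines and atan2 and locking the elbow-up branch for consistency; the names and the fixed branch choice are assumptions of this illustration, not the Kratos implementation.

#include <cmath>

struct JointPair { double shoulder, elbow; bool reachable; };

// Solve the planar shoulder/elbow pair for a wrist point (x, y).
JointPair solveShoulderElbow(double x, double y, double L1, double L2) {
    double r2 = x * x + y * y;
    double c2 = (r2 - L1 * L1 - L2 * L2) / (2.0 * L1 * L2);  // law of cosines
    if (c2 < -1.0 || c2 > 1.0) return {0.0, 0.0, false};     // out of reach
    double elbow = std::atan2(std::sqrt(1.0 - c2 * c2), c2); // elbow-up branch
    double shoulder = std::atan2(y, x) -
                      std::atan2(L2 * std::sin(elbow),
                                 L1 + L2 * std::cos(elbow));
    return {shoulder, elbow, true};
}

Committing to one branch (here elbow-up) is what removes the solution flipping that would otherwise break the fluid motion described above.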

6.5 Communication Protocol

Communication occurs between the desktop computer and the on-board computer, between the desktop computer and the target motor, as well as between the desktop computer and the servo motor drivers. The communication employs USB and occurs over a serial communication protocol. Communication is established by selecting the COM ports and initializing each channel according to Table 6.1.

Table 6.1: Communication Table

Component       COM   Baud     Byte Size   Stop Bits   Parity
On-Board        3     9600     8           1           None
Servo Driver    4     115200   8           1           None
Target Driver   8     2400     8           1           None
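On a Windows host, opening and configuring one of these channels follows the usual Win32 serial pattern. The sketch below applies the on-board row of Table 6.1; the Windows desktop is an assumption inferred from the toolchain, and error handling is reduced to the bare minimum.

#include <windows.h>

// Open COM3 at 9600 8-N-1, per the on-board entry of Table 6.1.
HANDLE openOnBoardLink() {
    HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                           0, nullptr, OPEN_EXISTING, 0, nullptr);
    if (h == INVALID_HANDLE_VALUE) return h;

    DCB dcb = {};
    dcb.DCBlength = sizeof(dcb);
    GetCommState(h, &dcb);
    dcb.BaudRate = CBR_9600;    // 9600 baud
    dcb.ByteSize = 8;           // 8 data bits
    dcb.StopBits = ONESTOPBIT;  // 1 stop bit
    dcb.Parity   = NOPARITY;    // no parity
    SetCommState(h, &dcb);
    return h;
}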

For communication between the desktop computer and the drivers, the protocol is simple and mono-directional, meaning only commands are issued and there is no need for feedback. For the on-board computer, feedback is a crucial component. Therefore, in addition to the serial communication for the software, a communication protocol was established between the two programs. Communication functions by creating a character array with key flags to identify the values in the array, and transferring the entire array over serial. Upon receipt, the array is parsed into a cyclical buffer array which is then scanned backwards for the last complete message. Once that message is detected, a confirmation algorithm is initiated to verify the veracity of the information. The selected section of the buffer containing the message is isolated by null characters on both ends. Following this, each joint field is identified at both its start and end sections; the end section is replaced by a null character and the start section by a character pointer. If all sections are positively identified, the pointers are passed to a character-to-integer converter program. The integers are then integrated into the software.

E500QS250T150P10GE4920S230T120PSiE4879S213T112P9GE4842S207

Figure 6.7: Communication Buffer: Signal for Robotic Drivers
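A hedged sketch of the field extraction is shown below. It assumes a message of letter-keyed integer fields closed by 'G', as suggested by the buffer signal above; the exact field letters and framing of the real protocol are not confirmed by the source.

#include <cctype>
#include <map>
#include <string>

// Parse one message of the form "<letter><digits>...G" into keyed integers.
std::map<char, long> parseMessage(const std::string& msg) {
    std::map<char, long> fields;
    std::size_t i = 0;
    while (i < msg.size() && msg[i] != 'G') {   // 'G' closes the message
        char key = msg[i++];                    // field identifier letter
        long value = 0;
        while (i < msg.size() &&
               std::isdigit(static_cast<unsigned char>(msg[i])))
            value = value * 10 + (msg[i++] - '0');
        fields[key] = value;
    }
    return fields;
}
// Example: parseMessage("E500S250T150P10G") yields E:500, S:250, T:150, P:10.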

6.6 Kalman Filter and Algorithm

The Kalman filter is a crucial piece of software for the capture operation. The filter takes in several streams of information and predicts the intercept position of the arm and the grasping bar, as well as their relative orientations. The Kalman filter developed in Chapter 3 and simulated in Chapter 4 is now implemented. The first stage is defining the state space of the Kalman filter, otherwise known as the tracking variables for the filter. For our application, the tracked variables are those seen in Table 6.2. Second is the initialization of the state transition probability matrix, otherwise known as the P matrix, which can be seen in Table 6.3. The last two initializations are for the system model error and the measurement model error. These two matrices represent the fidelity of the signals from the sources; the important metric is the relative values between the two matrices. The numbers can be seen in Table 6.4. The standard operation of the Kalman filter is to first obtain the measurement data and structure it in the appropriate matrix. The predictive Kalman filter is then iterated through the next time step and the returned information is verified through a boolean flag. If the data is correct, the measurements are fed into the filter, and the current state is obtained based on the current system time. The three sub-programs are the following:

KAlgorithm::predictState(time_delta) Employing the time step between iterations, this subroutine updates the Φ matrix and the B matrix, followed by an update to the predicted state-space matrix as well as the predicted transition probability matrix.

KF.getMeas(kalman_target, time_delta) This function obtains the measurements from the sensors and updates the estimated measurement matrix.

KF.currentState(time_running, time_delta) Employing all of the information available, the current state of the system is predicted, which includes a prediction of each variable in the state space.

Table 6.2: Kalman State Space and Initial Values

Variable             Symbol   Initial Value
X Position           X        0.05
X Velocity           Ẋ        0
X Acceleration       Ẍ        0
Y Position           Y        -0.10
Y Velocity           Ẏ        0
Y Acceleration       Ÿ        0
Z Position           Z        0.31
Z Velocity           Ż        0
Z Acceleration       Z̈        0
Roll                 R        0
Roll Velocity        Ṙ        0
Roll Acceleration    R̈        0
Pitch Position       P        -0.05
Pitch Velocity       Ṗ        0
Pitch Acceleration   P̈        0
Yaw Position         W        -0.14
Yaw Velocity         Ẇ        0
Yaw Acceleration     Ẅ        0

The information obtained from the currentState sub-program is then employed to predict the intercept between the target's motion and the robotic manipulator's motion, in order to achieve a dynamic capture.

6.7 Damping, Grasper, and Controller

6.7.1 Dynamic Digital Damping

Due to a flexible torso joint, a digital dynamic damping system was programmed. The system takes a dynamic moving average from a matrix whose size, and therefore its damping factor, can be varied over the course of an execution. The matrix is initialized at a minimum size and value, which creates an initial lag in the execution of a torso motion. As the distance between the grasper and the target shrinks, the size of the matrix is increased, increasing the damping factor of the moving average. This stiffens the torso and smooths out the torso motion, reducing oscillations and vibrations. This digital form of damping the joint is contrasted and compared with other solutions such as input shaping and employing a mechanical viscous rotary damper.

Table 6.3: Kalman State Transition Probability and Initial Values

Variable             Symbol   Initial Value
X Position           Px       0.0001
X Velocity           Pẋ       0.0025
X Acceleration       Pẍ       0.0625
Y Position           Py       0.0001
Y Velocity           Pẏ       0.0025
Y Acceleration       Pÿ       0.0625
Z Position           Pz       0.0001
Z Velocity           Pż       0.0025
Z Acceleration       Pz̈       0.0625
Roll                 Pr       0.0001
Roll Velocity        Pṙ       0.0025
Roll Acceleration    Pr̈       0.0625
Pitch Position       Pp       0.0001
Pitch Velocity       Pṗ       0.0025
Pitch Acceleration   Pp̈       0.0625
Yaw Position         Pw       0.0001
Yaw Velocity         Pẇ       0.0025
Yaw Acceleration     Pẅ       0.0625

Table 6.4: Kalman System Model Error and Measurement Error Matrices

Variable            Symbol   Initial Value
X Model             Qx       0.275
Y Model             Qy       0.275
Z Model             Qz       0.275
Roll Model          Qr       0.75
Pitch Model         Qp       0.75
Yaw Model           Qw       0.75
X Measurement       Rx       0.09
Y Measurement       Ry       0.09
Z Measurement       Rz       0.09
Roll Measurement    Rr       0.09
Pitch Measurement   Rp       0.09
Yaw Measurement     Rw       0.09

6.7.2 Grasping

The grasping algorithm functions as both the initiation for closing the grasper and the initiation for the behaviour of the system to change from tracking and approaching to halting the target. The metric for initiating the grasp is twofold and involves both the current distance between the grasper and the target and the history of that distance. The current distance is employed for the transition between approach and capture; however, the actual capture command is only initiated once the distance has dropped below a flagged value and remained there for several iterations of the code. This strikes a balance between a quick capture and a reliable, steady capture.
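The twofold trigger can be captured in a few lines: the command fires only once the distance has stayed below the flagged value for a number of consecutive iterations. The threshold and count below are illustrative placeholders, not thesis values.

// Grasp trigger with persistence: fires after requiredHits consecutive
// iterations below threshold; any excursion resets the count.
class GraspTrigger {
public:
    GraspTrigger(double threshold, int requiredHits)
        : threshold_(threshold), requiredHits_(requiredHits) {}

    // Call once per control iteration with the current grasper-target
    // distance; returns true when the capture command should be issued.
    bool update(double distance) {
        hits_ = (distance < threshold_) ? hits_ + 1 : 0;
        return hits_ >= requiredHits_;
    }

private:
    double threshold_;
    int requiredHits_;
    int hits_ = 0;
};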

6.7.3 Controller

The main control strategy is initiated once the target has been grasped by the robotic manipulator. The goal is to devise a strategy that employs the continued use of the vision system while preventing any undesired jarring, vibration, or torque from being applied to the grasping point of the target. Several control strategies, described in Chapter 3, will be implemented.

7 Experimental Set-up

Summary: In order to quantitatively evaluate the capture operations, a testing scenario and target were developed. The concept behind the testing scenario was to develop an analogous low-cost space simulator representing a non-cooperative satellite tumbling passively in space, needing to be captured and brought to a relative halt. This section describes the validating test scenario for the system. The structure and functioning of the target are described, followed by a look at the sensors which will be used to compare the different results. Lastly, the section will look at the procedure followed during the tests.

7.1 Target Structure

The main component of the target is the visual target plane where the objective shape is posted, which is employed to track the motion through the photogrammetry algorithm. The target can be seen in Fig. 7.1. The target also provides a mounting point for a grasping bar as well as a location for an IMU. The grasping bar is designed to maintain alignment between the visual target and the camera, as well as between the grasper and the bar. In a real application, the visual cues would be chosen in order to maintain this alignment up until the grasping point. The background, target, and mounting hardware are all painted matte white in order to reduce the visual noise. The target is suspended in a pendulum system and attached to a stepper motor in order to provide a controlled motion, unknown to the robotic system. The pendulum simulates (in two dimensions) a zero-gravity environment, allowing free motion of the target in response to an external stimulus. This stage of a capture operation is the most critical moment when considering on-orbit servicing or autonomous satellite interaction. Satellites are expensive assets that, even in the best-case scenario, require the surmounting of large risks in order to be placed into nominal operation. When two such assets approach each other autonomously, the risk of damage and collision, even minor, can represent the loss of function or of the entire system. Therefore it is important to understand the behaviour of a robotic manipulator and its dynamic interaction with a target.

Figure 7.1: Physical Target

7.2 Evaluative Components

The target was constructed with two types of sensors in order to quantify the capture operations. These represent the two types of undesired interactions with a target: a force applied to the grasping point of the asset, and an overall body acceleration of the asset.

7.2.1 Strain Gage

Strain gages are employed to detect the strain, or deflection, in a component, which in this case is the grasping bar. A high strain correlates to a high deflection or stress, which can signify damage. Additionally, vibrations, even with amplitudes kept below the material limits, can lead to damage. The strain gages measure the deflection in the grasping bar, and the readings can be linked to both strains and vibrations.

7.2.2 Inertial Measurement Unit

The inertial measurement unit (IMU) senses accelerations and is employed for macro-motion sensing. The IMU can sense accelerations that have been transferred to the target plate from the robotic manipulator. The information from the IMU is transmitted over a wireless XBee system to the desktop computer and logged. The IMU itself has two separate types of sensors: the gyros have two sensitivity settings, 0.83 mV/°/s and 3.33 mV/°/s, with a range of ±300°/s, while the accelerometers have a sensitivity of 300 mV/g and a range of ±3 g. The higher sensitivity is employed, as this is a relatively low-jerk motion. The IMU, along with its power pack, can be seen in Fig. 7.2.

Figure 7.2: Target and Grasping Bar with Integrated Sensors

7.3 Dynamic Path Planning of Target

The dynamic target derives its motion from the zero-gravity suspension set-up and the target motor, which are anchored at two opposite corners of the background, as seen in Fig. 7.3. The target is suspended from the target anchor in a pendulum set-up, allowing the target to move in a two-dimensional zero-gravity simulator. The target motor is anchored at the opposite upper corner (see Fig. 7.3) and winches the target back and forth across the target plane. Due to the height difference between the points of anchor, the motion created is a sweep. The motor is powered and driven with the use of a salvaged computer power supply and a stepper motor driver called a Little-Step-U. The software on the driver allows quick and easy adjustments of the acceleration and speed profiles, providing a variety of dynamics for the robotic manipulator to capture and halt.

Figure 7.3: Target Set Up

The system was even created with a back-up driver in order to allow for a more complex, two-motor target motion system in the future.

Figure 7.4: Target Motor Driver

7.4 Target Paths

Several iterations of the capture operations took place, first to validate the system and then to calibrate the several sub-programs, filters, and controllers for the capture operation of a dynamic system. The first set of captures calibrated the static capture, where the target is not in any motion. This was an important aspect of the initial calibration process and will be described in detail in the next section. The dynamic passes took place in two stages, the first of which consists of low velocity passes. The qualification for a low velocity pass is the ability of the arm to capture the target without need for a predictive or compensating element for the motion of the target. The second type consists of high velocity passes, requiring the predictive element of the Kalman filter in order to compensate for the faster motion of the target.

7.4.1 Static

Static captures illuminated a few key problems with grip alignment and yaw compensation. In addition to compensating for the offsets between the target's photogrammetry origin and the grasping point, the robotic manipulator needs to adjust for the relative orientation of the target in terms of yaw position, as well as optimizing the chance for the grasper to line up with the grasping point.

Figure 7.5: Transformations for the System

7.4.1.1 Grip Alignment

The purpose of aligning the gripper and the grasping point is to line up the gripping axis and the grasping point's axis in order to maximize the chance of capturing the target. The alignment procedure is an interim goal position for the arm prior to the capture operation. The alignment can be achieved in several ways; however, the simplest and most direct method involves adjusting a photogrammetric offset of the target in order to extend the distance between the grasping point and the target, and reducing that offset once the position has been achieved.

7.4.1.2 Yaw Compensation

The second type of compensation was for a yaw misalignment between the target and the manipulator. When the yaw values between camera and target are high, the transformation process throws the gripper alignment off. In order to account for this behaviour, a yaw compensator was implemented based on empirical results. The target was traversed across the field of capture at small intervals and the required offset was measured in order to effect a capture. The trend was analysed and a best-fit line was established. The result was the following yaw compensation.

Figure 7.6: Target Offset (Z offset positive, X offset negative relative to the target origin)

Figure 7.7: Top View of Joint Alignment for Yaw

Figure 7.8: Top View of Compensated Joint Alignment for Yaw

Figure 7.9: Yaw Offset Through Experimentation (empirical data vs. fitted curve)

$$ X_{adjustment} = 0.90\,\sin(vision.yaw) \qquad (7.1) $$

$$ Y_{adjustment} = 0.005\,\cos(vision.yaw) \qquad (7.2) $$

7.4.2 Target Paths

Low velocity captures initially presented a difficult trial due to the flexible joint in the torso and the lack of speed and acceleration control. The low velocity capture is characterized by the lack of need for a specific compensation for the velocity of the target, meaning that the relative motion between the target and the arm is low enough that the high iteration rate of the controller allows it to keep up with and capture the target. This is an interim capture scenario between the static and full-speed dynamic scenarios. The target motor velocity is set at 40 steps per second and maintained for the duration of the capture attempt. The robotic manipulator is initialized at a starting position and provided with an initial lock on the target pattern. The target motion is initiated and the robotic manipulator is given a manual proceed signal. The low velocity testing revealed three deficiencies in the system.

• Oscillations: The torso joint would oscillate when the target and robotic manipulator motion would alternate in direction.
• Vision Stability: The vision system would often lose lock during capture operations.
• Velocity Control: Start and stop motions for the robotic manipulator were spasmodic.

7.4.3 Challenges at Low Velocity

The first challenge was the oscillation caused in the system by the combination of the controller and the mechanical system. The flexible coupling linking the torso motor to the torso link, combined with the high mass of the remainder of the system, would cause oscillations. This behaviour can be seen in Fig. 7.10.

7.4.3.1 Oscillations

In order to compensate for this behaviour, a physical damping system was installed as well as digital dynamic damper software. These can be compared, contrasted, or combined in order to reduce the oscillations. When properly tuned, we obtain the behaviour seen in Fig. 7.11.

7.4.4 Dynamic High Speed

The high speed captures are characterized by a target velocity high enough that the robotic manipulator cannot capture the target without the predictive element of the Kalman filter. Typically the captures are performed at a target motor speed of 80 steps per second. Similar to the low velocity captures, the target lock is acquired manually, then the motion is initiated for the target and then for the robotic manipulator.

Figure 7.10: Oscillation Difficulty

Figure 7.11: Oscillation Difficulty Fixed

7.4.5 Challenges at High Velocity

At the higher velocities, controller lag and prediction accuracy complicated the capture exceedingly. Figure 7.12 illustrates the filter lag when comparing the input (current position) and the output (current estimate) graphs.

Figure 7.12: Example of High Velocity Lag (roll, pitch, and yaw of the filter input vs. output; time in seconds)

The Butterworth filter accentuates the filter lag due to the averaging method employed. In order to combat the lag, prediction is employed through the Kalman filtering. The higher the lag, the farther ahead the needed prediction, and the less accurate the information. Therefore it is important to balance all of the filters and controllers in order to obtain a workable solution. This will be discussed in Chapter 8.

7.4.6 Comparison of Captures with and without Kalman Filtering

The use of the Kalman filter provided several advantages during the course of the capture phase. Several of its features were necessary in order to overcome control issues and prediction complications; however, a comparison between a non-filtered and a filtered capture, seen in Fig. 7.13 and Fig. 7.14 respectively, illuminates several improvements. First, the capture occurs at a much earlier point in time when employing a Kalman filter. This is mainly due to the stability of the information being provided by the filter. As the grasping algorithm relies on the stability of the measurement offset in order to initiate phase transitions, the new, more stable information flow greatly improves the speed at which the system transitions. This results in a capture operation being initiated at 800 ticks with the filter, as opposed to 2100 ticks without.


Figure 7.13: Example of Capture without Kalman Filtering


Figure 7.14: Example of Capture with Kalman Filtering

Second is the reduced noise level seen in the information. The filter, by combining the measurements and the internal model, obtains a higher degree of accuracy and noise reduction, which results in a much smoother operation. Last is the filter lag. This constant lag due to the filtering process is seen as a gap between the current position and the desired position in Fig. 7.14. This can complicate the operation of the system due to the real-time nature of the capture. However, with proper modelling and characterization we can overcome the operational lag and counter the time delay, resulting in an acceptable accuracy. Overall, the addition of the Kalman filtering, though necessary in its own right, also provided supplementary advantages, as illustrated in this section.

8 Experimental Validations

Summary: This chapter reviews the results from the three test campaigns carried out during the course of validating the robotic manipulator. The results are compared by employing the information from the evaluative target and its associated sensors. This chapter will review the results from the capture operation, focusing on the operation up to and including the capture, and will cover static captures, low velocity captures, and high velocity captures. Analysis will be conducted through the graphical representation of the joint information, the Kalman filter inputs and outputs, the strain gage readings, the vision information, and the IMU readings.

8.1 Note on Capture Graphs

The bottom axis of the graphs is a time scale. However, there are several different counters for the logs, and time is not universal across all graphics, even for the same capture event. There is the main computer's timer, the strain timer, and the IMU timer. Therefore the timing of the events needs to be established from actions during the capture.

8.2 Static Capture

Static captures were the first step in calibrating the capture operation for the robotic manipulator and involved positioning the target in a wide range of positions while allowing the arm to capture a non-moving target. The comparisons will be based on the manipulator's ability to determine the desired position, the tracking ability of the controller, the strain gages, as well as the IMU's output during the capture.

8.2.1 Initial Static Capture

The first capture examined is a basic static capture with no filtering or control established. The first graph (Fig. 8.1) shows the strains induced in the grasping bar of the target due to the capture procedure. The values are symmetrical because the strain gages are mounted in pairs on either side of the aluminium grasping bar.

The strain values are fairly small, in the range of 6 x 10^-5, and stabilize quite quickly. The static values will be employed as a relative benchmark due to the fact that dynamic captures will, by definition and design, create higher torques and forces. If the control is capable of capturing a dynamic target with a relatively similar strain value, we will deem it a successful capture.


Figure 8.1: Strain Gage Reading for Static Capture

The second figure, Fig. 8.2, shows the vision camera's raw inputs to the system. The figure illustrates the noise inherent in vision systems, even when the target is static, with the worst culprits being the vision pitch and yaw. This is due partially to pixel twitch and convergence from the photogrammetry algorithm. Essentially, a corner straddles two or more pixels in the camera and, depending on the current iteration of the vision system, the convergent least squares method twitches between solutions, resulting in a noisy output. The final set of figures (Fig. 8.3) shows the internal tracking parameters for the three stepper motor joints. They represent the current position as well as the desired position outputted from the current control strategy. The signal is unfiltered and represents the true noisy stage of the processed signal from the camera as it is transformed through inverse kinematics from the vision system. The desired elbow joint is subjected to a large amount of noise, which can be seen in Fig. 8.3(a). The spikes represent a temporary breakdown in communication resulting in a desired and current position of zero. The shoulder tracking, seen in Fig. 8.3(b), is relatively smooth and constant. The torso, seen in Fig. 8.3(c), drifts constantly upwards throughout the capture. The drift was due to the lack of the yaw compensator and resulted in a large overestimation of the initial position and a slow drift back to the proper torso position.

Figure 8.2: Vision Input for Static Capture

The capture illustrates some of the basic difficulties inherent in a system as complex as this robotic manipulator. The following capture will contrast this with a fully filtered and controlled static capture to demonstrate the improvements provided.

8.2.2 Static Capture - Filtered

Several upgrades were made to the system between the last set of captures and the filtered static captures. Two types of filters were implemented: the Kalman filter and the Butterworth filter.

8.2.2.1 Butterworth Filter

The Butterworth filter was implemented in order to reduce the high-frequency noise inherent in the position signal being sent to the Kalman filter. A third-order low-pass Butterworth filter was implemented with a sampling frequency of 15 Hz and a single corner frequency. Figure 8.4 shows the filter's magnitude and phase response versus frequency. The x-axis represents the fraction of the sampling rate (i.e., 0.5 represents the Nyquist frequency, which for this particular example is 7.5 Hz). The y-axis, in red, represents the normalized linear magnitude. The red curve shows the magnitude drop-off in relation to frequency, demonstrating a nearly 93% drop-off at 2.25 Hz. The blue curve shows the phase shift in response to frequency based on the filter's performance. Phase shift is essentially the filter lag and becomes more pronounced as we near the cut-off frequency.

Figure 8.3: Capture Attempt of a Static Target: (a) Elbow, (b) Shoulder, (c) Torso

Figure 8.4: Butterworth Magnitude (Red) and Phase (Blue) vs. Frequency
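As a concrete illustration, a filter of the kind described above can be reproduced with standard signal-processing tools. The sketch below assumes SciPy and a corner frequency of roughly 1.5 Hz; the exact corner frequency is not stated in the text, only the 15 Hz sampling rate and the roll-off behaviour, so the value here is an assumption for illustration.

    import numpy as np
    from scipy import signal

    fs = 15.0   # sampling frequency (Hz), as stated in the text
    fc = 1.5    # corner frequency (Hz) -- an assumed value for illustration

    # Third-order low-pass Butterworth, normalized to the Nyquist frequency.
    b, a = signal.butter(3, fc / (fs / 2.0), btype='low')

    # Causal filtering of a noisy position trace exhibits the phase lag
    # (filter lag) discussed above.
    t = np.arange(0.0, 10.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 0.2 * t) + 0.05 * np.random.randn(t.size)
    y = signal.lfilter(b, a, x)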


Figure 8.5: Butterworth Impulse Response

Figure 8.5 represents the impulse response of the filter and illustrates a reasonably quick response time and return to norm. The x-axis is a time axis represented in sample counts (15 samples corresponding to 1 second) and the y-axis is the filter response, linear and normalized.


Figure 8.6: Butterworth Step Response

Similar to the previous section, Fig. 8.6 illustrates the Butterworth filter's response to a step input. A response similar to that of the impulse test, in terms of both overshoot and settling time, is seen.

8.2.2.2 Static Kalman Filter Implementation

The Kalman filter's role in static captures is much more subdued in comparison with its behaviour in a dynamic capture, particularly at high speed. The role of the Kalman filter in the static capture is simply the amalgamation of sensor and model data in order to optimize the behaviour of the arm. Figure 8.7 illustrates the joints' ability to track the desired joint profiles based on the inverse kinematics from a static target. The figure illustrates both the current and desired joint angles as well as the capture flag. Recall from Chapter 6 that the capture flag is the indicator for the closing of the grasper, signifying the transition between approach and deceleration of the target. Figure 8.8 illustrates the input and output of the Kalman filter, tracking the global motion of the target. As expected for a static capture, the target remains immobile throughout the process. Figure 8.9 shows the strain sensed during the capture. The strain levels established during the static captures can be considered close to ideal for the capture operation. The values seen here are on the order of 2 x 10^-5 and will be used as a yardstick to measure the following captures. Figure 8.10 is the sole input to the system and is the feed to all controllers and filters. The vision input is the transformation matrix from camera to target, which explains the drifting and offsets up until the capture flag is initiated. At that time, the target is grasped in the gripper and the vision input remains relatively fixed. Figure 8.11 represents the unfiltered accelerations from the IMU unit attached to the target. The data has been processed and the bias has been removed in Fig. 8.11(a). The time axis (x-axis) is separate from the time axis seen in the previous figures, as the program employed to log the information is run separately from the main software.
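The amalgamation of sensor and model data can be illustrated with a one-axis position-velocity-acceleration (PVA) Kalman filter of the kind referred to later in Section 10.4.7. The sketch below is a minimal illustration, assuming a constant-acceleration process model, scalar position measurements from vision at 15 Hz, and placeholder noise values rather than the tuned values used on the manipulator.

    import numpy as np

    dt = 1.0 / 15.0                       # vision update period (15 Hz)
    F = np.array([[1, dt, 0.5 * dt**2],   # position-velocity-acceleration model
                  [0, 1,  dt],
                  [0, 0,  1]])
    H = np.array([[1.0, 0.0, 0.0]])       # only position is measured (vision)
    Q = 1e-4 * np.eye(3)                  # process noise (placeholder tuning)
    R = np.array([[1e-2]])                # measurement noise (placeholder)

    x = np.zeros((3, 1))                  # state estimate [pos, vel, acc]
    P = np.eye(3)

    def kalman_step(z):
        """One predict/update cycle for a scalar position measurement z."""
        global x, P
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x_pred          # innovation
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x_pred + K @ y
        P = (np.eye(3) - K @ H) @ P_pred
        return x

    for z in [0.00, 0.01, 0.03, 0.06]:
        kalman_step(z)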


Figure 8.7: Joint Tracking for Static Capture


Figure 8.8: Kalman Filter Input and Output

Figure 8.11(b) has been filtered through a low-pass Butterworth filter in order to reduce noise and integrated over time to obtain the velocity profile of the target's IMU. In Fig. 8.11(c) the velocity profile is integrated once more to obtain the position. The actual values from the velocity and position graphs are subject to suspicion; however, they are valuable in terms of comparison with future capture procedures. Figure 8.12 similarly represents the gyroscope readings from the target's IMU. The offset has been corrected in Fig. 8.12(a) and the output has been integrated in order to obtain the angles for Fig. 8.12(b). The angles again appear to drift slightly; however, they present a convenient comparison for future capture operations.

Figure 8.9: Strain Gage Readings

Figure 8.10: Vision Information
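The IMU post-processing chain just described, bias removal, low-pass filtering, then two numerical integrations, can be sketched as follows. This is a sketch under stated assumptions: NumPy/SciPy, a 100 Hz log rate, and a two-second quiescent window for bias estimation, none of which are specified in the text; the input signal here is synthetic.

    import numpy as np
    from scipy import signal

    fs = 100.0                            # assumed IMU sample rate (Hz)
    t = np.arange(0.0, 25.0, 1.0 / fs)
    accel_raw = 0.002 + 0.01 * np.random.randn(t.size)   # stand-in for a log

    # 1. Remove bias estimated over an initial quiescent window.
    bias = accel_raw[: int(2 * fs)].mean()
    accel = accel_raw - bias

    # 2. Low-pass Butterworth filter to reduce noise before integrating.
    b, a = signal.butter(3, 2.0 / (fs / 2.0), btype='low')
    accel_f = signal.lfilter(b, a, accel)

    # 3. Integrate twice (cumulative trapezoid): velocity, then position.
    vel = np.concatenate(([0.0], np.cumsum(0.5 * (accel_f[1:] + accel_f[:-1]) / fs)))
    pos = np.concatenate(([0.0], np.cumsum(0.5 * (vel[1:] + vel[:-1]) / fs)))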

109 0.06

-0 .0 6 •

-0 .0 8 •

- 0 .1 1 1 1 1 1------0 5 10 15 20 25 Tferw {3} (a) Acceleration

0.0 4

Vy 003 V z0.0

jq 0.02

-0.01

T im e rs ) (b) Velocity

0.2 5 ------P x P y 0.2 ---- — — R r 0 .1 5 / / £ 0.1 u / g / 0.0 5 S. / 0 __-— ...... -—/ ------—.______S c ’

-0.05 -0.1 \ -0.15-0 .1 5 '------'....------‘------1------'------1 0 5 10 15 20 25 Tim * (s) (c) Position

Figure 8.11: Capture Attempt of a Static Target IMU Acceleration, Velocity, and Position Readings


Figure 8.12: IMU Gyro and Angle Readings

8.2.2.3 Static Capture Second Example

This second static capture operation differs from the first by creating a high yaw situation for the arm to overcome. The high yaw initially presented a difficulty in alignment and required the programming of a custom compensator for the arm. This test set the target at a high yaw angle and once again attempted a static capture.


Figure 8.13: Joint Tracking for High Yaw Static Capture

Figure 8.13, in comparison to the capture illustrated in Fig. 8.7, takes less time but requires an offset and adjustment. The figure illustrates the ability of the system to compensate for a large capture area without penalty in terms of time or alignment. Figure 8.14 again illustrates a steady state period of capture for the target's Kalman filter. Because the yaw compensator sits outside of the Kalman filter, the system sees a small amount of adjustment; none of the remaining outputs vary with any degree of significance.

Figure 8.14: Kalman Filter Input and Output for High Yaw Static Capture

In terms of Kalman angles, Fig. 8.15 shows the target's yaw and pitch angles drifting slightly as the compensator takes effect. The yaw and pitch angles are coupled, as the motion of the target requires the arm to compensate in both directions in order to maintain alignment; this is why both inputs have very similar behaviours in drift and settling.

Figure 8.15: Kalman Filter Input and Output Angles for High Yaw Static Capture

Figure 8.16 illustrates a dual behaviour. The first deviation from a null reading slopes the strain gage readings. Once the arm has finalized the capture, it stops its motion with the target and the target's motion halts as well; this produces the second level of strain and explains the steady state nature of its profile.

Figure 8.16: Strain Gage Readings for High Yaw Static Capture

Figure 8.17 shows the visual input from the high yaw static capture operation. The visual input is slightly noisier than in the previous example; however, the capture flag, initiated near the 900 tick mark, still occurs in reasonably prompt time.

Figure 8.17: Vision Information for High Yaw Static Capture

Figure 8.18 illustrates the IMU's accelerometer readings. The noteworthy information can be seen in Fig. 8.18(a), where the highest spike occurs in the y direction and reaches a maximum magnitude of 0.04 m/s^2. In comparison to the acceleration of the previous static capture seen in Fig. 8.11(a), there is a marked increase in accelerations, yet this can still be considered a soft capture. Figure 8.19 shows the gyroscopic readouts from the IMU. The capture operation triggers an angular momentum change which reaches a peak around the z-axis at 12°/s^2.


Figure 8.18: Capture Attempt of a Static Target: IMU Acceleration, Velocity, and Position Readings

Figure 8.19: IMU Gyro and Angle Readings for High Yaw Static Capture

8.2.2.4 Static Capture Analysis

Static capture operations brought several difficulties to light, including high angle captures, and resulted in a much more stable and smoother capture algorithm. The changes that arose from the static captures include the following:

Yaw Compensator: The yaw compensation allows for a wider range of capture from the arm by providing an accurate alignment of the gripper and the target for high values of yaw misalignment.

Capture Algorithm: The decision to trigger the grasping mechanism required several iterations and evolutions. A balance had to be achieved between tight tolerances, which require more time to achieve, and loose tolerances, which could result in missed captures.

Vision Stability: Even with a static target, spasmodic motion from the arm resulted in several losses of lock on the target. In order to combat this symptom, velocity control was implemented at the motor controller level, resulting in smooth accelerations and direction changes for the arm without a severe loss of speed.

The information obtained from the static capture operations will be employed as a baseline for the future controllers, which will endeavour to achieve similar success.

8.3 Low Velocity Captures

Dynamic low velocity captures are characterized by not requiring a prediction of the motion of the target in order to effect a capture. Two captures are presented in this section, illustrating the standard capture operations of the low velocity capture algorithm for the robotic manipulator. The low velocity captures illustrated in this section took place with the target motor set at a velocity of 40 steps/second.

8.3.1 Initial Capture Attempts at Low Velocity without Kalman Filtering

The initial capture illustrates a smooth capture operation where the process from initial lock, to tracking, to capture occurs without any complications or delays. The first figure, Fig. 8.20, demonstrates the desired and actual position of the joints throughout the capture operation. The dynamic nature of this operation illustrates the lag created by the entire system, from processing time to controller lag. The figure also illustrates the tracking ability of the system to keep up with a moving target.

Figure 8.20: Joint Tracking for Low Velocity Capture

Figure 8.21 shows the Kalman filter's input versus output. The values illustrate one of the difficulties in dynamic captures resulting from a slight lag in the system: oscillations. The system takes much longer to achieve a stable enough reading in order to initiate the capture. The oscillations occur from the system lagging behind the motion of the target due to processing time, then overcompensating at the next image. The cycle creates the appearance of unstable motion of the target and/or manipulator.


Figure 8.21: Kalman Filter Input and Output for Low Velocity Capture

Figure 8.22 demonstrates a behaviour similar to Fig. 8.21 in terms of oscillations. The two major axes subject to this are the Cartesian x-axis and the rotational pitch axis. The settling time was responsible for a delayed capture.


Figure 8.22: Kalman Filter Input and Output Angles for Low Velocity Capture

The strain gage readings for the low velocity capture, seen in Fig. 8.23, peak at 4 x 10^-5, which is much higher than the static captures, and continue to deviate. The capture is still considered soft; however, there is room for improvement.


Figure 8.23: Strain Gage Readings for Low Velocity Capture

Once again, oscillations present a difficulty in initiating the capture of the target, and the vision information can be seen in Fig. 8.24. The z-axis oscillation dominates the field this time, and once settled, allows for a quick capture of the target.


Figure 8.24: Vision Information for Low Velocity Capture

The accelerometer readings from the target's IMU spiked in the target's x-axis at a value of 0.055 g. Noise appeared to cause a slight drift; however, as a comparison basis, it illustrates considerably more acceleration being transferred from the manipulator to the target. The digital nature of the sensors on the IMU can help explain the stable period of the x-axis acceleration: pre-filter data is discretized and the sensitivity levels on the accelerometers are coarse, which can result in a period where the acceleration varies within a range contained inside one level of sensitivity. This results in a steady reading by the sensor. The target's gyroscopic readings for this particular capture illustrate a very smooth capture with minimal angular rates being transferred to the target, as can be seen in Fig. 8.26. At the end of the recorded data, there is some noise that signifies a slightly jostled target; however, it is minimal.



Figure 8.25: Capture Attempt of a Static Target IMU Acceleration, Velocity, and Position Readings for Low Velocity Capture

Figure 8.26: IMU Gyro and Angle Readings for Low Velocity Capture

8.3.2 Capture at Low Velocity with Kalman Filtering

The second example for the low velocity capture employed the vision Kalman filter, resulting in a much smoother visual input for the controller. This nearly eliminated the oscillating behaviour of the manipulator, although traces still remained. Similar to the previous capture, the velocity of the target motor was set at 40 ticks/second and did not require any prediction from the controller in order to maintain the tracking and perform a capture. The vision Kalman filter was employed at this point in order to combat vision loss and lag. The use of the filter allowed the system to smooth out the readings from the photogrammetry and prevent backtracking, the cause of the oscillating behaviour. The first figure for this capture operation, Fig. 8.27, illustrates an improvement in the oscillation problem which was previously discussed. The lag is also reduced in effect, even though its presence can still be seen, particularly when examining the desired and current torso joint.


Figure 8.27: Joint Tracking for Static Capture at Low Velocity

The Kalman filter inputs and outputs seen in Fig. 8.28 illustrate a much cleaner behaviour from the sensors. The velocities are interesting to analyse as they depart from zero as the arm catches up to the target and are brought back to zero after the capture. This exemplifies an ideal behaviour with minimal jerk and smooth operation.

x«f input - Y ef Input Z cl input XcfOutput --- Y Xdot 5| of Output Ysa af Output Output Zibt of Output

2 * « 4 10 12 U 16 W 20 22 r«tM(Mcands)

Figure 8.28: Kalman Filter Input and Output at Low Velocity

Figure 8.29 illustrates the input and output of the angles during the low velocity capture. The filtered signal aligns well with the noisy data and led to a smooth alignment of the wrist and capture.


Figure 8.29: Kalman Filter Input and Output Angles at Low Velocity

In accordance with the smooth capture, the strain gage readings seen in Fig. 8.30 show relatively low strain during the initial phase of the capture, seen between 28.5 seconds and 31 seconds. The strain reading hovers around 2 x 10^-5 before beginning to deviate up to twice that reading. The deviation occurs due to poor post-capture controller performance not aligning the robotic manipulator well with the target's previous motion. Figure 8.31 illustrates the input to the system up to, and soon after, the capture is effected. The reading for the vision y-axis, the perpendicular axis outward from the target, oscillates slightly as the arm approaches the target and converges to a steady state. In conjunction with the previous data, Fig. 8.32 shows low accelerations from the target's IMU. The drift from the sensor appears to indicate a consistent acceleration; however, the acceleration data indicates that this is most likely biased noise from the sensor and not an actual input. Similar to the previous figures, Fig. 8.33 demonstrates a fairly noisy yet stable capture from the target's IMU sensor.

8.3.3 Difficulties in Low Velocity Captures

The difficulties which were brought to light during the low velocity captures included system lag and oscillations, vision stability issues, and the requirement for velocity control.

Figure 8.30: Strain Gage Readings at Low Velocity


Figure 8.31: Vision Information at Low Velocity

8.3.3.1 Oscillations

This problem became apparent when joints would oscillate due to system lag. The vision system would see the target oscillate and cause the robotic manipulator's motion to alternate in direction. In the worst case scenario this causes a loss of vision lock; in the best case it results in the aforementioned oscillations and delays the capture operation. Figures such as Fig. 8.24 and Fig. 8.31 demonstrate the oscillation due to the process lag, which slows down the capture operation through the long settling time. The problem was addressed by employing two layers of Kalman filtering: one at the vision program level and one at the target position level. The visual Kalman filter smooths out the photogrammetry input and helps reduce the oscillation due to visual lag. It employs a model that predicts consistent motion of the pixels and therefore smooths out the input to the photogrammetry. This layer of the Kalman filtering helps reduce the losses of lock that occur and also helps feed a less noisy input to the controller. The second layer of the Kalman filtering works on the global position of the target. In the high-speed captures this filter will become integral to successful captures; however, even at low velocities, the Kalman filter aids in the capture operation. The Kalman output is employed to track the global motion of the target and removes the oscillation due to the lag by accounting for the current position of the arm.

8.3.3.2 Vision Stability

During any capture, a loss of lock would cause an immediate abort of the operation due to the danger of the arm damaging itself or its environment. In its current iteration, vision is the sole source of input to the system; therefore, when the input is lost or incorrect, the arm will behave in an unpredictable manner. The visual Kalman filter greatly improved the reliability of the vision system by allowing the system to predict up to five frames post loss-of-lock. This provides the vision system a chance to reacquire lock quickly. As previously stated, prediction and reacquisition are limited to five frames; beyond that, the prediction and the actual motion can diverge to an extent where they differ too greatly and convergence becomes a remote chance.
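The five-frame reacquisition window can be sketched as a coasting rule layered on the visual Kalman filter: while lock is lost, the filter's prediction is substituted for the measurement, and the track is abandoned after five consecutive misses. The sketch below illustrates only the rule; SimpleKF is a stand-in with placeholder gains, not the thesis filter.

    MAX_COAST_FRAMES = 5   # beyond this, prediction and motion diverge too far

    class SimpleKF:
        """Stand-in for the visual Kalman filter (constant-velocity model)."""
        def __init__(self):
            self.pos, self.vel = 0.0, 0.0
        def predict(self):
            self.pos += self.vel
        def update(self, z):
            innov = z - self.pos
            self.pos += 0.5 * innov       # placeholder gains
            self.vel += 0.1 * innov

    class TrackMonitor:
        def __init__(self, kf):
            self.kf = kf
            self.misses = 0

        def step(self, measurement):
            """measurement is None when the vision system loses lock."""
            self.kf.predict()
            if measurement is not None:
                self.kf.update(measurement)   # normal tracking
                self.misses = 0
                return 'tracking'
            self.misses += 1
            if self.misses > MAX_COAST_FRAMES:
                return 'abort'                # too long without lock
            return 'coasting'                 # feed the prediction downstream

    monitor = TrackMonitor(SimpleKF())
    for z in [0.00, 0.10, None, None, 0.35]:
        print(monitor.step(z))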

8.3.3.3 Velocity Control

The main mechanism by which vision lock is lost is spasmodic motion of the arm. This can be due to direction changes, hesitations, or slipping, and results in either the target moving outside of the field of view or in one of the corners being confused and replaced by a different feature. Velocity control limits the changes in velocity, creating a smoother velocity curve for the motion of the robotic manipulator. Smooth velocity matches the model in both Kalman filters and allows for an easier time tracking the target as well as reducing spasmodic motion of the wrist-mounted camera. The effect can be seen in Fig. 8.27, where the joint positions behave in a predictable and smooth manner.
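Velocity control of this kind can be sketched as a rate limiter on the commanded joint velocity: each control cycle, the change in velocity is clamped to a maximum acceleration, so direction reversals ramp smoothly instead of jumping. This is a minimal sketch under assumed limits; the actual implementation resides at the motor controller level.

    def rate_limited_velocity(v_desired, v_current, a_max, dt):
        """Clamp the velocity change per control cycle to +/- a_max * dt."""
        dv = v_desired - v_current
        max_step = a_max * dt
        dv = max(-max_step, min(max_step, dv))
        return v_current + dv

    # Example: a sign reversal in the command ramps smoothly rather than jumping.
    v, dt, a_max = 0.0, 0.02, 50.0        # steps/s, s, steps/s^2 (assumed values)
    for v_cmd in [200.0] * 5 + [-200.0] * 5:
        v = rate_limited_velocity(v_cmd, v, a_max, dt)
        print(round(v, 1))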

127 8.3.3.4 Summary of Low Velocity Captures

The low velocity captures illustrated a few problems and provided a valuable interim step before attempting the dynamic high velocity captures. The solutions, specifically to the vision loss-of-lock and the spasmodic motion, evolved into more than mere fixes and became full-fledged components of the high velocity capture system.

8.4 High Velocity Capture

The dynamic high velocity capture is characterized by the target motor velocity being set at 80 ticks/second. In comparison with the low velocity captures previously examined, this halves the time during which the target is within grasping range, creates a more difficult tracking environment, and requires motion analysis and prediction from the global target Kalman filter in order to align with and capture the target. These additional difficulties create a much more complex operation and provide a more interesting analysis.

8.4.1 Initial Dynamic High Velocity Capture

Figure 8.34 represents the joint motion for the high velocity capture. Immediately apparent in comparison to the low velocity captures is the more pronounced lag for the torso motion. This is the primary cause of the prediction requirement for the global target Kalman filter. At the higher velocities, the Kalman filter plays a more prevalent role, the process of which can be seen in Fig. 8.35. The initial input and output are fairly noisy; however, the system stabilizes and the output of the filter settles. The motion of the target is nicely predictable, allowing for the capture to be predicted and to succeed. As important as the Cartesian Kalman filter is the angular Kalman filter, seen in Fig. 8.36. Unlike the Cartesian case, the angles between the target and the arm do not behave smoothly and predictably. The velocity control can counter the spasmodic behaviour of the angles; in addition, the alignment process allows a larger window of accuracy early on, which is then reduced as the arm nears the target. This procedure permits a larger error without causing the arm to oscillate early on, and during the latter stages of the capture, where the angles are more stable, the window is tightened, allowing for an accurate capture. Thanks to an accurate filter prediction on the target, as well as an accurate representative model of the behaviour, the capture remains well within an acceptable range of strain, seen in Fig. 8.37. The divergence, which occurs well post capture, is due to the target continuing its motion while the arm slowly brings itself to a halt. Figure 8.38 shows the visual input to the system, processed by the photogrammetry program. The higher velocity creates a need for higher actuation and precision in order to align with and capture the target. The Kalman filtered visual program is required at this velocity in order to track the motion of the target and keep lock. Without the Kalman filter, the system loses lock within the first few seconds of motion. The dynamic capture becomes evident when looking at the IMU data seen in Fig. 8.39. The capture occurs at 8 seconds into the data logging and creates a more profound response than previously experienced. The accelerometers register an acceleration of nearly 0.05 m/s^2, markedly higher than previous capture operations. This acceleration is one of the objective metrics that requires reduction. Figure 8.40 illustrates the gyroscopic information from the target's IMU. The z-axis noise masks any reliable signal; however, the trend clearly shows a large angular acceleration post-capture that becomes over-compensated.

8.5 Controlled Dynamic High Velocity Capture

This capture was chosen to illustrate a lengthy operation that shows post-capture acceleration and strain reduction behaviour, even though we still see an initial spike. The capture occurred over nearly the entire capture workspace and also illustrates the robustness of the controller and system. Figure 8.41 demonstrates the range of the torso, beginning at -60° and performing the actual capture at nearly +65°. The tracking from the controller is intuitive and, although lag is noticeable, the prediction from the Kalman filter counters it adequately. Figure 8.42 illustrates why we can predict the behaviour of the system employing a fairly simple second degree model for the target's global behaviour. The motion of the target is fairly linear and extrapolations over short distances prove to be fairly accurate, allowing us to counter the process lag. Similar to the previous figure, Fig. 8.43 demonstrates a behaviour conducive to modelling and predictions. Early oscillations in both the sensor and the filtered data quickly die away, leaving smooth curves. Figure 8.44 is where the data begins to become interesting. We clearly see a much increased strain reading from a high velocity capture. However, for the first time we also see a post-capture controller effecting an improvement over the strain. The behaviour, and specifically the duration of the high strain level, becomes evident when we examine the specifics of this capture. The capture itself took place near the limit of both the manipulator's workspace and the target motor's travel. Therefore, soon after the capture occurred, the target motor reached the end of its motion and stopped, allowing the manipulator to finally align itself with the target. This capture clearly indicates that the post-capture controller is suffering from lag, which causes the manipulator to fall behind the motion of the target and impart a strain to the target. Examining the controller's visual input in Fig. 8.45 once again demonstrates the robustness of the visual Kalman filter. Of particular interest is a moment just past the 500-tick point where a loss of lock clearly occurred. The spikes in the information show a disconnect between actual target behaviour and the photogrammetry output, which appears to correct itself within a few iterations. Commensurate with the strain readings, Fig. 8.46 shows much higher readings from the target's accelerometers during the capture operation. Figure 8.47 shows the initial high angular acceleration which is imparted to the target as the arm makes contact. It proceeds to stabilize while the controller consistently tracks behind the target, and the final stage occurs when the target, reaching the end of its motion, halts and allows the manipulator a chance to catch up.

8.5.1 Difficulties in High Velocity Captures

The majority of the difficulties for the high velocity captures, after compensation and tuning of the predictions for the controllers, took place post-capture. Simple control of the type employed in this chapter seemed inadequate to the task of reducing the accelerations and forces down to a minimal level. Further controllers will be examined in the following chapter which will attempt to improve upon the results seen in this chapter.

8.5.2 Summary of High Velocity Capture Analysis

The high velocity captures achieved a high success rate once the target's global position Kalman filter was employed in conjunction with prediction to overcome system lag. However, post-capture behaviour still fell short of the desired low accelerations and strains. The following chapter will focus on the post-capture behaviour of the target and the arm and will compare a few different strategies which show promise.



Figure 8.32: Capture Attempt of a Static Target IMU Acceleration, Velocity, and Position Readings at Low Velocity

Figure 8.34: Joint Tracking for Static Capture for Initial Dynamic High Velocity Capture

Figure 8.33: IMU Gyro and Angle Readings at Low Velocity


Figure 8.35: Kalman Filter Input and Output for Initial Dynamic High Velocity Capture


Figure 8.36: Kalman Filter Input and Output Angles for Initial Dynamic High Velocity Capture


Figure 8.37: Strain Gage Readings for Initial Dynamic High Velocity Capture


Figure 8.38: Vision Information for Initial Dynamic High Velocity Capture



Figure 8.39: Capture Attempt of a Static Target IMU Acceleration, Velocity, and Position Readings for Initial Dynamic High Velocity Capture


Figure 8.40: IMU Gyro and Angle Readings for Initial Dynamic High Velocity Capture


Figure 8.41: Joint Tracking for Static Capture for Dynamic High Velocity Capture


Figure 8.42: Kalman Filter Input and Output for Dynamic High Velocity Capture


Figure 8.43: Kalman Filter Input and Output Angles for Dynamic High Velocity Capture


Figure 8.44: Strain Gage Readings for Dynamic High Velocity Capture


Figure 8.45: Vision Information for Dynamic High Velocity Capture


Figure 8.46: Target IMU Acceleration, Velocity, and Position Readings for Dynamic High Velocity Capture

Figure 8.47: IMU Gyro and Angle Readings for Dynamic High Velocity Capture

9 Comparison of Post-Capture Controllers

Summary: This chapter compares strictly the post-capture behaviour of the system at high velocity with a non-cooperative target, employing different control strategies in order to minimize the strains and accelerations imparted from the robotic manipulator to the target's grasping arm. The basic premise of each strategy has been covered in earlier chapters and will be quickly reviewed before examining the results. The three post-capture controllers to be examined are vision-based control, pseudo-force control, and a custom adapted impedance control which builds upon the pseudo-force controller.

9.1 Vision-Based Control

The vision-based control takes the simplest post-capture strategy and improves upon it. The previous capture operations employed a very simple post-capture operation which took the current motion of the manipulator and simply reduced the velocity vector to a stop over a predetermined time period. This produced mixed results, namely due to a lack of compensation between the target and the manipulator, often creating much higher strains and accelerations than desired. The improved vision-based control functions similarly by taking the previous motion of the target and the manipulator and providing a halt velocity vector to bring the relative motion to a stop. In addition, however, it continues to employ the vision sensor in order to provide continuous adjustment to the system.

9.1.1 Vision Control Law

In order to continue the visual tracking of the target once the capture flag is initiated, a direct visual compensating component is added to the motion of the target, which scales with the velocity of the target.

\Delta X = \dot{X}_{target} \cdot C_{vision} \qquad (9.1)

where $\Delta X$ is the visual offset matrix resulting from the visual compensating control due to the velocity, $\dot{X}_{target}$ is the target's velocity matrix in the global frame, and $C_{vision}$ is the constant scalar used to convert the target velocity into a distance.
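A minimal sketch of Eq. (9.1): the offset is simply the estimated target velocity, here taken from the Kalman filter, scaled by a tuned constant. The gain value and the example velocity below are illustrative assumptions, not the tuned values used in the experiments.

    import numpy as np

    C_VISION = 0.2   # s: scales velocity into a distance offset (assumed tuning)

    def vision_offset(target_velocity):
        """Eq. (9.1): delta_x = xdot_target * C_vision, per axis."""
        return np.asarray(target_velocity) * C_VISION

    xdot_target = np.array([0.05, -0.02, 0.0])   # m/s, from the Kalman filter
    delta_x = vision_offset(xdot_target)         # m, added to the commanded pose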

9.1.2 Comparison of Results

These results illustrate the vision-based capture scenario with the high velocity non-cooperative target as described in Section 7.4.4. The first figure is the vision-controlled joint information as the target traverses the capture area. The capture is performed over nearly the full range of the capture area, consistent with the long capture seen in Fig. 9.1.


Figure 9.1: High Velocity Vision Controlled Capture - Position Information

Figure 9.2 displays the orientation of the target in relation to the camera. The roll and pitch angles are quickly reduced to zero; thanks to the wrist construction, the yaw angle can be absorbed and compensated for easily. Hence the upward drift in the yaw angle near the end is simply countered by angling the wrist joint accordingly. Figure 9.3 is where we start seeing new results based on the controller being employed. The range of the detected strain is on the order of 10^-5, which is comparable to the static capture scenario. The vision-based control adapts to the continuous motion of the target and attempts to minimize the force being applied to the target while maintaining contact and alignment. The force levels slowly diverge upwards, indicating either a lag or an increasing misalignment. Most likely it is a combination of both problems which causes the gripper to slowly add a torque to the target's grasping bar. As expected, Fig. 9.4 represents the IMU's translational accelerations during the capture. The target behaves as expected during the capture section, where the y-direction acceleration dominates the motion of the target. The velocity and position profiles dutifully follow the motion.


Figure 9.2: High Velocity Vision Controlled Capture - Orientation Information

Figure 9.5 is the gyroscopic reading for the target. The trend of the information follows the data previously presented. The target remains fairly centred throughout the approach phase, as would be expected. Once capture is initiated, the angles deviate slightly; however, they remain well within range for the robotic manipulator to maintain grip and alignment.

Figure 9.3: High Velocity Vision Controlled Capture - Strain Information


Figure 9.4: Capture of a High Velocity Target: IMU Acceleration, Velocity, and Position Readings


Figure 9.5: Capture of a High Velocity Target: IMU Gyro and Angle Readings

9.2 Pseudo-Force Control

Force control is based on directional haptic feedback which is employed in a controller loop in order to manage the force. Force sensing can be expensive and complex and can be realized through several technologies such as pressure sensors, strain gages, deflection gages, and range finders. Evolving from the previous post-capture control strategy, pseudo-force control employs visual cues in order to calculate the force being applied between the robotic manipulator and the target's grasping point. In contrast to the previous control, only the relative position of the camera to the target is relevant in calculating the control offset. The difficulty lies in transforming a vision vector into an accurate force reading.

9.2.1 Pseudo-Force Control Law

Within the structure of the current robotic controller, the desired output is an offset distance based on the force being applied between the target and the manipulator, in order to reduce that force and keep it within an acceptable level. The system is similar to a beam being deflected by a point load, and the structure takes the following form.

\delta = \frac{F_{point} \, l^3}{3EI} \qquad (9.2)

f = K_e(r_e - r) \qquad (9.3)

where $\delta$ is the deflection of the beam and also the compensating motion required to reduce the force, $F_{point}$ is the point load applied to the beam, $l$ is the length of the beam, $E$ is Young's modulus, and $I$ is the moment of inertia. In order to calculate the point load, we employ the visual information from the misalignment and the beam stiffness in three dimensions. These calculations can be empirically verified, and the resulting scaling factor is $K_e$, the equivalent stiffness matrix.
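The pipeline implied by Eqs. (9.2) and (9.3) can be sketched as follows: the visual misalignment gives a deflection estimate, the equivalent stiffness turns it into a pseudo-force, and the cantilever relation turns a commanded force back into a compensating motion. The beam properties and stiffness values below are placeholders, not the empirically verified values from the experiments.

    import numpy as np

    # Assumed grasping-bar properties (placeholders, not the thesis values).
    L = 0.30           # beam length (m)
    E = 69e9           # Young's modulus for aluminium (Pa)
    I = 1.0e-10        # area moment of inertia (m^4)
    K_e = np.diag([80.0, 80.0, 120.0])   # equivalent stiffness matrix (N/m)

    def pseudo_force(r_e, r):
        """Eq. (9.3): f = K_e (r_e - r), from the visual misalignment."""
        return K_e @ (np.asarray(r_e) - np.asarray(r))

    def compensating_deflection(f_point):
        """Eq. (9.2), cantilever tip deflection: delta = F l^3 / (3 E I)."""
        return f_point * L**3 / (3.0 * E * I)

    f = pseudo_force(r_e=[0.0, 0.0, 0.0], r=[0.002, -0.001, 0.0])
    delta = compensating_deflection(f)   # offset fed back to the controller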

9.2.2 Comparison of Results

The capture scenario for the force controlled robotic manipulator is analysed in the following figures. Figure 9.6 is the target information for the capture operation. Similar to the previous post-capture control comparison, the capture operation occurs over the majority of the range of the manipulator and, correspondingly, takes a relatively long time.

Figure 9.6: High Velocity Force Controlled Capture - Position Information

The orientation of the target, seen in Fig. 9.7, better illustrates the time of first contact during the capture, which occurs at about 37 seconds. Prior to the capture, the orientation of the target with regard to the vision system mounted on the manipulator varies greatly as the arm approaches and then slightly overshoots the position of the target. The system quickly recuperates and proceeds to track and effect a successful capture.


Figure 9.7: High Velocity Force Controlled Capture - Orientation Information

Figure 9.8 depicts the strain imparted to the grasping bar during the capture operation. The strain is in the range of 10^-5 and appears to maintain an average strain of near 2 x 10^-5 throughout the operation. This is an improvement over the previous vision-based capture; however, the compensating algorithm does not appear to be able to reduce the applied force, it simply prevents it from getting worse.


Figure 9.8: High Velocity Force Controlled Capture - Strain Information

Figure 9.9 corroborates the evidence from the strain gages. The controller, during the capture operation, causes the velocity to drift in the y-direction. This slows down the target as desired; however, it has a difficult time maintaining a consistent, accurate force level. Figure 9.10 contains the information from the IMU's gyroscopic sensors. The drift in the sensor is noticeable even during the pre-contact phase of the capture operation. However, the gyroscopic drifts in the different axes separate at the sensor time of roughly 11 seconds. This corresponds to the previous information analysed for this scenario.

149 0.015 Ax Ay


Figure 9.9: Capture of a High Velocity Target: IMU Acceleration, Velocity, and Position Readings


Figure 9.10: Capture of a High Velocity Target: IMU Gyro and Angle Readings

9.3 Impedance Control

Compensating for the beam deflection in order to reduce the force provides a way to control the level of strain being applied to the target; however, in order to control the behaviour of this strain, a more complex control strategy is required. Examined in Chapter 3, impedance control provides an intuitive strategy, based on the dynamic model of the system and the desired behaviour of the manipulator, capable of dealing with external forces. The impedance controller is adapted in order to employ our sensor suite and to output a position and velocity. The standard input to an impedance controller is a force or torque, depending on which space the offset is being calculated in. For the purpose of this build, the pseudo-force calculator, examined in the previous section, will be employed as the feed for this new control strategy.

9.3.1 Impedance Control Law

Quickly reiterating the relevant equations from the impedance control strategy:

f_d - f = M_m(\ddot{r}_d - \ddot{r}) + D_m(\dot{r}_d - \dot{r}) + K_m(r_d - r) \qquad (9.4)

where $M_m$, $D_m$, and $K_m$ are the constant inertia, damping, and stiffness matrices respectively, $f_d$ is the desired constraint force, and $f$ is the actual constraint force. The relationship between the constraint force and the positions of the constraint and the robot is modelled as:

f = K_e(r_e - r) \qquad (9.5)

where $K_e$ is the stiffness matrix, $r_e$ is the rest location of the constraint, and $r$ is the position of the contact point made by the end effector of the robot. This model is limited in its applications as it is a straightforward spring simulation; more complex scenarios will require more complex models. This is the equivalent of the pseudo-force controller portion from the previous section. Therefore, prior to contact with the target, the impedance controller is employed purely as a path tracker, meaning $f_d$ is zero. Once capture is initiated, the desired force level is applied, and the remaining part of the controller design aims at choosing a control input $f_d$ such that the system (Eq. (9.6)), which already contains the external force component $K_e(r_e - r)$, resembles the desired dynamics equation as closely as possible.

f_d = M_m(\ddot{r}_d - \ddot{r}) + D_m(\dot{r}_d - \dot{r}) + K_m(r_d - r) + K_e(r_e - r) \qquad (9.6)
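One way to realize this law in discrete time is to solve Eq. (9.4) for the commanded end-effector acceleration and integrate it each control cycle, with $f$ supplied by the pseudo-force of Eq. (9.5). The sketch below assumes diagonal inertia, damping, and stiffness matrices with placeholder gains and a 15 Hz cycle; it is an illustration of the structure, not the tuned thesis controller.

    import numpy as np

    M_m = np.diag([2.0, 2.0, 2.0])      # desired inertia (placeholder)
    D_m = np.diag([8.0, 8.0, 8.0])      # desired damping (placeholder)
    K_m = np.diag([40.0, 40.0, 40.0])   # desired stiffness (placeholder)
    K_e = np.diag([80.0, 80.0, 120.0])  # environment stiffness (Eq. 9.5)

    def impedance_accel(r, rd, rdot, rd_dot, rd_ddot, r_e, f_d):
        """Solve Eq. (9.4) for the commanded end-effector acceleration.

        f is the pseudo-force of Eq. (9.5); before contact f_d is zero, so
        the controller acts purely as a path tracker.
        """
        f = K_e @ (r_e - r)
        return rd_ddot + np.linalg.solve(
            M_m, D_m @ (rd_dot - rdot) + K_m @ (rd - r) - (f_d - f))

    # One control cycle: integrate the commanded acceleration.
    dt = 1.0 / 15.0
    r = np.zeros(3); rdot = np.zeros(3)
    a = impedance_accel(r, rd=np.array([0.01, 0.0, 0.0]), rdot=rdot,
                        rd_dot=np.zeros(3), rd_ddot=np.zeros(3),
                        r_e=np.zeros(3), f_d=np.zeros(3))
    rdot = rdot + a * dt
    r = r + rdot * dt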

9.3.2 Comparison of Results

The final set of capture figures illustrates the capture operation employing the full set of control theory in order to reduce the strains and accelerations imparted onto the target. The first figure, Fig. 9.11, is the tracking information for the target. The information is clean and tells the story of the capture. One of the most notable features of this capture operation is the oscillation in X and Y which occurs. The oscillation occurs as the motion of the arm is initiated and the Kalman filter lags and then proceeds to catch up. The system takes a bit of time to settle as it approaches the target, and once it is settled, the capture occurs rather quickly.


Figure 9.11: High Velocity Impedance Controlled Capture - Position Information

Nearly identical in construction to the previous figure, Fig. 9.12 demonstrates the orientation of the target in relation to the arm. The oscillations occur over the same time period and, once settled, lead to a very quick capture. Figure 9.13 illustrates a great achievement in the pursuit of the autonomous capture of a non-cooperative satellite. The initial contact creates a small spike of strain which measures in at 4 x 10^-5. The interesting part is that, following this initial contact strain, the level is slowly reduced. This is the first tested controller in which the post-capture strain is reduced through compensation by the robotic manipulator. Figure 9.14 shows the acceleration measurements for the impedance controlled capture. In contrast to the previous captures, the accelerations noted in the x direction are higher in magnitude; however, there is a much smoother transition up to that value. In comparison to the previous controller, this simply indicates a smoother transition into decelerating the target.


Figure 9.12: High Velocity Impedance Controlled Capture - Orientation Information

The gyroscopic measurements are seen in Fig. 9.15. Similar to the accelerations, these indicate a higher overall reading with a longer duration of applied rotation.

Figure 9.13: High Velocity Impedance Controlled Capture - Strain Information

-0.005

}.g?l J 1------1------1------1------»— S------1------1 0 5 10 15 20 25 30 35 40 T im e (s ) (a) Acceleration

2 6

-0.5

(b) Velocity

P y P r

(c) Position

Figure 9.14: Capture of a High Velocity Target: IMU Acceleration, Velocity, and Position Readings


Figure 9.15: Capture of a High Velocity Target: IMU Gyro and Angle Readings

10 Conclusions

Summary: This chapter covers the conclusions of the thesis and a future work section that can be expanded upon at a later date.

10.1 Validation

The goal set out at the beginning of the research was to develop a controller in order to effect an autonomous capture operation whilst minimizing the strains and accelerations imparted to the target by the robotic manipulator. The essence of the problem lies in designing the controller and its interaction with the decision-making autonomy sector. In order to validate the exercise, a fully functional robotic manipulator was designed, machined, assembled, programmed, and matched with a quantitative testing target that simulates a zero-gravity space environment. The target is able to traverse the workspace at variable speeds and paths in order to mimic an uncooperative satellite in space. The testing campaign then progressed through simple static capture operations, high-yaw static captures, low velocity dynamic captures, and finally high velocity dynamic captures. Particular focus is paid to the post-capture behaviour of the system, and three control strategies are implemented and analysed, with the results being compared.

10.2 Findings

The results from Chapter 9 represent the culmination of the test scenario, with Table 10.1 illustrating the important numbers; the pseudo-force and impedance controllers show marked improvements over the vision-only post-capture control strategy. Impedance control imparted a slightly higher acceleration to the target; however, the strain behaviour showed an adjustment tendency which reduced the strain level over the course of the deceleration. This indicates the ability to correct for the initial misalignment which causes the spike in strain. Impedance control has been validated as an autonomous control theory able to effect autonomous capture operations whilst reducing the strains and accelerations imparted from the robotic manipulator to the target.

Table 10.1: Comparison of Capture Operations

Control Theory    Maximum Strain    Maximum Acceleration
Vision            6 x 10^-5         0.03
Pseudo-Force      5 x 10^-5         0.01
Impedance         3.8 x 10^-5       0.015

10.3 Caveats

The system functions based largely on the physical design and the mechanical limitations of the selected hardware. Therefore, the control strategy, software programming, and even the measurement metrics were chosen in order to overcome these limitations. Consequently, the exportation or transplantation of this system as a whole would not be recommended without a deep understanding of the method of approach which led to these decisions and how they could be adapted to the new scenario.

10.4 Summary of Contributions

During the development and validation process, several contributions were made to the field of robotics. These are enumerated briefly in this section.

10.4.1 Neural Network Phase Transition Algorithm

The neural network developed in order to determine the transition between the phases of the capture employs a unique combination of error measurements. The use of a neural network allows for a simple set of rules to create a complex behaviour that can be tuned and adjusted. This allows for the tuning of safety parameters and can greatly increase the reliability of the performance.
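A minimal sketch of the idea: a small feedforward network that combines error measurements into a single transition score, with a threshold acting as the tunable safety parameter. The inputs, layer sizes, and weights below are hypothetical placeholders; the thesis network's actual structure, inputs, and training are described in the earlier chapters.

    import numpy as np

    # Hypothetical weights: a trained network would supply these values.
    W1 = np.array([[ 2.0,  1.0, -1.5],
                   [-1.0,  2.5,  0.5],
                   [ 0.5,  0.5, -2.0],
                   [ 1.5, -1.0,  1.0]])   # 3 error inputs -> 4 hidden units
    W2 = np.array([1.0, -1.0, 0.5, 1.5])  # hidden -> single transition score
    b2 = -0.5

    def should_transition(position_err, angle_err, err_stability):
        """Combine error measurements into a phase-transition decision."""
        x = np.array([position_err, angle_err, err_stability])
        h = np.tanh(W1 @ x)
        score = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))   # sigmoid output
        return score > 0.5          # threshold tunes the safety margin

    print(should_transition(0.01, 0.02, 0.9))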

10.4.2 Vision-Based Compliant Capture Control

The developed impedance controller based on the feedback from the pseudo-force vision sensor is a unique development in the field of robotics. The system employs a flexible sensor and minimal knowledge of the target in order to accomplish a fairly complex task. The flexibility of both the sensor and the controller makes the system adaptable to several different tasks, environments, and targets. The validation process provided real world data and an arena in which several testing scenarios, mimicking space, were established and capture was successfully accomplished. 10.4.3 Vision-Based Force Estimation

The pseudo-force seusor based on the visual misalignment of the gripper and the target functioned based on a simple, yet accurate, model of the system. The use of a non- intrusive sensor in order to estimate deflection in a beam, so long as the knowledge of the beam is accurate, provides an additional use for the vision system.

10.4.4 Smooth Stepper Motor Control

The use of stepper motors in the robotic manipulator arose from the need for a compact, precise, rotary actuator. However, in robotics, the application of torque-control is second nature and can be found in most literature. In order to accomplish smooth and controlled operation, a unique speed and acceleration control was developed for the stepper motors which involved both a theoretical component as well as an operational component. This led to the freedom of employing inexpensive, precise, and powerful stepper motors in a situation where they were originally thought to be unsuited.

10.4.5 Digital Dynamic Damping

Due to assembly and mechanical requirements, a flexible shaft coupling was needed be­ tween the main tube and the torso actuator. The digital damping employs a variably sized array which scales dynamically in order to first allow the robotic manipulator to quickly approach the target, and then to stiffen up the response, allowing for a steady capture and reduced chance of oscillations.

10.4.6 Vision System and Scalable Target Design

The target design for the system allows the use of any three points in three-dimensional space to be tracked and, through the use of photogrammetry, allow the extraction of the relative distance and orientation of the target to the camera plane. The target employs four points in order to ensure tracking and to quantitatively determine the quality of the lock. Furthermore, the shape of the target reduces the chance to obtain an inverted lock, meaning for the mirror image to be assumed to be the correct one.

10.4.7 Kalman Intercept Prediction

When a dynamic capture is attempted at high velocity, prediction becomes critical and sensitive. The use of a PVA modelled Kalman Filter smoothly predicts the target motion and allows for an intercept calculation that allows for a rapid and reliable capture. The use of an eye-in-hand based visual Kalman filter predicting the motion of a target is novel and effective and could be easily expanded in order to further improve upon the accuracy.

160 10.5 Future Work

Future work for this research can take place on several fronts. In terms of control the­ ory development, the adaptation for vibration control and compensation would greatly enhance the flexibility of the system and expand the applications for the robotic ma­ nipulator. Some large space structures are designed with minimal stiffness in order to achieve the lightest possible mass. Therefore, the beams or other logical grasping points will likely have very low stiffness, resulting in vibrations during the capture. One of the highest yield addition to be added to the current research would be the development of a custom lightweight 3D force feedback sensor. The visual pseudo-force- feedback improved the capture operation greatly as seen with the results in Table 10.1 and it stands to reason that a true force-feedback would help even more. Several design ideas exist including strain gage networks, piezo-electric sensors, and deflection detection employing systems like range finders and lasers. The important part of this addition is to ensure that the maximum number degrees-of-freedom can be sensed both translational and rotational. This will provide a higher degree of sensitivity to the system and make it capable of responding to a higher degree of stimuli. In the same vein as the improved force sensing is the addition of a second camera to the photogrammetry system. The fusing of two cameras into a single photogrammetry system enhances the system’s accuracy and precision particularly in the off-camera-sensor-plane. The current set-up proved more than adequate in order to accomplish the capture. Should a smaller target be required, dual-camera-photogrammetry would almost definitely be required in order to accomplish the capture. The current autonomy model, perforce, is a simplified neural network which transi­ tions between the control types in order to optimize the approach, align, capture, and deceleration portions of the operation. Future research could expand the autonomy pro­ cess and transitions to factor in additional metrics in the decision making process. The current simplified kinematics model employed in the first layer of the neural network could be developed into more detail. Additionally, the inclusion of dynamics would provide a more intuitive response to the decision making. The input shaping should be independently validated and characterized by exterior sensors. The current design employs interior sensors which are unable to measure the sway of the robotic manipulator as it comes to rest. The use of external sensors would quantify the effectiveness of the input shaping and its ability to function in a dynamic environment. By far the richest research potential delved into during this research is the controller. The current design of the compliant controller does not employ any foreknowledge of the target or estimation of physical properties post-capture. Pose-estimation is a currently active field of study of particular interest to space missions such as on-orbit servicing. Combining pose-estimation and foreknowledge of the target, from model or experimental manipulation with force-feedback, could provide crucial information such as moments of inertia, center of mass, and system damping and stiffness. The information could then

161 be used to improve the interactive model between the robotic arm and the target, as well as the overall combined system, greatly improving the performance of the controller.

162 A Appendix - Mechanical Design

Summary: This chapter summarizes the mechanical design of the robotic manipulator including the selection of the actuators and linkages. Additionally, the kinematic frame­ work is presented along with the notational convention that will be employed throughout the thesis. Finally, each joint, link, and actuator is outlined in order to provide an un­ derstanding of the functionality of the system.

A.l Introduction

The mechanical design of the robotic manipulator defines its agility and its abilities. The combination and order of the actuators determines the configuration and type of manipulator constructed. One of the most common types is the bio-mimetic Piper configuration that consists of 5 (five) rotary actuator and 1 (one) linear actuator that mimics a human torso and arm.

A. 2 Actuator

Several different type of actuators exist, however their motion can be classified into two camps: linear and rotary. Linear actuators are typically hydraulic in nature and have a limited range of motion. Rotary actuators are highly popular in the mechatronic field and come in several different types such as electro-magnetic motors, internal combustion engines, and piezo-electric vibrational motors are just some examples. The robotic manipulator was designed to employ three high-torque bipolar step­ per motors and three high-torque servo motors, providing the needed combination of accuracy and weight. The selected stepper motors are from CNC Elektonik Lan- genfeld and are paired with a 25:1 ratio planetary gearbox from Alpha. Combined with the ability of stepping the motor at l/16th of a step, this provides a high level of accuracy for the positioning of the motors. 163 (a) Stepper Motor (b) Gearbox (c) Servo Motor

Figure A.l: Various Actuators and Modifiers

Table A.l: Stepper Motor Specifications Feature Amount Units Holding Torque 3 N-m Moving Torque 1.90 N-m Phase Current 3 A Coil Voltage 30 V Step Angle 1.8 Degrees Rotor Inertia 840 g-cm2

Servo motors work on a different principle and employ a pulse that relates to a position. This provides the advantage that there is a feedback control in order to maintain the position of the motor, but also limits the range in which the motor can operate. Due to the kinematic layout of the robotic wrist, this limitation was accommodated without any sacrifice in dexterity of the arm.

A. 3 Kinematics

The full robotic system can be seen in Fig. A.2 and illustrates the six frames of reference that are employed in order to determine the kinematics of the system. The Denavit-Hartenberg convention was employed to determine the transforms and are characterized by the following criteria.

164 Figure A.2: Robotic Manipulator

Table A.2: Identification Card for Robotic Manipulator

Joint CXi—l «i-l di Si 0 -» 1 Odeg 0 0 01 1 -> 2 +90 deg 0 0 02 2 -* 3 Odeg Li 0 0‘s 3->4 —90 deg 0 l 2 04 4 —> 5 +90 deg 0 0 05

A.4 Robotic Manipulator

Employing the D-H convention, an identification card which is unique to each robotic system can be created. The card for the robotic manipulator is as seen in Table A.2. This is a compact representation of the robotic manipulator and will be em­ ployed as a reference. Joint 6 is not represented as it does not affect the kinematic of the system, it simply controls the actuation of the gripper.

165 A.4.1 Torso Column

The torso column is the anchor to the base of the robotic manipulator and houses a few important components including:

• Torso stepper motor

• Paired bearings to counter the rotational moments

• Physical damper system • Cabling and routing

Encoder Gearbox Torque Distribution Hub

S leeve Bearing Motor *k Shoulder Alignment Housing Mount Adapter

Bearing

Main Shaft

Central Tube

Bearing

Flexible Coupling

- Mount

’ Motor

Encoder

Figure A.3: Central Tube

166 The design of the column was crucial as it provides the stability for the entire system. It required enough strength to resist bending moments when the arm was fully extended and in dynamic motion as well as providing an anchor point for the motor to connect to and actuate the robotic manipulator. In order to minimize the strain on the motor’s rotor and to ensure shaft align­ ment from motor to robot, a matched pair of bearings were compression fit into the column. These supported the entire weight of the robotic manipulator and due to their spacing, provided the counter rotational moment and assured shaft alignment. The set-up is illustrated in Fig. A.3.

A.4.2 Joint Design

The joint design on the robotic manipulator took advantage of the efficient design of both the stepper motors and the gearboxes. Employing a composite material sleeve bearing that fit over the gearbox, the stepper motor was fitted within the square tubing and a custom machined alignment bracket was used to anchor both the motor and the gearbox to the post. The protruding gearbox was then used as a hub for the following post. This compact design allowed the entirety of the joint system to be contained within the robotic manipulator’s frame. Additional features were added to the joint design including a shaft encoder and a electromagnetic brake system. The shaft encoders are used in order to verify the motion of the joint and work by employing a break-beam sensor and a custom machined encoder wheel. The wheel’s properties were matched to the stepper motors specifications and are employed as a verification tool. The electromagnetic brakes are a safety feature that prevent the collapse of the arm when power is lost to either the stepper motors or the electronics. The brakes are wired into both power supplies. If either lose power, the electromagnet which is holding back a spring loaded friction plate will release and the brake will be applied.

A.4.3 Wrist

The wrist design was based on the continuation of a biomimet.ic robotic arm design and mimics the human wrist providing both a rotation and angle joint. The wrist was designed and constructed employing servo motors due to their light weight and compact form factor. The three motors were mounted using lightweight aluminium brackets and injection molded gears to transfer the power to the gripper. The breakdown of the wrist and the axes of actuation can be seen in Fig. A.5. It consists of two angular actuators and a converted angular to linear actuation. The two angular actuators intersect at the wrist angle and is, in fact, the point of

167 Torque Gear Box I Distribution / Hub Sleeve Bearing

Figure A.4: Shoulder and Elbow Joint

W rist, RtSfation

Figure A.5: Wrist Assembly control for the motion of the arm as it is a consistent transformed point from the elbow joint. The grasper transforms the rotational motion of the servo motor into a. linear motion and can be seen in Fig. A.6. Figure A.6: Grasper

A .4.4 Base

The base of the system houses all of the on-board electronics and provides a heavy anchor in order to allow rapid dynamic motion of the system. A wide form factor was selected in order to ensure a footprint capable of stable motion for the robotic manipulator. The base is a recycled water reservoir from a testing laboratory, made out of AL6061-T6 grade aluminium. The lid was machined in order to accommodate the central tube, an access panel in order to inspect and modify the equipments, a ventilation system in order to keep the stepper motor drivers cooled, and access points for communication, power, and electronics. Internally mounted is three electronic boards used to develop and test. There are three custom stepper motor drivers with cooling fans mounted to the lid in order to ensure that the temperature remains at an acceptable level. There is a USB hub mounted which connects the various components and a wireless XBEE network receiver which receives the information from the target mounted IMU. Mounted externally to the base is the high power supply relay which connects the high voltage power supply to the high voltage stepper motor drivers through a large gage copper wire.

A.4.5 Construction &: Assembly

The assembly of the manipulator was accomplished in three separate sections, each designated by the link which is anchored. Each of these links represent a human analogue motion, those being the torso, the shoulder, and the elbow respectively. The first link seen in Fig. A.8 represents the torso of the robotic manipulator. Being the link between the base and the remainder of the arm, it will transfer the highest level of torque, and therefore requires the highest stiffness. During assembly the torso motor and mount are inserted are inserted through

169 Shoulder Housing Alignment Mount

Bearing Central Tube

Figure A.7: Base Assembly the bottom of the central tube and fastened into place. The main shaft and the pair of bearings are then inserted from the other end along with a flexible shaft coupling. A small hole is drilled on the central tube to tighten the set screw on the shaft coupling, joining the motor to the main shaft. Figures A.3 and A.8 both illustrate the assembly. The second link, analogous with a humans’ biceps, is a simple link. It houses a mounting point to connect to the first link and an identical motor and gear box set-up at the other end. The link can be seen in Fig. A.9. This section of the manipulator can be attached to the first link with the simple addition of a torque distribution unit which fastens the entire section to the gearbox. The final link, the forearm, is a similar construct to the second link, however, opposite of its mounting side, the other side houses a bracket for a vision system and a wrist system. The wrist and vision brackets are mounted to the link using four screws and tapped aluminium mounting bracket (name the wrist assembly mount) seen in Fig. A. 10. Alignment Mount Motor Sleeve Bearing Encoder

Shoulder Gearbox Tubing Adapter

Main Shaft Bearing

Flexible Shaft Coupling Mount

Motor

Figure A.8: Link 1

171 Alignment Mount

Gearbox Encoder

Motor Sleeve Bearing

Lightening Holes

Torque Distribution Mount

Mounted to Link 1

Figure A.9: Link 2

172 Torque Distribution Mount

Lightening Holes

W ebcam

Webcam Mount

Wrist A ssem bly Mount

Aluminum Brackets > A Servo Motors Grasper

Figure A. 10: Link 3

173 B Appendix - Electrical Design

Summary: This chapter covers the electrical design of the system. This includes the power requirements, actuator circuitry for stepper and servo motors, as well as the main on-board computer layout and wiring. The actuators both function using widely different approaches, each is analysed, presented, and explained in order to illustrate the abilities and limitations of the system.

r B.l Introduction

The electrical design for the robotic manipulator is composed of two independent systems, one controlling the stepper motors and one controlling the servo motors. The systems incorporate power, micro-controllers, and motor drivers. Power is derived from three separate supplies and one battery pack; a high voltage BK precision power supply, a low voltage BK precision power supply, a salvaged computer power supply, and a mobile battery pack. These systems were necessary in order to provide the variety of required voltages in order to power the multitude of electronics for the system listed in Table B.l. The communication takes place between three systems and several smaller com­ ponents. The three main system are the robotic manipulator, the desktop computer, and servo motor controller. Additional components which are also involved include the web-cam, the strain gage sensor, the IMU, and target motor. Communication and power are the reason behind the necessary wiring harness that permeates the manipulator.

B.2 Stepper Motor System

The stepper motor system schematic can be seen in Fig. B.l. From the main computer the communication is routed through a USB hub in order to branch out to all components. The communication between the desktop and on-board computing system occurs through a USB-to-RS232 converter. The signal is then

174 Table B.l: Power Requirement Table Component Group Voltage Amp Requirement Requirement Stepper Motor Driver RM 12 35mA Microcontroller RM 5 0.4mA Webcam Vision 12 0.5A Servo Motor Driver RM 5 31mA Power Regulator 7805T RM 5 5mA Power Regulator MC33269-S08 RM 5 0.5mA Power Regulator FM93C66N RM 5 0.5mA MAX232 Comm 5 0.5rnA Stepper Motors RM 40 5.5A Servo Motors RM 8 0.6A Shaft Encoders Control 5 0.5mA Electromagnetic Brakes Safety 24 1A XBEE Target 5 50mA National Instrument DAQ Target 12 2A IMU Target 3.3 6.5mA Target Motor Target 35 3.0 converted through a MAX232 which modifies the input levels to TTL input levels. This is needed in order to transform the signal to the same level as that needed for the micro-controller and remaining electronics to detect. The micro-controller then takes over and connects to the drivers. Port B and Port C are employed for direction and step initiation. Port C is wired to the directional ports for each of the three stepper motor drivers. Port B is connected to the step port for each driver. Combining different port settings for port C and triggering a voltage drop on port B pins will provide directional control of the stepper motors. The step size is hard-wired with the motor driver and can range from a full step to a ^ step. The stepper motor drivers are standard dual-coil H-bridge structures allowing for forward and backward stepping. The H-bridge schematic can be seen in Fig. B.2 which includes the PIC18F4550 that controls the motor through Port A and the power is fed through Port C. The power port is employed due to the high voltage required to move and hold the bipolar stepper motors. Stepper motors consist of a permanent magnet rotating shaft, called the rotor, and electromagnets on the stationary portion that surrounds the motor, called the stator. In order to move the rotor, the stator magnets are polarized sequentially as can be seen in Fig. B.3. This motion provides a very accurate step motion which lends the stepper motor its accuracy and torque strength. 175 POWERW 1C3

J.C 1 1 C4

ET- i ------GND

~ T 5 So

i^D

_ 2 i

JCRSOCOIL_A

Figure B.l: Schematic Layout of the Stepper Motor Driver

This example illustrates a motor with a 90 degree resolution. In reality this would not be a very practical motor for most applications. The average stepper resolution, the degree per pulse, is much higher than this. For example, the stepper motors employed in the robotic manipulator, their resolution is on the order of 1.8 degrees per step, thereby requiring 200 pulses to complete a full 360 degree rotation. You can increase the resolution of some motors by a process known as half­ stepping. Instead of switching the next electromagnet in the rotation on one at a time, with half-stepping you turn on both electromagnets, causing an equal at­ traction between, thereby doubling the resolution. As you can see in Fig. B.4. In position II, both the top and the right electromagnets are active, causing the rotor to position itself between the two active poles. This process can be repeated for the entire rotation. Several types of stepper motors exist. 4-wire stepper motors contain only two electromagnets, however operation is more complicated than those with three ox- four magnets, because the driving circuit must be able to reverse the current after each step. For our purposes, we will be using a 6-wire stepper motor.

176 Figure B.2: Schematic Layout of the H-Bridge Driver

tHBI on off off off

off i r x p f t -s t h i p x p f p v , i ip r j'si ia*. m\ [j:x..jrT-i

> 1 & 5

Figure B.3: Stepper Motor: How a Stepper Functions

Unlike our example motor which rotated 90 degrees per step, real-world motors employ a series of mini-poles on the stator and rotor to increase resolution. Al­ though this may seem to add more complexity to the process of driving the motors, the operation is identical to the simple 90 degree motor we used in our example. A multi-pole motor can be seen in Fig. B.5. In position 1, the north pole of the rotor’s permanent magnet is aligned with the south pole of the stator’s electromag­ net. Note that multiple positions are aligned at once. In position 2, the upper electromagnet is deactivated and the next one to its immediate left is activated.

177 Figure B.4: Stepper Motor: Half Stepping causing the rotor to rotate a precise amount of degrees. In this example, after eight steps, the sequence repeats.

s

s

s

Figure B.5: Stepper Motor: Multi-pole

In the case of a 6 wire stepper motor, the internal workings are slightly more complicated. If we follow Fig. B.6, the electrical equivalent of the stepper motor, we can see that 3 wires go to each half of the coils, and that the coil windings are connected in pairs. This is the definition of a four-phase stepper motor. The phasing of the motor coils enables activation with high holding torques and moving torques by pairing of the electromagnets. By pairing the stepper motors with low backlash gearboxes further increases the torques and the precision motion of the motors at the cost of speed. TTH Yellow- Green Blue

White Green Red

Figure B.6: Stepper Motor: 6 wire

B.3 Servo Motor System

The servo motor system employs a different type of driver system. Rather than an actuation system, a servo motor functions with an internal encoder system that depends on a timed pulse correlating to a specific position. The pulses range from 0.5 milliseconds to 2.5 milliseconds and correlate to —90° to +90°. The schematic for the servo motor system can be seen in Fig. B.7 and similar to the stepper motor system requires a TTL driver/receiver and power regulation devices. The pulse timing is controlled through four separate ports which allows for the precise control of up to 32 servo motors. Power is provided through the low voltage BK precision power supply and powers both the electronics and the motors. The communication is conducted through a USB-to-RS232 converter and is operated sequentially. Servos function differently than stepper motors which operate based on a step signal. Servos are controlled by sending them a pulse of variable width. The con­ trol wire is used to send this pulse. The parameters for this pulse are that it has a minimum pulse, a maximum pulse, and a repetition rate. Given the rotation constraints of the servo, neutral is defined to be the position where the servo has exactly the same amount of potential rotation in the clockwise direction as it does in the counter clockwise direction (typically associated with a pulse timing of 1.5 mil­ lisecond). It is important to note that different servos will have different constraints on their rotation but they all have a neutral position. The angle is determined by the duration of a pulse that is applied to the control wire. This is called Pulse Width Modulation (PWM). The servo expects to see a pulse every 20 ms. The length of the pulse will determine how far the motor turns.

179 FM93C66N VSS _P0WERIN2 IC2 SK 7805T DO + Dl OS VI + VO py £ VCC GND IC1 Hf GND 14 28 r 13 27 PD0 PC4 IC4 C1 12 - 0 X C4 PD1 PC3 26 _£e2- C1 + 25 u r PD2 PC2 v + 10 PD3 PCI 24 C1- b PD4 PC0 23 V- GND VCC GND C2+ I 21 f-ei- GND +AREF + . 3^1 PB6 AVCC 20 C2- 19 PB7 PB5 14 PD5 PB4 18 11 T1IN T 10U T PD6 PB3 17 10 T2IN T 20U T 16 13 PD7 PB2 12 RIOUT R1IN PBQ PB1 15 R20UT R2IN jSE RVO_ROW_D MAX232 £AUD0 MD £ERVO_ROW_C _£AUD1 £ERVO_ROW_B .S ERVO_ROW_A

Figure B.7: Schematic Layout for the Servo Motor Driver

h - Period 20 ms —H

h— Pulse Width 1 ms (min.) - 2 ms (max.)

Figure B.8: Servo Pulse: Example of a Signal to a Servo Motor

For example, a 1.5 ms pulse will make the motor turn to the neutral position. When these servos are commanded to move they will move to the position and hold that position. If an external force pushes against the servo while the servo is holding a position, the servo will resist from moving out of that position. The maximum amount of force the servo can exert is the torque rating of the servo. Servos will not hold their position forever though, the position pulse must be repeated to instruct the servo to stay in position. When a pulse is sent to a servo that is less that 1.5 ms the servo rotates to a position and holds its output shaft some number of degrees counter-clockwise from the neutral point. When the pulse is wider than 1.5 ms the opposite occurs

180 (see Fig. B.9). The minimal width and the maximum width of pulse that will command the servo to turn to a valid position axe functions of each servo. Different brands, and even different servos of the same brand, will have different maximum and minimums.

Minimum Puise [~| |_____ | j |_____ $

^ "Pulse Width 1 ms 90

Neutral Position | [ ____

'""Puise Width 1.5 m s

Maximum PulseJ [

^ Pulse Width 2 ms

Figure B.9: Servo Pulse: Example of Signals

181 C Appendix - Hardware

Summary: This section enumerates the hardware selection in detail providing a look at each component. In order to validate the control theory developed in earlier chapters, a fully function robotic manipulator was constructed and possesses limitations. Therefore it is important to understand the criteria of selection and the implementation.

C.l Electronic Hardware

The electrical hardware components list begins with the micro-controller used to run the on-board code for the robotic manipulator. The choice for the micro­ controller was based on the ease of integration and development. Atmel Atrnega chips provide a great range of options and support which led to the choice of the Atmega324P chip seen in Fig. C.l (a).

(a) ATMEGA324P (b) Target Motor Driver System

Figure C.l: Atmega Microchip and Target Driver System

In order to control the target motor, a motor driver system was designed and implemented. The system required a modified computer power supply to provide power to both the motor and the electronics as well as a micro-controller, based on an H-bridge, controls the motor. This can be seen in Fig. C.l(b). 182 The motor driver controller for the target stepper motor is a chip system based on the Little-Step-U controller. The controller is a general six-wire stepper motor controller for high amp motors and can be seen in Fig. C.2(a). The controller comes with custom software allowing several types of control strategies and commands.

(a) Little Step U (b) SSC-32

Figure C.2: Little Step U and SSC-32

The final controller for the system is the servo motor control board (SSC-32). It is designed to operate up to 32 individual servo motors by supplying both control signals and power to the entire system. The interface is a DB-9 which is converted from a USB hub in order to match to the desktop computer. The servo control board can be seen in Fig. C.2(b). The majority of the components in the hardware list are controlled through a DB-9 interface allowing for pin-out control and prototyping. USB to DB-9 convert­ ers provide more than a simple mechanical conversion, it alters the electrical signal and the voltage levels. The selected USB to DB-9 (also known as RS232) converter can be seen in Fig. C.3(a). In order to accommodate the large number of communication based hardware devices, a USB hub was integrated into the system. Also due to the requirements, separate power to the hub was required. The selected device can be seen in Fig. C.3(b). The AVRISP is a programmer specifically designed to operate between AVR Studio and the Atmega 324P micro-controller. The AVRISP additionally provides a secondary communication link to the micro-controller allowing an interrupt of the 183 (a) USB to RS232 Converter (b) DLink USB Hub

Figure C.3: USB TO RS232 Converter and USB Hub execution of the program, which is useful when debugging software and communi­ cation issues. The AVRISP can be seen in Fig. C.4(a).

(a) Programmer (b) Stepper Driver

Figure C.4: Programmer and Stepper Driver

The stepper motor driver was custom designed for the bipolar stepper motors employed in the construction of the robotic manipulator. Requiring high voltage (40 volts) as well as high amperes (10 amps), a stepper motor driver manufacturer 184 was contacted and contracted to create a custom driver board. The driver functions purely based on signal inputs, meaning that there is no command or communication structure implemented in software. The driver is controlled through voltage pins on the board for both direction and step signals. Heat dissipation became an apparent issue early on and required a two-pronged approach in order to mediate completely. Large heat-sinks were installed and paired with computer fans providing adequate thermal control to maintain the boards well within operational temperatures. The drivers can be seen in Fig. C.4(b). The crux of the sensing system is the web-cam and it has evolved during the course of the development of this project. The original web-cam was a software- enhanced 1 megapixel (not a true megapixel) camera with fixed focus and mounting. The mechanical properties of the chip became the limiting factor for the precision of the system. We found that using the 1 megapixel software-enhanced chip did not provide the necessary resolution for accurate vision based tracking. A decision was made to upgrade to a higher resolution pan-tracking web-cam with autofocus. The camera can be seen in Fig. C.5.

Figure C.5: Webcam

C.2 Power Hardware

Power provides life in robotics, however mismanagement can cause serious dangers and death. Therefore it is necessary to ensure that proper power is supplied in a reliable and safe manner. Two main types of power supplies were needed in order to supply the necessary wattage for the robotic manipulator system. The first power supply was the high voltage power supply employed for the stepper motors. It provides an adjustable DC voltage of up to 32 volts and 30 amps. It can be seen in Fig. C.6(a). The lower voltage power supply is regulated and employed to power the elec­ tronics on the robotic manipulator. The BK Precision DC power supply provides 185 (a) High Voltage Power Supply (b) Low Voltage Power Supply

Figure C.6: Low and High Voltage Power Supplies a 12 volts, a 5 volts, and a variable output which are connected to the micro­ controller, the stepper drivers, the servo driver, the servo motors, and the XBEE wireless receiver. The power supply can be seen in Fig. C.6(b). The final power source needed to be separate from the system and power the target motor and electronics as well as the electromagnetic brakes. In order to save costs, two power supplies were salvaged from older, unused computers. The first power supply was connected to the electromagnetic brakes and is wired in parallel with the high voltage power supply. The second supply is connected to the target motor and electronics. An example computer power supply can be seen in Fig. C.7(a). In order to satisfy safety concerns for both environmental and human factors, an emergency stop with a double pole single throw was implemented. The brake serves to sever both the high-voltage and electromagnetic brakes power supply effectively freezing the motion of the robotic arm and keeping the electronics powered up in order to enable troubleshooting and extracting information. The e-stop can be seen in Fig. C.7(b).

C.3 Physical Hardware

The physical hardware enables the functioning of the remainder of the robotic sys­ tem. A component of any complex system is the requisite wiring harness, connecting 186 (a) Computer Power Supply (b) Emergency Stop - Double Pole Single Throw

Figure C.7: Computer Power Supply and Emergency Stop the multitude of systems to the electrical brain. In order to facilitate assembly and debugging, connectors rapidly become a necessity. Chosen for both their robust­ ness and ease of integration, DB9 connectors have become prolific in amateur and personal electronics. These are illustrated in Fig. C.8(a).

(a) DB9 Connectors (b) Flexible Shaft Coupling

Figure C.8: DB9 Connector and Flexible Shaft Coupling

In order to provide the stability from the base of the robotic system contin­ uously through to the end-effector, several design decisions required sacrifice in simplicity or accessibility. From the torso motor to the first link, an enclosed tube was employed to house a matched pair of bearings. The motor was mounted from one end of the tube and the torso shaft was inserted from the other. In order to connect one to the other, a flexible shaft coupling was used. This allowed for minor 187 shaft misalignments and only required the tightening of a single set screw. This component can be seen in Fig. C.8(b). Stepper motors provide an advantage in accuracy and torque, both of which provide necessary properties to a robotic manipulator. However, stepper motors are unable to apply any torques when there is no power applied to either the electronics or the coils. This means that during a power failure, or an accidental disconnect from either power supplies would bring the arm crashing down. In order to combat this possibility, electromagnetic brakes were installed. These function by using a spring loaded friction plate that is disabled when power is applied to an electromagnet that compresses the spring-plate. The brake can be seen in Fig. C.9(a).

(a) Electromagnetic Brakes (b) Rotary Damper

Figure C.9: Electromagnetic Brakes and Rotary Damper

The use of a flexible shaft coupling created complexities in the control of the torso joint for the robotic manipulator. Several strategies exist in order to combat the undesired oscillating motion, some of them involve control based solutions and others are mechanical solutions. In order to compare and contrast both types, a mechanical solution was sourced and installed involving a viscous rotary damper seen in Fig. C.9(b).

C.4 Target Hardware

Similar to the previous section, the target hardware enables motion and sensing. The first section of target hardware are the strain gages which enables the sensing of deflection in the grasping bar. Strain gages contained calibrated metallic foil patterns which, when deflected, vary in resistance. Strain gage can be seen in Fig. C.10(a). 188 •X (a) Strain Gage (b) Data Acquisition Unit from National In­ struments

Figure C.10: Strain Gage and Data Acquisition Unit

In order to accurately and rapidly read the strain gages, a special data acquisi­ tion unit is required. Simply stated, this piece of equipment, combined with custom software, can calibrate and read several strain gages at a high rate. This can be seen in Fig. C. 10(b). When high speed in communication is not a crucial proponent of the sensor and a physical wiring link is unlikely or complicated, wireless communication is a great solution. XBEE wireless communication devices offer a great simple and easy solution. XBEE are a modular product that function on radio frequency in a low power and small footprint package. Each module can be programmed as a receiver or transmitter with multiple channels, unique identification numbers, and with a range from 40 meters up to 24 km depending 011 the band. A standard XBEE module can be seen in Fig. C.l 1(a). The final piece of hardware for the project is an inertial measurement unit (IMU). IMUs are used to measure accelerations and rotations. In specifics, the chosen IMU is the IMU 6DOF Razor from Sparkfun electronics. The Razor makes use of an LPR530AL (for pitch and roll) and LY530ALH (for yaw) gyros, as well as the popular ADXL335 triple-axis accelerometer in order to provide six degrees of sensing. Power regulation is performed off-board and requires clean 3.3 VDC. The selected IMU can be seen in Fig. C.11(b). If the hardware components constitute the body of the system, the software is the brain and will be covered in the next chapter.

189 (a) XBEE Wireless Communication Device (b) Inertia Measurement Unit

Figure C.ll: XBEE Wireless and IMU

190 D Appendix - System Dynamics Matrices

M u M \2 m 13 M(d) M 2 1 M 22 M 2 3 (D.l) M 31 M 3 2 M 33

M11 0.6741 - 0.1841C2 - 0.05854s2c2 + 0.02521s2c|c2 + 0.4062s2c3c2 + 0.2033c^ - 0A0Q544 + 0.4065s2c3s3c2 - 0.01260c3,s3 + 0.02521c3.s3c| 0.01414s2s3c2

+ 0 .4 0 6 2 s 3 c| + 0.01414 c3c2 ( D .2 )

M 12 = 0.00569s2 - 0.002399.s3s, + 0.001593c3s2 - 0.1224 c2 4- 0.002399 c3 c2 + 0 .0 0 1 5 9 3 s 3 c2 ( D .3 )

Afi3 = 0.001593c3,s2 - 0 .0 0 2 3 9 9 s 3 ,s2 + 0.001593s3c2 + 0.002399c3c2 (D.4)

M21 = 0.000106s3c2 + 0.000106c3s2 + 0 .0 4 5 1 8 . s3 s 2 + 0.437938,s3(0.002214,s3s2 - 0.002214c3c2)

+ 0 .4 3 7 9 3 8 c3 (0.002214c3.s2 + 0.002214s3c2) - 0.05118s2 - 0 .1 2 9 9 8 c2 - 0.04518c3c2 (D.5) M22 = 0.43794c3(0.016145 + 0.59c3) + 0.20312s3 + 0.00707c3 + 0.8015 + 0.43794.s3(0.59s3 + 0.4638) (D.6)

M 23 = 0 .2 0 3 1 s 3 + 0.007070c3 + 0.1604 (D.7)

M31 = —0.04518c3c2 + 0.00010583c3s2 + 0.00010583s3c2 + 0.04518s3s2 (D.8) M:i2 = 0.2031 s3 + 0.007070c3 + 0.1604 (D.9) M33 = 0.1604 (D.10)

' C n C\2 Ciz c { e ,9 ) = C 21 G 22 C 23 (D .ll) Czi C 3 2 C 33

C11 = —5.9780e(—14)c3s2 — 5.9780e(—14)s3c2 — 4.6202e(—15 )c 3 c2 + 4.6202e(—15)s3s2

- 1.0e(-10).s2 c| + 1.0e(-10)s2 + 8.544e(-14)c2 (D.12)

C12 = ( ( s 2 (0 .6 1 9 4 c2 - s3(0.04245c3s2 + 0.04245s3c2 - 0.005479s3s2 + 0.005479c3c2 - 0.00707s2) + c3(—0.1608s3s2 + 0.1608c3c2 - 0.007124c3s2 - 0.007124.s3c2 - 0.2031s2) + 1.9373e(-7)s2) + c2(s3(-0.16081s3s2 + 0.1608c3c2 - 0.007124c3s2 - 0.007124s3c2 - 0.2031s2) - 0.2031s3s2 + 0.2031c3c2 - 0.00707c3s2 - 0.00707s3c2 - 0.232sz - 0.04593c2

191 + c3(0.04245c3s2 + 0 .0 4 2 4 5 s 3c2 - 0.005479s3s2 + 0.005479c3c2 - 0.00707s2)))(

+ (s2(—s 3 (0 .0 4 2 S 2 s 3 c2 — 0.0126s3s2 + 0.04282c3s2 + 0.0126c3c2) + c3(0.3637e3c2

- 0 .3 6 3 7 s 3 s 2 - 0 .0 1 2 6 s 3 c2 - 0.0126c3s2)) + c2(-0.4062s3s2 - 0.01414c3s2

4- 0 .4 0 6 2 4 c3 c2 — 0.01414s3c2 + s3(0.3637c3c2 — 0.3637s3s2 — 0.0126s3c2 — 0.0126c3.s2) 4- c3(0.04282,s3c2 - 0.0126s3s2 + 0.04282c3s2 + O.O126c3c2)))(6>)3)(0)1 + s2 (0.1224 + c3(—0.002399 + 0.00097s3) - s3(0.00097c3 + 0.00159)) + c2(0.00472 4- c3(0.00097c3 + 0.001593) + s3(-0.002399 4- 0.00097s3)) (D.13)

Cis = -O.OO4797(0) 3 (0)2 s2 c3 - 0.003185(0)3(0)2s2s3 4- O.OO3185(6>)3(0)2c2c3

- O.OO4797(0) 3 (0)2 c2 s3 - 0.002399c3s2 - 0.001593s3s2 + 0.0015923c3c2 - 0.002399s3c2 (D.14)

C2\ = —(!■(—^3 ^ 2 4~ c3c2))(0.0000793s3s2 — 0.0000793c3c2 4- 0.04321c3s2 4- 0.04321s3c2) 4- (0.4638(—s3s2 4- c 3 c2 ))(-0.3443c3 ,s2 - 0.3443s3 c2 4- 0.01198.s-3s2 - 0.01198c3c2)

4- 0 .2 0 3 1 s 2 s 3 c2 - c2(3.782e(-7)s2 - 0.04593c2) - 0.641,s-2c2 4- s 2 (-1.9373e(-7)s 2

- 0 .0 0 4 7 c2 ) 4- 0.00707s3c2 - (0.0161(e3s2 + s3 c2 ))(-0.344c3 s2 - 0.344s3 c2 4- 0.01198s3s2

- 0.01198c3c2) 4- 0.4379 c3(—(1.3473(c3.s2 4- ,s3c2))(-0.3443c3.s2 - 0.3443,s3c2 4- 0.01198.s3.s2

- 0.01198c3c2) 4" 0.59- s3 c2 4- 0 .5 9 a*2 c3 c2 ) 4- 0.43/'9s 3 (—0.59c 3 c2 4~ 0.59s 2 s3 c2 4- (1.3473(—,s3 s 2 4- c3 c2 ))(-0.3443c3 s2 - 0.3443s3 c2 4- 0.01198.s3.s2 - 0.01198c3c2)) 4- (c3 .s2 4- s3c2)(0.00057.s3,s2 - 0.00057c3c2 4- 0.00157c3,s2 4- 0.00157.s3c2) - 0.20312c3c2 4- 0.00707s2c3c2 (D.15)

C22 = ((—0.1224s2 4- 0 .0 0 2 3 9 9 c3 s 2 4- 0.002399s3c2 4 - 0.43794c3(0.002214.s3s2 - 0.002214c3c2)

4- 0.43794s3(—0.002214c3s2 - 0.002214,s3c2) 4- 0.001593 s 3 ,s2 - 0.001593c3c2 - 0.00472c2)

4- (0 .0 4 7 6 s 3 c2 4- 0.001487s3s2 - 0.001487c3c2 4- O.O4758c3,s2)(0)3)(<9)i 4- le(-12) - 0.2031c3 4- 0.4379.s 3 (-0.59c 3 - 0.01615) 4- 0.00707s3 4- 0.43794c 3 (0.59s 3 4- 0.4638) (D.16)

C23 = (—0 .0 1 4 1 4 s 3 4- 0.40624c3)(6>)3(6»)2 - 0.00707s3 4- le(—1 2 ) 4- 0.2031c3 (D.17)

C31 = (0.4638(—s3s2 4" c3c2 ))(—0.3443c3 s2 — Q.3443s3 c2 4" 0 .0 1 1 9 8 a'3 s 2 — 0.01198c3c2)

- ( 0 .0 1 6 1 4 5 ( c3.s2 4- s3 c2 ))(-0.3443c3 *-2 - 0.3443s3 c2 4-0.01198s3.s2 - 0.01198c3c2)

4" 0.00707s2c3c2 4- 0.00707s3c2 4" (c3 s2 + s 3 c2 ) ( 0 .0 0 0 5 7 s 3 s2 — 0.00057c3c24-

0 .0 0 1 5 7 c3,s2 4- 0.00157,s3c2) - 0.2031x3c| - ((-,s 3 s2 4- c3c2))(0.000079s3s2 - 0.000079c3c2 4 - 0.0432c3.s2 4- 0.04321s3c2) 4- 0.2031s2s3c2 (D.18)

CZ2 = ((0.00159s3s2 - 0.001593c3c2 4- 0.002399c3s2 4- O.OO2399.s3c2)(0)2 4- (0.04758s3e2 + 0.001487s3s2 - 0.001487c3c2 4- 0.04758c3s2)(6>)3)(6l)1 4- le(-12) + 0.00707s3 - 0.2031c3 (D.19)

C33 = le(—1 2 ) (D.2 0 )

Ari N($) = N2 (D.21) N3 192 Ari = 0 (D.22) N2 = 5.0e - ll<7(9.2761e9c3S2 + 3.2290e8c3c2 + 9-2762e9s3c2 - 3.2290e8s3s2 - 2.6939el0s2 + 1.180el0c2) (D.23)

N3 = 0 .4 6 3 8 c3 s 2.9 + 0.01614c3c2.9 + 0.4638s3c2.g — 0.01614, s3s2g (D .2 4 )

193 E Appendix - Dynamics Equations

Summary: This chapter will cover the generation of the dynamic model for the robotic manipulator system. The dynamic model is the basis for all simulation as well as the internal regulator for the controllers and therefore is a critical component of this research.

The dynamics of a system is a mathematical representation of the behaviour with respect to time. The most common form is Newton’s second law (F = via) in a multidimensional space, typically Euclidean space.

E.l Dynamics of an Open-Chain Manipulator

There are two common methods to obtain the dynamics of such a complex system: Lagrangian and Newton-Euler. Both these methods will be employed to generate the dynamic model of the arm, forming the basis for all future control development.

lo

Figure E.l: Open Chained Manipulator

194 K t = J l W (E.l)

Bt = [ Cl Cl ■ ■ • Cl o ... o (E-2) @i+1

On = Ji{9)9 (E.3) where

(E.4)

E.1.1 General Lagrangian Dynamic Equation Development

Recall, from equation ??, in order to employ the Lagrangian equation we first need to determine the kinetic and potential energy equations. The kinetic energy can be written out as the following individual link energy.

t .(M) - j O D ' n M (E.5)

(E-6) where the total kinetic energy is.

r(0) = y> (M ) (E.7) i- 1 1 • e1 M{e)0 (E.8) where

m (6) = J h r n t m (E.9)

195 = I i t MijWMj (E.10) 2 i.j—1

and the following are defined h 0 ): Height of L,, Vi{0) = mj,ghi(Q), and V(8) = fntghdO). We can now define Lagrange’s Equation for a multi-body system as the following.

d 8L 8L . = " (E11) d 8L d ' n dtdOi dt, (E-12)

'y ^ M-ij&j + MijOj (E.13) j=i

89, 29 ^ 89,rift- k 3 j 86, r)(). (E.14) J.K— 1

knowing that

» » * we get the following formula

j = l j.k=l ' ' n n Q y I : Y ] JWyfl, + V ! % » & + (E .17)

j = 1 j,fc=l 4 where

* l / S M , ; d M , t d M kj '. r « “ 2 V 'sftT + " a e ; W > ( 8) we can define the following.

6i- 9j. % d j • Coriolis force, 9~ : Centrifugal force (E.19) 196 n Define : Cij(6,9) — £ T ^ fc% (E.20) j,k=1 1 dM.j + dMik _ dMkj\ (E.21) 2 oek ddj ddi J or in a more familiar format

M(9)9 + C (6,9)9 + N(9) = r (E.22)

E.2 Kinematics of Robotic Manipulator

Having developed the background mathematics, we can now develop the specific equations of kinematics and dynamics for the specific robotic manipulator designed for this research. Recall that the manipulator is a six degree of freedom bio-mimetic Piper configuration and bears the following resemblance, Figure E.3. We begin by defining the Denavit-Hartenberg transforms which are a 4x4 matrix representation of coordinate frame transformations. Figure E.2 defines the Denavit- Hartenberg reference frames and identifies the links and their properties. Table E.l contains the identification card for the robotic manipulator.

Figure E.2: Denavit-Hartenberg Frame Definitions

197 Table E.l: Robotic Manipulator Identification Card Joint ai- 1 O'i— 1 d.-i Bi

0 -4 1 Odeg 0 0 Bx 1 -4- 2 +90 deg 0 0 e2 2-4-3 Odeg Lx 0 03 3-4-4 —90 deg 0 l 2 0i 4-4-5 +90 deg 0 0 05

Figure E.3: Robotic Manipulator

cos(6i )—sin(0i) 0 0 ' COs($2 ) —sm(02) 0 0 ' sin(6i) cos{6i) 0 0 0 0 -1 0 T\ = 0 0 1 0 sin(9i) COs($i) 0 0 0 0 0 1 198 0 0 0 1 ’ cos(03) —sin(93) 0 Lr ' cos(94) —sin(94) 0 0

sin(03) COS(0 3 0 0 0 0 - 1 l ) II - 2 CO 0 0 1 0 sin(94) cos(94) 0 0

0 0 0 1 0 0 0 1

cos(95) - sin(95) 0 0 " " 1 0 0 •Kof fset 0 0 1 0 rpCamera 0 1 0 Voffset 13 — —sin(05) —cos(9$) 0 0 0 0 1 Zoffset 0 0 0 1 0 0 0 1

Taking the full transformation from T0X through to Tf we get the transformation from the base frame to the end-effector. The first step is to determine the conversion between the Cartesian and joint spaces.

7^(1, 1) = ((C!C 2C3 - C iS 2 6' 3 ) c 4 + S !S '4 ) c 5 - (C xC 2 S 3 + Tq(1, 2 ) = -((cic 2c3 - cis2s3)c4 + S4S4) S5 - (cic2.s3 4- cis2c3)c5

T o (1 ) 3 ) = — ( c iC 2 C3 — C iS 2 S 3 ) s 4 + S 1C4 T | (T 4) = — (—C1C2S3 — CiS2c3)L2 + CiC2Lx T |(2, 1) = (O 1C2C3 - S4S2S3)c4 - C!S4)c5 - (siC2S3 + 51S2C3).S5 To (2,2) = -((S 1C2C3 - s1s2s3)c4 - Ci54)s5 - (sic2s3 + .sis2c3)c5 Tq (2 .3) = — (sic2c3 — Sis2s3)s4 — C1C4 To (2,4) — — (—Sic2s3 — s is 2c3)L2 + sic2Li T05(3.1) = (s2c3 + c2s3)c4c5 - (s2s3 - c2c3)s5 To (3; 2) = — (s 2C3 + C2S3)C4S.5 — (s2s3 — C2c3)c5 To (3,3) = — (s 2c3 + c2s3)s4) To (3,4) = (s 2s3 — c2c3)L2 + s2L\ To (4.1) = 0 To (4,2) = 0 To (4.3) = 0 To (4,4) = 1 where s# and c# represent sin(9#) and cos(9#), additionally, Li and L2 represent the lengths of the arm sections. The (1,4), (2.4). and (3,4) cells represent the Cartesian x, y. z coordinates respectively and we can write the following for clarity’s sake. 199 x 0 lt $ 2 i @z) — (C1C2S3 + C\S2cs)L2 + C1 C2 L 1

y(9i,92,93) = (S1C2S3 + SiS2 C3 )L 2 + SXC2 L i

z(6i,82,93) = (S2 S3 - C2 C3 )L 2 + s2 L i (E.23)

Notice that the Cartesian coordinates do not depend on the final two 6s as they only control the orientation of the wrist, not its position in space. Also notice that the 2 coordinate is unaffected by 6\ as it is always normal to the plane of the actuator. Taking the derivative of equation E.23 we obtain the following.

x (6 i, 6 2, 6 3 , 6 1 , &2. 6 3 ) = — S i( 9 i ) ( L 2 C2 S3 + L 2 S2 C.3 + C2 L i ) + Ci( — L 2 S2 ($2 )s 3 +

L 2 C2 C3 {6 3 ) + L 2 C2 (92 )C3 — L 2 S2 S3 (6 3 ) — S2 (92 ) L i)

y(6i, 02 , 6 3 , 9\, 92 , 6 3 ) = L 2 c \( 9 \)c 2 S3 — L 2 s \s 2 (9 2 )s 3 + L 2 sic 2 C3 0 3 )

+ L 2 C i0 i) s 2 C3 + L 2 SiC2 (92 )c3 — L 2 SiS2 S303^ + C i(9i)c2 L i

— siS 2 0 2 )L i

Z01- #2; @3, # Ij $2; ^3 ) = L 2 C2 (9 2 )s3 + L 2 S2 C3(93) + L2S2 (9 2 )C3 + ^ 0 2 8 3 (6 3 )

+ c2 (92 ) L i (E.24) (E.25) or in a more compact matrix format.

x y

Si(L2 C2 S3 + L2 S2 C3 + C-2 L1 ) C i(—L 2s 2S3 + L2C2C3 — s2L i) c l( — L 2S2S3 + L2C2C3) L2C1C2S3 + C1C2L1 + L2C1S2C3 L2s\c2C3 — L2S1S2S3 ~ s\s2Li —L 2S\ S9S3 + L 2S\C2C3 0 L2 C2 S3 + L2 S2 C3 .+ C2 L\ L2C2S3 + L2S2C3 (E.26 )

72 h J where the Jacobian is .7(01,02,03) =

200 A‘i( ^ 2 C 2 S .3 + L 2 S2 C3 + C2 L\) C j ( — L 2 S2 S3 + L 2 C2 C3 — S 2 L\) Ci( — L2S2S3 + L2C2C3) L2 C1 C2 S3 + C1 C2 L1 + L2 C1 S2 C3 L2 S1 C2 C3 - L2 S1 S2 S3 - S3 S2 L 1 — “t L2 S1 C2 C3 0 L2C2S3 + L2S2C3 + C2L1 L2 C2 S3 + L2 S2 C3 (E.27)

Taking the derivative of equation E.24 we can obtain the acceleration of the end-effector.

x = ~-Cl 0 lL 2C2$3 - Ci(9j>L 2-S2C3 - CiO'fc2Li - S161L2C2S3 - Si 'ei L 2S2C3 - sJjiCzLi

+ 2 s i6 iL 2s 202s 'A ~ 26'x ^ 2 C2C3^3 — 2S161L2C262C3 + 2 s 19 iL 2 S 2S363 + 2s i 0iS2&2^1 ~ C1L2C2&2S3 — C1L2S2O2S3 — 2C1L2S2O2C363 — C1L2C2S3O3

+ C1L2C2C3O3 — C1L2S2&2C3 + C1L2C2O2C3 — 2C1L2C2O2S 3 83 — C2L2S2C3O3

C1L2S2S363 — C\C202L i — CxS 2^2-^l (E.28 )

y = —L2S183C2S3 + L2C1O] c2$3 — 2L2C181S2O2S3 + 2L2C1O1C2C383 — L2S1C282S3 — L2S1S282S3—

2L2S1S282C383 — L 2S \<'-2^383 + L2S1C2C383 — L 2s\ f)\s2C3 + L 2 C 3 8 1 S 2 C 3 + 2 L 2 f ’ 1 8 x c 2 0 2 C3 —

2 L 2 c i 8 i S 2 S3d 3 — L2S1.S282C3 + L 2 S 1 C 2 8 2 C 3 — 2L2S1C282S383 — L2S1S2C3O3 — L2S1S2S383

— s\8 \c2L i + c\ 8 \ C2 L 1 — 2 C1 8 1 S2 8 2 L 1 S1 C2 8 2 L 1 SxS26,2^1 (E/29 ) Z = —L2S282S3 + L2C282S3 + 2L2C282C383 — L2S2S383 + L2S2C383 + L2C2&2C3 + L2S282C3 — 2L2S282S383 + L2C2C383 + L2C2S383 — S282L1 + C282L1 (E.30) Presenting the acceleration formula in matrix format we obtain the following.

— L2S1C2S3 — r>2Sl^2C3 — S1C2L1 — C1L2S2S3 + C1L2C2C3 — C1S2L1 —C1L2S2S3 + C1Z.2C2C3 C1L2S2C3 + ClC 2 l a + C1L2C2S3 L2S\C2 C3 — L2S1S2S3 — S1S2L1 L2S1C2C3 — L2S1S2S3 0 L2C2S3 4- L2S2C3 + C'2 L j L2C2S3 + L 2s2C3

0 1 ' 02 + *3 . (-ClC2i l — ClL2C2S3 - C\Ij2 S2Cz)01 (-C1C2L1 - ClL2C2S3 — C \L 2 S2 Cz)0 2 (-ClL2S2C3 - ClL2 C2 S3 )0 3

( — L 2 s i c 2 S i — L2S1S2C3 — s\c2L\)0i —L2S1C.2S3 — L 2sis2C3 — siC 2 L i 9 2 (—L2S1C2S3 — L 2sis2C3)03 0 ( — L2S2S3 + L2C2C3 — S2L i0 ) 2 {—L2S2S3 + L2C2Cz)9;i 01 02 (E.31) 03 J which can be simplified to.

X " 0 > ' ' k '

y - j{0i. 62.93) 8 2 + j(eu02 ,93 J 1 ,e2 te3) k (E.32) z . 8 3 . . k .

201 E.3 Dynamics of Robotic Manipulator

E.3.1 Lagrangian Dynamics

In order to develop the Lagrangian dynamics, we need to establish the global energy equation framework. The previous section will be of great help in this endeavour. The Lagrangian equation, equation ??, requires the aforementioned energy equa­ tions.

E.3.1.1 Potential Energy

Due to the absence of any intentional elastic energy storage capability in the arm, the potential energy is the gravitational potential energy of the arm. It is the summation of the potential energy of the masses of the system and can be written as the following.

n where n = 5 (E.33)

The five masses are a point mass representations of the joints, the gearing, the linkages, the brakes, and the motors and can be seen in figure E.4. The potential energy for the system can be written out as the following.

m 5

S

Figure E.4: Manipulator Mass Distribution

202 u = 'Y^mighi{6) £= 1

= ^m2 L4gsin 02 + m^Ligsin 02 + m,4g sin 02 - ^ cos(&2 + #3 )^ + m5g(Li sin 02

- L2 c o s ( 6>2 + e3)) (E.34)

E.3.1.2 Kinetic Energy

Developing the kinetic energy of the open chain manipulator system is more compli­ cated than the potential energy equations. We have two separate actuated motors and can simplify our equations by separating them into two sub-equations.

n T = ^ Tj, where n = 3 (E.35) i= 1 where we can define the following equations.

r = 5 > i=l = Ti + T2 + T3

Ti = ^m,VgVCi + i w flM (E.36)

The kinetic energy of link 1

Ti = l-l J \ (E.37)

The kinetic energy of link 2

T2 = ^rn2v22c + l-l J l (E.38)

The kinetic energy of link 3

Tz = -m 3t& + - / C3^3 203 r2 L\el + -^-{d2 + 03)2 + L xL2sin{6z){e 2 + 03)02 (E.39) 2™4 + 2^c3^3 where

/ = [ p(r)r2dV (r ) Jv The Kinetic energy term is convoluted and can be found in detail in the appen­ dices.

E.3.1.3 Lagrangian Equation

From the previous two sections we can assemble the Lagrangian equation and derive the dynamic model of the three dimensional robotic manipulator.

T = T(q, q) (E.40) V = V ( q ) (E.41) L(q,q) = T(q,q) - V(q) (E.42) d_dL_ _ OL dt dqi dqt The dynamic model is easily assembled at this point, however the tediousness of tracking the indices and trigonometric functions is best left to the appendices. The final equation is the following.

' M n M l 2 M\z ' '0 i ‘ ' C n C\2 C i3 ‘ ' 01 ‘ ' Ni ' Tl Mai M 22 M 23 02 + C21 C 22 C23 02 + n 2 = T2 _ M 3i M 32 M 33 . . On C32 C 33 . 03 . n 3 . T3 . (E.43)

E.3.2 Iterative Newton-Euler Closed Loop Dynamic Formulation

The Newton-Euler iterative method assumes each link of the manipulator to be a rigid body. Knowing the location of the center of mass and the inertia tensor for each of the links, the its mass distribution is completely characterized. Motion of the links is achieved through acceleration and deceleration/ The forces required for such motion are a function of the acceleration desired and of the mass distribution of the links. Newton’s equation, along with its rotational analog, Euler’s equation, describes how forces, inertias, and accelerations relate. 204 E.3.2.1 Newton’s Equations

A rigid body whose center of mass m is subjected to a force F causing an acceler­ ation of i'c which can be described using Newton’s equation:

F = m vc (E.44)

E.3.2.2 Euler’s Equations

A rigid body rotating with angular velocity u and with angular acceleration lu, possessing an inertia tensor Ic of the body written in frame C whose origin is located at the center of mass, can be related to the moment N acting on the body and causing the motion through the following equation.

N = I qCj -f w x (E.45)

E.3.2.3 Iterative Dynamic Formulation

The iterative dynamic formulation relies on calculating the outward iteration of velocities and accelerations and then proceeding to calculate the inward iterations of the forces and torques. Therefore we need to first calculate the propagation of the velocity from link to link.

"S;= (e.46) Next we need to calculate the propagations of the accelerations from link to link.

W-+,1 = + fl;+'u4 x e M z i i l + ft+iZSf (E.47)

tft} = i?‘+1 (w* x P U + u4 x (ui‘ x /J +1) + (E.48)

x -fit1, + " k i x « ; x ) + i'll (E.4 9 ) Now that the velocities and accelerations has been propagated for all links, we can calculate the linear forces and rotational torques associated with each link.

F tu = (E-5°)

Ki= + “iti x (E.5 1 ) 205 In order to calculate the final torques and forces acting on the joints, we must now iterate the totals from the last link back to the base of manipulator.

f t = Ri+ift+i + Ft (E.52)

«: = «? + +P lc, X Ft + H» X ( f l k . O (E.53) and finally.

n = (rif'Zl (E.54) Note, that in order to account for the gravity effect on the arm, it is simply a case of giving the zeroeth v frame an acceleration of g in the positive Z° frame. Similar to the Lagrangian method, we can now present the dynamic equation of the manipulator in the following format.

r = M(0)@ + V(Q, 0 ) + C ? ( 0 ) (E.55) We can also right the velocity dependant term in two separate parts providing the differentiation of the Coriolis coefficients and the centrifugal coefficients.

r = M(©)0 + 5(0) [00] + C(©)[©2] + 5(0) (E.56) where

[00] = [0,02... ©„f (E.57) and

[02] = [0202 ... 02f (E.58)

206 F Appendix - Detailed Look at Kalman Filter Cycle

Ongoing Kalman Filter is a cycle that predicts and corrects in a loop as follows:

Prediction Correction _ (time update ) (measurement update ) ^ ' The Kalman filter is a set of mathematical equations that provides an efficient recursive solution of the least-squares method. It can provide estimates of the past, present, and also future states, and it can do so when the precise nature of the modelled system is unknown, for example, when the modelling errors of the system model are not well known. This is extremely useful for the visual servoing problem where the exact motion of the target is noisily estimated and the nature of the oscillations are not precisely modelled. The following linear discrete-time system equations are considered

x k+i = F x k + Wk (F.2) yk = H x k + vk (F.3) where F E 97nxn is a dynamic model, x k E 3ftrtXl is the state vector, H £ 97mxn is a measurement output model, and yk E Wnxl is the observation vector. It is necessarily assumed that the noise vectors are stationary, white Gaussian processes with the zero-mean and covariance.

$$ E[w_{k} w_{j}^{T}] = \delta_{kj} Q_{k} \tag{F.4} $$

$$ E[v_{k} v_{j}^{T}] = \delta_{kj} R_{k} \tag{F.5} $$

$$ E[v_{k} w_{j}^{T}] = 0 \quad \forall\, k, j \tag{F.6} $$

The dynamic and measurement models play a role in determining the state transition probability $p(x_{k+1}|x_{k})$ and the measurement likelihood function $p(y_{k}|x_{k})$. Specifically, the state transition density is computed from the state-space model in Eq. F.2 and the additive Gaussian process noise $p(w_{k}) = \mathcal{N}(0, Q_{k})$. Thus, the state transition probability $p(x_{k+1}|x_{k})$ is obtained by

$$ p(x_{k+1}|x_{k}) = \mathcal{N}(x_{k+1};\, F x_{k},\, Q_{k}) \tag{F.7} $$

Similarly, the likelihood function $p(y_{k}|x_{k})$ is determined by the observation model and the measurement noise density $p(v_{k}) = \mathcal{N}(0, R_{k})$:

$$ p(y_{k}|x_{k}) = \mathcal{N}(y_{k};\, H x_{k},\, R_{k}) \tag{F.8} $$

The recursive relationships are given by

$$ p(x_{k}|Y_{k}) = \mathcal{N}(x_{k};\, \hat{x}_{k},\, P_{k}) \tag{F.9} $$

$$ p(x_{k+1}|Y_{k}) = \mathcal{N}(x_{k+1};\, \hat{x}_{k+1}^{-},\, P_{k+1}^{-}) \tag{F.10} $$

$$ p(x_{k+1}|Y_{k+1}) = \mathcal{N}(x_{k+1};\, \hat{x}_{k+1},\, P_{k+1}) \tag{F.11} $$

where $\mathcal{N}(x; m, P)$ denotes a Gaussian density with argument $x$, mean $m$, and covariance $P$, expressed by:

$$ \mathcal{N}(x;\, m,\, P) = |2\pi P|^{-1/2} \exp\!\left[-\tfrac{1}{2}(x - m)^{T} P^{-1} (x - m)\right] \tag{F.12} $$

If it is assumed that all densities remain Gaussian, then the Bayesian recursion can be simplified in terms of only the conditional mean $\hat{x}_{k} = E[x_{k}|Y_{k}]$ and covariance $P_{k} = E[\delta x_{k}\,\delta x_{k}^{T}|Y_{k}]$, where $\delta x_{k} = x_{k} - \hat{x}_{k}$. The optimal components in the recursive estimation are given by

$$ \hat{x}_{k+1}^{-} = E\!\left[F x_{k} + w_{k} \,|\, Y_{k}\right], \qquad \hat{y}_{k+1}^{-} = E\!\left[H x_{k+1} + v_{k+1} \,|\, Y_{k}\right] \tag{F.13} $$

The state prediction in Eq. F.13 can be represented by

$$ \hat{x}_{k+1}^{-} = F \hat{x}_{k} \tag{F.14} $$

where $\hat{x}_{k+1}^{-}$ is the model prediction and $\hat{x}_{k}$ is the currently estimated state. The estimate $\hat{x}_{k+1}$ of the true state $x_{k+1}$ is obtained by combining the observations $y_{k+1}$ and the model prediction $\hat{x}_{k+1}^{-}$:

$$ \hat{x}_{k+1} = \hat{x}_{k+1}^{-} + K_{k+1}\,\nu_{k+1} \tag{F.15} $$

where $\nu_{k+1}$ is the innovation vector, which is equal to the difference between the actual and the predicted observations:

$$ \nu_{k+1} = y_{k+1} - \hat{y}_{k+1}^{-} = y_{k+1} - H \hat{x}_{k+1}^{-} \tag{F.16} $$

The predicted and updated state covariance matrices are computed by

$$ P_{k+1}^{-} = F P_{k} F^{T} + Q_{k}, \qquad P_{k+1} = P_{k+1}^{-} - K_{k+1} P_{k+1}^{\nu\nu} K_{k+1}^{T} \tag{F.17} $$

where the covariance of the innovation vector is given by

$$ P_{k+1}^{\nu\nu} = H P_{k+1}^{-} H^{T} + R_{k+1} \tag{F.18} $$

The Kalman gain $K_{k+1}$ is computed by

$$ K_{k+1} = P_{k+1}^{xy} \left(P_{k+1}^{\nu\nu}\right)^{-1} \tag{F.19} $$

where $P_{k+1}^{xy}$ is the predicted cross-correlation matrix between $x_{k+1}$ and $y_{k+1}$:

$$ P_{k+1}^{xy} = P_{k+1}^{-} H^{T} \tag{F.20} $$

The Bayesian relations of the predictor-corrector structure for the Kalman filtering algorithm can be represented by the block diagram shown in Fig. 3.8, and the specific algorithms are summarized in detail in Table 3.4.
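For concreteness, the following Python sketch implements one pass of the predict/correct cycle of Eqs. F.14-F.20, together with a minimal usage example. The constant-velocity target model, the noise covariances, and the synthetic measurements are illustrative assumptions in the spirit of the visual-servoing tracking problem, not the models identified elsewhere in this thesis.

import numpy as np

def kalman_cycle(x, P, y, F, H, Q, R):
    # One prediction + correction step; returns the updated (x, P).
    # Prediction (time update): Eq. F.14 and the first part of Eq. F.17.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correction (measurement update).
    nu = y - H @ x_pred                   # innovation vector, Eq. F.16
    P_nu = H @ P_pred @ H.T + R           # innovation covariance, Eq. F.18
    P_xy = P_pred @ H.T                   # cross-correlation, Eq. F.20
    K = P_xy @ np.linalg.inv(P_nu)        # Kalman gain, Eq. F.19
    x_new = x_pred + K @ nu               # state update, Eq. F.15
    P_new = P_pred - K @ P_nu @ K.T       # covariance update, Eq. F.17
    return x_new, P_new

# Assumed 1-D constant-velocity target: state = [position, velocity].
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])     # dynamic model F of Eq. F.2
H = np.array([[1.0, 0.0]])                # observation model H of Eq. F.3
Q = 1e-3 * np.eye(2)                      # process noise covariance Q_k
R = np.array([[1e-2]])                    # measurement noise covariance R_k
x, P = np.zeros(2), np.eye(2)
for y in [0.11, 0.19, 0.32, 0.41]:        # synthetic position measurements
    x, P = kalman_cycle(x, P, np.atleast_1d(y), F, H, Q, R)

Because the filter never requires the true target dynamics, only F, H, Q, and R, mismatches between the assumed model and the actual target motion are absorbed into the process noise, which is precisely why the recursion remains useful when the target's oscillations are not precisely modelled.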

Bibliography

[1] A. Ellery, J. Kreisel, and B. Sommer. The case for robotic on-orbit servicing of spacecraft: Spacecraft reliability is a myth. Acta Astronautica, 63:632-648, August 2008.

[2] J. J. Craig. Introduction to Robotics. Pearson Prentice Hall, 2005.

[3] J. R. Wertz and W. J. Larson. Space Mission Analysis and Design. Microcosm Press and Kluwer Academic Publishers, 1999.

[4] L. Y. Neal, C. Zhaopeng, P. Benedikt, et al. Toward understanding the effects of visual- and force-feedback on robotic hand grasping performance for space teleoperation. IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3745-3752, October 2010.

[5] B. Sommer and J. Kreisel. Automation and robotics within the german space program. 2003.

[6] L. Yen-Chen and C. Nikhil. Semi-autonomous teleoperation in task space with redundant slave robot under communication delays. IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 679-684, September 2011.

[7] J.-C. Piedboeuf. On-orbit servicing in Canada: Advanced developments and demonstrations. 2004.

[8] J. B. Snively, M. J. Taylor, and P. Jenniskens. Airborne imaging and nir spectroscopy of the esa atv spacecraft re-entry: Instrument design and preliminary data description. International Journal of Remote Sensing, 32:3019-3027, 2011.

[9] A. Ogilvie, J. Allport, M. Hannah, et al. Autonomous robotic operations for on-orbit satellite servicing. Number 695809, 2008.

[10] F. Didot, P. Putz, J. Dettmann, et al. Jerico: A small and dextrous robot for payload manipulation on the iss-russian segment. 1998.

[11] F.D. Roe, R.T. Howard, and L. Murphy. Automated rendezvous and capture system development and simulation for nasa. 2004.

[12] K. Landzettel, C. Preusche, A. Albu-Schaffer, D. Reintsema, B. Rebele, and G. Hirzinger. Robotic on-orbit servicing - dlr's experience and perspective. IEEE International Conference on Intelligent Robots and Systems, October 2006.

[13] E. Stoll, J. Letschnik, U. Walter, J. Artigas, P. Kremer, C. Preusche, and G. Hirzinger. On-orbit servicing: Exploration and manipulation capabilities of robots in space. IEEE Robotics and Automation Magazine, December 2009.

[14] A. Tatsch, N. Fitz-Coy, and S. Gladun. On-orbit servicing: A brief survey. DARPA Report, 2007.

[15] E. Stoll, J. Letschnik, U. Walter, J. Artigas, P. Kremer, C. Preusche, and G. Hirzinger. On-orbit servicing. IEEE Robotics and Automation Magazine, December 2009.

[16] W. Xu, C. Li, B. Liang, and X. Wang. A modelling and simulation system of space robot for capturing non-cooperative target. Mathematical and Computer Modelling of Dynamical Systems, 15(4):371-393, May 2009.

[17] G. R. Vossoughi and A. Karimzadeh. Impedance control of a two degree-of-freedom planar flexible link manipulator using singular perturbation theory. Robotica, 24:221-228, November 2005.

[18] K. Hashimoto. A review on vision-based control of robot manipulators. Advanced Robotics, 17(10):969-991, 2003.

[19] Y. Edan. Design of an autonomous agricultural robot. Applied Intelligence, 5:45-50, April 1995.

[20] F. Ingrand, S. Lacroix, S. Lemai-Chenevier, and F. Py. Decisional autonomy of planetary rovers. Journal of Field Robotics, 24(7):559-580, June 2007.

[21] R. Rembala and C. Ower. Robotic assembly and maintenance of future space stations based on the iss mission operations experience. Acta Astronautica, 65(7-8):912-920, October-November 2009.

[22] K.H. Zaad and K. Khorasani. Control of non-minimum phase singularly perturbed systems with application to flexible-link manipulators. International Journal of Control, 63(4):679-701, March 1996.

[23] A. Bazaei and M. Moallem. A fundamental limitation on the control of endpoint force in a very flexible single-link arm. Journal of Vibration and Control, 18(8):1207-1220, July 2012.

[24] L. Cheng, T. Qiang, H. Haiyan, et al. Simple formulations of imposing moments and evaluating joint reaction forces for rigid-flexible multibody systems. Nonlinear Dynamics, 69(1-2):127-147, July 2012.

[25] K. M. Habinejad, H. N. Rahimi, and A. Nikoobin. Mathematical modeling and trajectory planning of mobile manipulators with flexible links and joints. Applied Mathematical Modelling, 36(7):3223-3238, July 2012.

[26] P. O. Stalph and M. V. Butz. Learning local linear jacobians for flexible and adaptive robot arm control. Genetic Programming and Evolvable Machines, 13(2):137-157, June 2012.

[27] J. G. Garcia, A. Robertsson, G. Ortega, and R. Johansson. Sensor fusion for compliant robot motion control. IEEE Transactions on Robotics and Automation, 24(2):430-441, April 2008.

[28] M. Vukobratovic. How to control robots interacting with dynamic environment. Journal of Intelligent and Robotic Systems, 19:119-152, August 1997.

[29] J. G. Garcia, A. Robertsson, G. Ortega, and R. Johansson. Sensor fusion for compliant robot motion control. IEEE Transactions on Robotics, 24(2):430-441, April 2008.

[30] A. Lanzon. Compliant Motion Control for Robotic Manipulators. PhD thesis, University of Cambridge, September 1997.

[31] Y. Uno, M. Kawato, and R. Suzuki. Formation and control of optimal trajectory in human multijoint arm movement - minimum torque-change model. Biological Cybernetics, 61:89-101, 1989.

[32] L. Weifeng, W. Chenglin, H. Chongzhao, et al. A bayesian estimation for single target tracking based on state mixture models. Signal Processing, 92(7):1706-1714, July 2012.

[33] L. Pedersen, D. Kortenkamp, D. Wettergreen, and I. Nourbakhsh. A survey of space robotics. 2003.

[34] W. Xu, C. Li, B. Liang, Y. Xu, Y. Liu, and W. Qiang. Target berthing and base reorientation of free-floating space robotic system after capturing. Acta Astronautica, 64:109-126, August 2009.

[35] T. Tsuji and Y. Tanaka. Bio-mimetic impedance control of robotic manipulator for dynamic contact tasks. Robotics and Autonomous Systems, 56:306-316, 2008.

[36] B. Jose, F. Manuel, and A. Rafael. A heterogeneous modular robotic design for fast response to a diversity of tasks. Robotics and Autonomous Systems, 60:522-531, 2012.

[37] Y. Zuo, Y. Wang, X. Liu, S. X. Yang, L. Huang, X. Wu, and Z. Wang. Neural network robust H-infinity tracking control strategy for robot manipulators. Applied Mathematical Modelling, 34:1823-1838, September 2010.

[38] P. Herman. PD controller for manipulator with kinetic energy term. Journal of Intelligent and Robotic Systems, 44:107-121, 2005.

[39] F. Caccavale, C. Natale, B. Siciliano, and L. Villani. Resolved-acceleration control of robot manipulators: A critical review with experiments. Robotica, 16:565-573, November 1998.

[40] N. G. Chalhoub, G. A. Kfoury, and B. A. Bazzi. Design of robust controllers and a nonlinear observer for the control of a single-link flexible robotic manipulator. Journal of Sound and Vibration, 291:437-461, August 2006.

[41] D.C. Slaughter, D.K. Giles, and D. Downey. Autonomous robotic weed control systems: A review. Computers and Electronics in Agriculture, 61:63-98, 2008.

[42] M. Ozdemir and S. K. Ider. Inverse dynamics control of parallel manipulators around singular configurations. Journal of Mechanical Engineering Science, 2007.

[43] G. Flandin, F. Chaumette, and E. Marchand. Eye-in-hand/eye-to-hand cooperation for visual servoing. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), 3:2741-2746, 2000.

[44] S. Se, D. Lowe, and J. Little. Vision-based localization and mapping using scale-invariant features. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '01), 2:2051-2058, 2001.

[45] D. A. Forsyth and J. Ponce. Computer Vision: A Modern Approach. Prentice Hall, 2002.

[46] F. Chaumette. The Confluence of Vision and Control. Springer, 1998.

[47] Y. Fang, A. Behal, W.E. Dixon, and D.M. Dawson. Adaptive 2.5d visual servoing of kinematically redundant robot manipulators. Proceedings of the 41st IEEE Conference on Decision and Control, 3:2860-2865, December 2002.

[48] A. Mittal and N. Paragios. Motion-based background subtraction using adaptive kernel density estimation. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2:II-302-II-309, June 2004.

[49] A. Cretual and F. Chaumette. Application of motion-based visual servoing to target tracking. The International Journal of Robotics Research, 20(11):878-890, November 2001.

[50] M. Marshall, M. Matthews, A.-P. Hu, G. McMurray, and H. Lipkin. Uncalibrated visual servoing for intuitive human guidance of robots. 2012 IEEE International Conference on Robotics and Automation, pages 4463-4468, May 2012.

[51] F. Yongchun, L. Xi, and Z. Xuebo. Adaptive active visual servoing of nonholonomic mobile robots. IEEE Transactions on Industrial Electronics, 59(1):489-497, January 2012.

[52] D.-H. Park, J.-H. Kwon, and I.-J. Ha. Novel position-based visual servoing approach to robust global stability under field-of-view constraints. IEEE Transactions on Industrial Electronics, 59(12):4735-4752, December 2012.

[53] L. Weiss, A. Sanderson, and C. Neuman. Dynamic sensor-based control of robots with visual feedback. IEEE Journal of Robotics and Automation, 3(5):404-417, October 1987.

[54] B. Thuilot, P. Martinet, L. Cordesses, and J. Gallice. Position based visual servoing: keeping the object in the field of vision. Proceedings of the IEEE International Conference on Robotics and Automation, 2:1624-1629, May 2002.

[55] W.J. Wilson, C.C. Williams Hulls, and G.S. Bell. Relative end-effector control using cartesian position based visual servoing. IEEE Transactions on Robotics and Automation, 12(5):684-696, October 1996.

[56] F. Yongchun, L. Xi, and Z. Xuebo. Motion-estimation-based visual servoing of nonholonomic mobile robots. IEEE Transactions on Robotics, 27(6):1167-1175, December 2011.

[57] R. Richardson, M. Brown, B. Bhakta, and M. Levesley. Impedance control for a pneumatic robot based around pole-placement joint space controllers. Control Engineering Practice, 13:291-303, May 2005.

[58] N. Hogan. Impedance control: An approach to manipulation. Journal of Dynamic Systems, Measurement, and Control, March 1985.

[59] F. Nagata, K. Watanabe, and K. Izumi. Position-based impedance control using a fuzzy compensator. 2000.

[60] C.-C. Cheah and D. Wang. Learning impedance control for robotic manipulators. IEEE Transactions on Robotics and Automation, 14(3), 1998.

[61] M.-C. Chien and A.-C. Huang. Adaptive impedance control of robot manipulators based on function approximation technique. Robotica, 22:395-403, December 2004.

[62] S. Jung, T. C. Hsia, and R. G. Bonitz. Force tracking impedance control of robot manipulators under unknown environment. IEEE Transactions on Control Systems Technology, 12(3):474-483, May 2004.

[63] H. Jianbin, X. Zongwu, J. Minghe, J. Zainan, and L. Hong. Adaptive impedance-controlled manipulator based on collision detection. Chinese Journal of Aeronautics, 22:105-112, May 2009.

[64] A. A. G. Siqueira and M. H. Terra. Mixed model-based/neural network H-infinity impedance control of constrained manipulators. IEEE International Conference on Control and Automation, December 2009.

[65] J. W. Sensinger and R. F. Weir. Control. IEEE Xplore, 2008.

[66] S. H. Kang, M. Jin, and P. H. Chang. A solution to the accuracy/robustness dilemma in impedance control. IEEE/ASME Transactions on Mechatronics, 14(3):282-294, June 2009.

[67] H. C. Liaw and B. Shirinzadeh. Robust generalised impedance control of piezo-actuated flexure-based four-bar mechanisms for micro/nano manipulation. Sensors and Actuators A: Physical, 148:443-453, May 2008.

[68] J. De Schutter. A study of active compliant motion control methods for rigid manipulators based on a generic scheme. IEEE International Conference on Robotics and Automation, 4(1):1060-1065, March 1987.

[69] K. Youcef-Toumi and A.T.Y. Kuo. High-speed trajectory control of a direct-drive manipulator. IEEE Transactions on Robotics and Automation, 9(1):102-108, February 1993.

[70] E. Dumetz, J.-Y. Dieulot, P.-J. Barre, F. Colas, and T. Delplace. Control of an industrial robot using acceleration feedback. Journal of Intelligent and Robotic Systems, 46(2):111-128, July 2006.

[71] A. M. Lopes and F. G. Almeida. Acceleration-based force-impedance control of a six-dof parallel manipulator. Industrial Robot: An International Journal, 2007.

[72] Y.H. Kim and F.L. Lewis. Neural network output feedback control of robot manipulators. IEEE Transactions on Robotics and Automation, 15(2):301-309, April 1999.

[73] R. D. Robinett III, J. Feddema, G. R. Eisler, C. Dohrmann, G. R. Parker, D. G. Wilson, and D. Stokes. Flexible Robot Dynamics and Controls. Springer, 2001.

[74] N. Singer, W. Singhose, and E. Kriikku. An input shaping controller enabling cranes to move without sway. American Nuclear Society 7th Topical Meeting on Robotics and Remote Systems, 1(1), 1997.

[75] W. E. Singhose, A. K. Banerjee, and W. P. Seering. Slewing flexible spacecraft with deflection-limiting input shaping. Journal of Guidance, Control, and Dynamics, 20(2):291-298, April 1997.

[76] A. Tzes and S. Yurkovich. An adaptive input shaping control scheme for vibration suppression in slewing flexible structures. IEEE Transactions on Control Systems Technology, 2(1), 1993.

[77] M. Bodson. An adaptive algorithm for the tuning of two input shaping methods. Automatica, 34(6), 1998.

[78] J. Park and P.H. Chang. Learning input shaping technique for non-LTI systems. ASME Journal on Dynamic Systems, Measurement, and Control, 123(2), 2001.

[79] S. Rhim and W. J. Book. Noise effect on adaptive command shaping methods for flexible manipulator control. IEEE Transactions on Control Systems Technology, 9(1), 2001.

[80] I. Rekleitis, E. Martin, G. Rouleau, R. L’Archeveque, K. Parsa, and E. Dupuis. Autonomous capture of a tumbling satellite. Journal of Field Robotics, 24(4):275-296, 2007.

[81] Y. Cao and C. W. de Silva. Dynamic modeling and neural-network adaptive control of a deployable manipulator system. Journal of Guidance, Control, and Dynamics, 29(1):192-194, February 2006.

[82] F. M. Raimondi and M. Melluso. A new fuzzy robust dynamic controller for autonomous vehicles with nonholonomic constraints. Robotics and Autonomous Systems, 25:115-131, 2005.

[83] D.H. Wolpert and W.G. Macready. No free lunch theorems for search. Technical Report SFI-TR-95-02-010, 1995.

[84] A. Lopes and F. Almeida. A force-impedance controlled industrial robot using an active robotic auxiliary device. Robotics and Computer-Integrated Manufacturing, 24:299-309, 2008.

[85] R. V. Patel, H. A. Talebi, J. Jayender, and F. Shadpey. A robust position and force control strategy for 7-dof redundant manipulators. IEEE/ASME Transactions on Mechatronics, 14(5), October 2009.

[86] B. Siciliano and O. Khatib. Springer Handbook of Robotics. Springer, 2008.

[87] A. Frisoli, E. Sotgiu, C.A. Avizzano, D. Checcacci, and M. Bergamasco. Force-based impedance control of a haptic master system for teleoperation. Sensor Review, 24(1):42-50, 2004.

[88] J. D. Davis, S. F. Barrett, C. H. G. Wright, and M. Wilcox. A bio-inspired apposition compound eye machine vision sensor system. Bioinspiration and Biomimetics, 4, November 2009.

[89] Y. Kusuda. High speed vision sensor and quick robotic hand enable a robot to catch a ball. Industrial Robot: An International Journal, 30(4):319-321, 2003.

[90] V. Lippiello, B. Siciliano, and L. Villani. Interaction control of robot manip­ ulators using force and vision. 2008.

[91] F. Aghili and K. Parsa. An adaptive vision system for guidance of a robotic manipulator to capture a tumbling satellite with unknown dynamics. 2008.

[92] A. Morales, P. J. Sanz, A. P. del Pobil, and A. H. Fagg. Vision-based three-finger grasp synthesis constrained by hand geometry. Robotics and Autonomous Systems, 54:496-512, 2006.

[93] H. Wang and Y. Xie. Adaptive jacobian position/force tracking control of free-flying manipulators. Robotics and Autonomous Systems, 57:173-181, June 2009.

[94] D. E. Whitney. Resolved motion rate control of manipulators and human prostheses. IEEE Transactions on Man-Machine Systems, 10:47-53, 1969.

[95] Y. Cao, V. J. Modi, C. W. de Silva, M. Chu, Y. Chen, and A. K. Misra. Trajectory tracking experiments using a novel manipulator. Acta Astronautica, 52:523-540, October 2003.

[96] H. Elliott and W.A. Wolovich. Parametrization issues in multivariable adaptive control. Automatica, 20(5):533-545, 1984.

[97] L.-C. Wang and C.C. Chen. A combined optimization method for solving the inverse kinematics problems of mechanical manipulators. IEEE Transactions on Robotics and Automation, 7(4):489-499, 1991.

[98] J. Zhao and N. I. Badler. Inverse kinematics positioning using nonlinear programming for highly articulated figures. ACM Transactions on Graphics, 13(4):313-336, 1994.

[99] Y. Nakamura and H. Hanafusa. Inverse kinematic solutions with singularity robustness for robot manipulator control. Journal of Dynamic Systems, Measurement, and Control, 108(4):163-173, 1986.

[100] C. W. Wampler. Manipulator inverse kinematic solution based on damped least-squares solution. IEEE Transactions on Systems, Man, and Cybernetics, 16:93-101, 1986.
