ROBOTIC ASSISTED SUTURING IN MINIMALLY INVASIVE SURGERY

By

Hyosig Kang

A Thesis Submitted to the Graduate Faculty of Rensselaer Polytechnic Institute in Partial Fulfillment of the Requirements for the Degree of DOCTOR OF PHILOSOPHY

Major Subject: Mechanical Engineering

Approved by the Examining Committee:

Dr. John T. Wen, Thesis Adviser

Dr. Stephen J. Derby, Co-Thesis Adviser

Dr. Daniel Walczyk, Member

Dr. Harry E. Stephanou, Member

Rensselaer Polytechnic Institute Troy, New York

May 2002 (For Graduation August 2002)

ROBOTIC ASSISTED SUTURING IN MINIMALLY INVASIVE SURGERY

By

Hyosig Kang

An Abstract of a Thesis Submitted to the Graduate Faculty of Rensselaer Polytechnic Institute in Partial Fulfillment of the Requirements for the Degree of DOCTOR OF PHILOSOPHY

Major Subject: Mechanical Engineering

The original of the complete thesis is on file in the Rensselaer Polytechnic Institute Library

Examining Committee:

Dr. John T. Wen, Thesis Adviser

Dr. Stephen J. Derby, Co-Thesis Adviser

Dr. Daniel Walczyk, Member

Dr. Harry E. Stephanou, Member

Rensselaer Polytechnic Institute Troy, New York

May 2002 (For Graduation August 2002)

© Copyright 2002 by Hyosig Kang
All Rights Reserved

CONTENTS

LIST OF TABLES

LIST OF FIGURES

ACKNOWLEDGMENT

ABSTRACT

1. INTRODUCTION
   1.1 Background and Motivation
   1.2 Problem Statement
   1.3 Contributions
   1.4 Outline

2. LITERATURE REVIEW
   2.1 Robotic Devices in Surgical Applications
   2.2 Augmentation of Human Capability in Surgical Tasks
   2.3 Motion and Force Control of Rigid Robot Systems
       2.3.1 Robot Motion Control
       2.3.2 Robot Force Control

3. MODELING AND IDENTIFICATION
   3.1 EndoBot Design
       3.1.1 Manipulator Design Requirements
       3.1.2 Current Prototype
   3.2 Kinematic Modeling of the EndoBot
       3.2.1 Forward Kinematics
       3.2.2 Inverse Kinematics
       3.2.3 Jacobian Formulation
             3.2.3.1 Geometric Jacobian
             3.2.3.2 Analytical Jacobian
       3.2.4 Singularity Analysis
   3.3 Dynamics of the EndoBot
       3.3.1 Lagrangian Formulation
       3.3.2 Properties of Dynamic Model
       3.3.3 Dynamic Model of the EndoBot
   3.4 Friction Modeling and Compensation
   3.5 Identification of Dynamic Parameters
       3.5.1 Input Signal Design
       3.5.2 Dynamic Models for Parameter Identification
             3.5.2.1 Use of Differential Model
             3.5.2.2 Use of Energy Model
       3.5.3 Experimental Identification of Dynamic Parameters
             3.5.3.1 Identification of the Friction Coefficients in Energy Model
             3.5.3.2 Experimental Results
       3.5.4 Validation of Parameter Identification

4. SUTURING IN MINIMALLY INVASIVE SURGERY
   4.1 Suturing Task Analysis
   4.2 Suture Length Tracking
   4.3 Motion Control in the Stitching Task
       4.3.1 Dynamic Modeling of Stitching
             4.3.1.1 Kinematics of the Suture Motion
             4.3.1.2 Static Model of the Suture Tension
       4.3.2 Problem Formulation
       4.3.3 Region of the Feasible Motion
       4.3.4 Problem Simplification
       4.3.5 Problem Transformation
   4.4 Knot Tying
       4.4.1 Ligation Algorithm 1
       4.4.2 Ligation Algorithm 2
       4.4.3 Ligation Algorithm 3
   4.5 Placing the Knot
       4.5.1 Problem Formulation
       4.5.2 Sliding Condition of Knot Placement
       4.5.3 Trajectories of Suture Ends for Placing a Knot
   4.6 Controller Requirement and Architecture
       4.6.1 Hybrid Dynamic System
             4.6.1.1 Hybrid State
             4.6.1.2 Hybrid Automaton
       4.6.2 Human Sharing Supervisory Controller
       4.6.3 Planning of Suturing Task
       4.6.4 Development Environment

5. MOTION CONTROL
   5.1 Gravity Compensation in Manual Mode
   5.2 Motion Control in Autonomous Mode
       5.2.1 Energy Based Output Feedback Control
       5.2.2 Experimental Evaluation of Joint Space Control
       5.2.3 Nonlinear Decoupled State Feedback Controller
             5.2.3.1 Global Linearization of the Nonlinear Robotic System
             5.2.3.2 Optimal State Feedback Control
             5.2.3.3 Velocity Estimation
             5.2.3.4 Linear Quadratic Gaussian (LQG) Control

6. SHARED CONTROL
   6.1 Constraints Description
   6.2 Control Objective
   6.3 Task Space Control Design
       6.3.1 Stability
       6.3.2 Controller Description
       6.3.3 Example
       6.3.4 Experimental Evaluation of Task Space Shared Controller
   6.4 Joint Space Shared Control Design
       6.4.1 Controller Description
       6.4.2 Examples
       6.4.3 Experimental Evaluation of Joint Space Shared Controller

7. TENSION CONTROL
   7.1 Securing a Knot
       7.1.1 Principle of Direction of Securing a Square Knot
   7.2 Tension Measurement
       7.2.1 Basic Idea
       7.2.2 Tension Estimation with a Base Force/Torque Sensor
   7.3 Force and Position Control
       7.3.1 Problem Formulation
       7.3.2 Hybrid Force/Position Regulating Control
       7.3.3 Explicit Force Control
   7.4 Stability Analysis
       7.4.1 Proportional plus Integral Control with Active Damping
       7.4.2 Stability of Time Delayed System
   7.5 Experimental Study
       7.5.1 Experimentation Environment
       7.5.2 Experimental Evaluation of Tension Controller

8. CONCLUSIONS
   8.1 Summary
   8.2 Future Work

LITERATURE CITED

APPENDICES

A. INVERSE KINEMATICS USING SUBPROBLEMS

B. LAGRANGIAN FOR A ROBOT MANIPULATOR
   B.1 Kinetic Energy of a Robot Manipulator
   B.2 Potential Energy of a Robot Manipulator

C. LEAST SQUARE METHOD

D. PERSISTENT EXCITATION AND OBSERVABILITY GRAMIAN

E. DIFFEOMORPHISM

LIST OF TABLES

3.1 Parameters of the electrical system
3.2 Coefficients of cross correlation
3.3 Friction parameters of each link
3.4 Estimates of the dynamic parameters of the EndoBots
5.1 Summary of friction compensation performance
5.2 Circle tracking error
6.1 Parameters for constrained line
6.2 Desired and measured stiffness for shared control
6.3 Effective constraint stiffness for linear trajectory
6.4 Effective constraint stiffness for circular trajectory
7.1 Suture diameter-strength relationship
7.2 Suture pullout values
7.3 Comparison of force controller with active damping
7.4 Effect of the proportional gain
7.5 Performance of force controller with a modified force reference trajectory

LIST OF FIGURES

3.1 Fulcrum effect in laparoscopic surgery
3.2 Mechanical overview of the EndoBot
3.3 Closeup on semicircular arches
3.4 Translational stage
3.5 Picture of a pair of the EndoBots
3.6 Grasper and suturing tools
3.7 Kinematic diagram of the EndoBot
3.8 Simplified dynamic model of the EndoBot
3.9 Model-based friction compensation
3.10 Classical friction models
3.11 Parametric system identification
3.12 Generation of PRBS
3.13 Schroeder-phased signal design
3.14 Comparison of input signals in time domain
3.15 Comparison of power spectra of input signals
3.16 Comparison of two identification methods
3.17 Block diagram of velocity control loop
3.18 Trajectories of the first joint
3.19 Estimates of the dynamic parameters of the first joint
3.20 Comparison on measured and simulated output trajectories
4.1 Knot tying techniques
4.2 Grasper with a needle in conventional suturing
4.3 Shuttle needle device
4.4 Components of a suture
4.5 Autonomous simple knot tying algorithm 1
4.6 Flexible hook for catching the suture
4.7 Autonomous simple knot tying algorithm 2
4.8 Implementation of autonomous simple knot tying algorithm 2
4.9 Autonomous simple knot tying algorithm 3
4.10 Forming a simple knot
4.11 Two point contact model of the knotted suture
4.12 Free-body diagram for the knot sliding problem
4.13 Free-body diagram of one strand
4.14 Free-body diagram of one strand in symmetric tension
4.15 Geometric interpretation of the sliding condition
4.16 Sliding condition in a knot tying plane
4.17 Evolving trajectory of the knot
4.18 Trajectory of the loop end for placing the knot
4.19 Trajectories of loop end
4.20 State invariant space
4.21 Hybrid state transition diagram
4.22 Human sharing supervisory controller
4.23 High level state transition diagram
4.24 Manual state diagram
4.25 Autonomous state evolution
4.26 State transition diagram for stitching task
4.27 State transition diagram for grasping suture tail
4.28 State transition diagram for creating the knot
4.29 State transition diagram for securing the knot
4.30 State evolution for creating the knot
4.31 Calibration of two coordinates
4.32 Overview of the experimental environment
5.1 Friction approximation for compensation
5.2 Friction compensation
5.3 Cartesian space experimental result
5.4 Experimental results of circle tracking
5.5 Bode plot for the numerical differentiating filter
5.6 Bode plot for the washout filter
5.7 Observer based linear controller with feedback linearization
5.8 Experimental results of friction compensation in the LQG controller
5.9 Tracking performance of the LQG controller
5.10 Comparison on error states with the different weighting
5.11 Comparison on control inputs with the different weighting
6.1 Block diagram of task space shared control
6.2 Desired constrained path
6.3 Measured task space positions
6.4 Constraint forces
6.5 Constraint forces projected onto range space R(Jc^T)
6.6 Measured joint torques
6.7 Experimental result of task space shared control with constrained line
6.8 Concept of coordinates mapping
6.9 Block diagram of joint space shared controller
6.10 Measured task space positions
6.11 Desired and measured joint position
6.12 Measured joint torque
6.13 Actual path with joint space shared controller
6.14 Desired circle trajectory
6.15 Measured task space position
6.16 Desired and measured joint position
6.17 Measured joint torque
6.18 Actual trajectory of the end-effector in the task space for the circular constraint
7.1 Suturing line
7.2 Direction of securing in the first throw
7.3 Securing the knot
7.4 An unstable two half hitch knot with reverse-directed tension during the second throw
7.5 Strain gauge transducer
7.6 Vision sensor
7.7 Base force/torque sensor
7.8 Transformation of wrenches
7.9 Constrained motion
7.10 Constrained motion for securing the knot
7.11 Simplified second order environment model
7.12 Explicit PI force control
7.13 Root locus under explicit PI force control with Kp = 1
7.14 Root locus under explicit PI force control with Kp = 10
7.15 Explicit PI force control with active damping
7.16 Root locus under explicit PI force control with the active damping Kv = 1000 and Kp = 1
7.17 Explicit PI force control with time delay
7.18 Root locus of the time delayed system under explicit PI force control with Kp = 10
7.19 Root locus of the time delayed system under explicit PI force control with active damping Kv = 1000 and Kp = 10
7.20 Experimental testbed for tension control
7.21 Experimental data from integral control with Ki = 0.5
7.22 Experimental data from integral control plus active damping with Ki = 0.5 and Kv = 1000
7.23 Experimental data from integral control plus active damping with Ki = 0.5 and Kv = 2000
7.24 Experimental data from PI control plus active damping with Kp = 0.3, Ki = 0.5, and Kv = 2000
7.25 Experimental data from PI control plus active damping with Kp = 0.5, Ki = 0.5, and Kv = 2000
7.26 Experimental data from PI control plus active damping with Kp = 0.2, Ki = 0.75, and Kv = 2000
7.27 Experimental data from PI control plus active damping with Kp = 0.2, Ki = 0.75, and Kv = 1000
7.28 Experimental data from PI control plus active damping with Kp = 0.2, Ki = 0.75, and Kv = 500
7.29 Experimental data from PI control plus active damping with Kp = 0.2, Ki = 0.3, Kv = 2000, and the modified force reference trajectory with Fdes = -(f + 5) N
B.1 Kinetic energy of a rigid body
C.1 Orthogonal decomposition
C.2 Interpretation on the least square solution
E.1 One-to-one and not onto
E.2 Onto not one-to-one
E.3 One-to-one and onto

To my wife Kyunga and my daughters Dahyun and Hannah with all my love...

ACKNOWLEDGMENT

With the completion of this thesis I am again at a crossroads in my life. The time I have spent here has been very fruitful and many people came into my life to inspire and encourage this effort.

I would first like to express my true gratitude to my advisor, Dr. John T. Wen. John is an exceptionally distinguished researcher with enthusiasm, but he stepped down to my level in order to collaborate with me. He was always there to answer my questions clearly and helped to make robotics and control fun for me. Throughout the last five years, he provided not only the promising research subject and the motivation to tackle challenging problems, but also a way of thinking and a way of conducting research. In every sense, none of this work would have been possible without him. He has been a real partner and I will always keep unforgettable memories of the discussions and friendship we have had together.

I would like to acknowledge Dr. Harry E. Stephanou, director of the Center for Automation Technologies, who provided me the opportunity to begin my journey at Rensselaer Polytechnic Institute and wide latitude to explore interesting research areas. I would also like to thank the other members of my doctoral committee, Dr. Stephen J. Derby and Dr. Daniel Walczyk, both of whom provided many helpful comments and suggestions.

My colleagues at the Center for Automation Technologies have created a very pleasant atmosphere to work in. In particular, I want to thank Wooho Lee and Jeongsik Sin for being my friends and for making me ponder many things about my life. I firmly believe that we can work together again. I also want to thank Ben Potsaid for our fruitful conversations during many trips to Rio de Janeiro and Alaska. I would like to thank my long-time friend, YoungCheol Park, and my seniors Youngkee Ryu, Hyunggu Park, Byunghee Won, and Hyungsoo Lee. Our occasional telephone chats and emails have always provided a welcome relief from the lonely journey.

I thank my parents for their love, support, and understanding through these years. I also thank my daughters, Dahyun and Hannah, who provided the joy of life and the ability to make peace with chaos. They prayed for me and brought me great comfort during times of extreme stress. My final, and most heartfelt, acknowledgement must go to my wife, Kyunga, who supported my decision to embark on graduate studies despite the significant changes in our life and provided stability to our family. Her support, encouragement, and companionship have turned my journey through doctoral studies here into a pleasure. She is my everlasting love.

Delight yourself in the LORD and he will give you the desires of your heart (Psalm 37:4)

ABSTRACT

Open surgery traditionally involves making a large incision to visualize the operative field and to access human internal tissues. Minimally invasive surgery (MIS) is an attractive alternative to open surgery whereby essentially the same operations are performed using specialized instruments designed to fit into the body through several pencil-sized holes instead of one large incision. It can minimize trauma and pain and decrease the recovery time, thereby reducing the hospital stay and cost, but it also presents greater technical difficulties to surgeons. Despite the lack of dexterity and perception, all surgeries are moving toward MIS due to the benefits to patients.

However, the demand on surgeons is much higher during suturing, which is the primary tissue approximation method. Suturing is known as one of the most difficult tasks in MIS and consumes a significant percentage of the operating time. Despite the important role and technical challenge of suturing in MIS, there is little research on suturing.

Motivated by these observations, a new surgical robotic system, which we have named EndoBot, is developed in this research. The focus of the research is the development of the EndoBot controller, which is capable of a range of MIS operations including manual, shared, and supervised autonomous operations. Due to the challenge of the suturing task, particular emphasis is placed on its automation. The suturing operation is decomposed into knot forming, knot placement, and tension control. New algorithms are developed and implemented for each subproblem and integrated for completely autonomous knot tying. To ensure safe operation, a discrete event controller allows interruption by the surgeon at any moment and continuation with autonomous operation after the interruption.

CHAPTER 1 INTRODUCTION

1.1 Background and Motivation

The use of robots in surgery has been rapidly expanding in recent years. Robots can improve precision, filter human motion tremor, extend human reach into the body, and reduce the risk of infection. Surgical robots can be classified based on the planning strategy, into model-based robotic surgery and nonmodel-based robotic surgery, or based on the level of surgical invasiveness, into open surgery and minimally invasive surgery.

Model-based surgery uses geometric models obtained from images scanned with computed tomography (CT) or magnetic resonance imaging (MRI) during the pre-operative stage. Typically, this requires a registration process in which the pre-operative model is matched with the intra-operative model. If the registration process can be accurately performed, the robotic surgery becomes an off-line robot motion planning problem as in a CAD-CAM system. This approach is applicable to surgeries of hard tissues such as spine surgery, neurosurgery, hip replacement surgery, total knee replacement surgery, plastic surgery, and eye surgery. The quality of the surgery depends to a large extent on the accuracy of the model.

Nonmodel-based robotic surgery typically deals with soft tissues, whose models are either not available or not useful during operations because the tissues float in the body or change shape. Most surgeries involving a body organ are nonmodel-based. Surgeons either navigate a hand-held robot directly or manipulate an input device in the case of teleoperation. The man/machine interface plays a key role because surgeons are directly responsible for manipulating the surgical robots. In this type of surgery, the surgeon's expertise is the key factor in the quality of the surgery. Force feedback and cooperative control to guide surgeons can also improve the surgical results in nonmodel-based robotic surgeries.

Surgery traditionally involves making a large incision to expose the operation sites and to access human internal tissues. This is referred to as open surgery.


The incision needed to allow surgeons to visualize the field tends to delay patient recovery and causes most of the pain. Minimally invasive surgery (MIS), also known as laparoscopic surgery in the abdominal cavity, arthroscopic surgery in the joints, and thoracoscopic surgery in the chest, has become an increasingly attractive alternative to open surgery. In MIS, the same operations are performed with specialized instruments which are designed to fit into the body through several pencil-sized holes instead of one large incision. By eliminating the large incision, MIS has many advantages over conventional open surgery. It can minimize trauma and pain and decrease the recovery time, thereby reducing the hospital stay and cost for patients.

However, compared to open surgery, MIS offers greater challenges to surgeons. Instead of looking directly at the part of the body being treated, surgeons monitor the procedures via a two-dimensional video monitor without three-dimensional depth information. Due to the inherent kinematic constraints at the incision points, the motions of MIS instruments are restricted to four degrees of freedom. Despite the lack of dexterity and perception, all surgeries are moving toward MIS to give more benefits to patients at the expense of a more stressful environment for surgeons. The demands on surgeons for dealing with these technical difficulties are highest during the suturing task, which is the primary tissue approximation method, is known as one of the most difficult tasks in MIS operations, and consumes a significant percentage of operating time.

1.2 Problem Statement

Several robotic systems have been developed for minimally invasive surgery [10, 11, 12]. There are also a few commercial systems available on the market such as AESOP and ZEUS (Computer Motion), daVinci (Intuitive Surgical), and Neuromate (ISSS/immi). It is important to note that in the various surgical robots described above, the surgical procedures are still completely performed by the human surgeon. High-skill procedures such as suturing and precise tissue dissection continue to depend on the expertise of surgeons, and the surgeon's commands are mimicked by the robotic devices through computer control. Despite the important role and technical challenge of suturing in MIS, there has been little research effort reported on automated robotic suturing. Most results on robotic surgery focus on the development of the robotic system and leave the suturing task to the surgeon.

This thesis presents the development of a new surgical robotic system, which we named the EndoBot, for MIS operations. The EndoBot is designed for collaborative operation between surgeons and the robotic device. Surgeons can select whether the device operates completely in manual mode; collaboratively, where motion of the robotic device in certain directions is under computer control and in other directions under the surgeon's manual control; or autonomously, where the complete robot system is under computer control and the surgeon's supervision. Furthermore, the robotic tools can be quickly changed from a robotic docking station, allowing different robotic tools to be used in operations. For automated robotic suturing using the EndoBot, the suturing task needs to be analyzed and the algorithms for the knot tying task need to be developed to deal with a flexible suture. In addition, knot placement and knot tension control should also be considered.

Motivated by the above observations, the goal of this research is to develop a robotic system that can collaboratively perform laparoscopic procedures with surgeons, conduct certain tasks autonomously to reduce the strain on surgeons, remove the variability due to surgeons' training levels, and enhance the system efficiency by decreasing the operation time.

1.3 Contributions The major contributions of this research are summarized here.

• Robotic system for minimally invasive surgery. A new surgical robot system, called the EndoBot, for MIS operations is developed and its dynamic parameters are identified experimentally.

• Suturing task analysis and implementation. The suturing task is decomposed into five subtasks and the technical difficulties of each subtask are analyzed. The knot placement problem is formulated, and the sliding condition is proposed. The knot tying algorithm is presented and implemented with the EndoBot.

• Supervisory controller. In order to guarantee the safe operation of the suturing task in non-deterministic environments, a supervisory controller is implemented based on the hybrid dynamic system framework.

• Shared control. Shared control using artificial constraints is proposed and experimentally verified.

• Tension Control. A base sensor method is proposed to measure the tension in the suture. A hybrid force/position control is implemented to effectively regulate the suture tension while achieving the desired direction.

1.4 Outline

This document is organized in the following manner. In Chapter 2, related work is reviewed. Chapter 3 provides the kinematics and dynamics analysis of the new surgical robot and discusses the identification method for the dynamic parameters. In Chapter 4, a suturing task analysis is presented along with the high level control architecture. Chapter 5 discusses the motion controller design and the experimental results, and shared control is presented in Chapter 6 in order to augment the human capability. Chapter 7 provides a hybrid force/position control strategy to effectively regulate the tension, together with experimental results. Finally, Chapter 8 summarizes this document and outlines future work.

CHAPTER 2 LITERATURE REVIEW

To the best of the author's knowledge, this thesis is unique in being the first work on autonomous robotic suturing specifically for minimally invasive surgery. However, there have been many research results related to various aspects of robotic surgery. This chapter reviews these results.

2.1 Robotic Devices in Surgical Applications

Many surgical robots have been successfully used in various applications. A surgical robot system for total hip replacement surgery (RoboDoc, ISS Inc.) was developed [1]. Robot systems for total knee arthroplasty [2, 3, 4, 5], eye surgery [6], and neurosurgery [7, 8] have also been developed. In recent years, there have been many research efforts in minimally invasive surgery.

Manipulator design for laparoscopic surgery Due to the inherent spherical motion requirement in laparoscopic surgery, several studies on mechanisms have been carried out. Three types of mechanisms for generating the spherical movements were compared in [9], and the authors carried out an optimization of reachability with the concentric multi-link spherical joint mechanism. Funda and his colleagues [10] compared two manipulator designs (Parallel Linkage Remote Center of Motion and Hopkins-IBM Assistant Robot) in terms of several criteria such as safety, ergonomics, ease of control, and structural factors. However, these mechanisms are suitable for telemanipulation, not for direct navigation. Due to the fulcrum effect at the incision points, the motions of conventional laparoscopic instruments are restricted to only four degrees of freedom and hence the dexterity is significantly reduced [14]. There has been research on increasing the dexterity by adding two degrees of freedom to the end of the laparoscopic tool. This is particularly useful in cardiac operations where the motion of the tool stem is limited. The laparoscopic wrist mechanism driven by tendons was invented in [11], and it is now commercially available on the market as the da Vinci Surgical System (Intuitive Surgical Inc.). A six degrees of freedom slave manipulator was developed in [12]; a parallel mechanism was chosen for gross motion to increase rigidity, and a tendon actuated millirobot was added. In [13], the workspace of a laparoscopic extender with flexible stems was formulated. The authors performed an optimization of the flexible stem design by defining a dexterity measure, which is the ratio of the areas of the dexterous workspace and the reachable workspace. A redundant wrist with parallel structure, which has three degrees of freedom, was presented in [15], along with an optimal design of the mechanism.

Laparoscopic suturing Suturing is one of the most difficult tasks, takes a significant percentage of operating time, and involves complex motion planning. A general description of manual laparoscopic suturing can be found in [16, 17]. As a direct result of the constraints in laparoscopic surgery, there is an extended learning curve that surgeons must go through to gain the required skill and dexterity. Furthermore, there is a great deal of variability even among trained surgeons. As demonstrated in [21], time-motion studies of laparoscopic surgery have indicated that for operations such as suturing, knot tying, suture cutting, and tissue dissection, the operation time variation between surgeons can be as large as 50%. In suturing, it was noted that the major difference between surgeons lies in the proficiency at grasping the needle and moving it to a desired position and orientation without slipping or dropping it. Cao et al. [21] performed a motion analysis of the suturing task using conventional needle holders and broke it down into five basic motions: reach and orientation, grasp and hold, push, pull, and release. A teleoperator slave system with a dexterous wrist for minimally invasive surgery was developed in [20], and the authors demonstrated the suturing task with direct vision. More recently, a knot tying simulation for surgical simulation with a spline of linear spring model was proposed in [85]. The future trends in laparoscopic suturing can be found in [19]. However, none of these publications addressed the issue of autonomous robotic suturing.

2.2 Augmentation of Human Capability in Surgical Tasks

There has been much research on augmenting the human's physical capability in robotic applications. Surgical procedures that require fine motions and take a long time are good candidates for enhancing the surgeon's ability. Typical approaches include teleoperation, shared control, collaborative control, supervisory control, and autonomous control. One class of surgical robotic devices that has been proposed to assist surgeons in laparoscopic surgeries is based on the concept of teleoperation, where surgeons perform the operation remotely while robots operate on the patient under the surgeon's commands. The robot motion is slaved via mechanical linkages or computer control to the surgeon's movements [22, 24, 25, 26]. In [27, 28], the surgeon's view of the operation is further enhanced by a vision system to create a virtual presence. Funda et al. [22, 23] proposed a control methodology to compute the optimal approximation to the desired motion, given the existing constraints, by the extended Jacobian technique. When robots face unstructured environments and precise motions are needed, autonomous planning is not available. Pure teleoperation causes fatigue in the human operators by demanding more concentration. In this case, a higher level of assistance to human operators is desirable rather than pure telemanipulation. Shared control, as the name implies, enables the human operator and the computer to control the manipulator in parallel. This is useful in guidance and alignment tasks. Shared control schemes have been discussed in [30, 31]. In [29], exoskeletons to amplify the strength of human operators were developed, and the authors performed an analysis of the stability of cooperative human-robot manipulator control systems. Many research efforts have been carried out on shared control in space telerobotics, where time-delays can affect the task performance [32, 33], but little literature has been published in the surgical context. In [35, 34, 36], the problem of cooperative manipulation to improve the positioning and path following abilities of humans was discussed. They compared the assisted mode and the augmented mode experimentally. With well structured environments, augmentation can be maximized by giving a degree of autonomy to the robotic systems under the human's supervision. This paradigm is referred to as supervisory control in [30].

During the operation, the human supervises each evolution of the task with the capability to intervene. This framework can be the basis for automating surgical tasks.

2.3 Motion and Force Control of Rigid Robot Systems

2.3.1 Robot Motion Control

Robot manipulators have complex nonlinear dynamics and there are many control schemes for motion control. The selection of the motion controller may depend on the required task, sensor availability, computational power, and knowledge of the values of the dynamic parameters. The basis of motion control of robot manipulators is the PD (Proportional and Derivative) plus gravity compensation scheme at the joint level. In [66], it was proved that PD control with computed feedforward terms is locally exponentially stable by using an energy-motivated Lyapunov function. In [57], the stability of PD control with gravity compensation was proved by shaping the potential energy of the closed-loop system and injecting the required damping. This controller provides global asymptotic stability for the regulation problem [58] and is the most widely used in industrial robots. This controller may be practically used in tracking control applications at the expense of degraded tracking performance. For the general trajectory tracking problem, computed-torque control has been shown to have global asymptotic stability [59, 60, 47, 62]. The basic idea is to transform the nonlinear dynamic system into a linear one by cancelling the nonlinearities of the robot dynamics with a nonlinear inner feedback linearization loop [63]. The beauty of this approach is that we can apply many linear control schemes in designing the outer feedback loop. Linear PD or PID control is the common choice, and linear quadratic optimal control is also used for designing the outer feedback loop [64, 65]. For robot systems which do not admit exact feedback linearization, the outer loop controller must be robust in order to handle the uncertainties. The robust control problem was translated into an optimal control problem in [67]. They designed an optimal LQ (Linear Quadratic) regulator for the linearized system with uncertainties reflected in the performance index. A survey on robust control of robots is given in [68, 69]. In many robot systems, only part of the states can be measured, but the state feedback controller needs knowledge of positions and velocities. The immediate solution to obtain the velocity signals is to numerically differentiate the position signals. Numerical differentiation is inherently very sensitive to noise because of its improper transfer function characteristics. When the system dynamics is given, the most effective method to estimate all states is to design a dynamic observer. The computed-torque controller with a linear observer was proposed in [70], and exponential stability was proved for a robot system with only position measurements. A performance comparison of linear and nonlinear observers was given in [71] with a linear state feedback algorithm for a two-link manipulator.

2.3.2 Robot Force Control

When robot manipulators interact with the environment, force control is essential for successful execution of tasks. Force control has been a research subject for many years, and various control schemes have been proposed. Basically, the existing force control strategies can be classified into two categories. The first group aims at performing indirect force control by controlling the relationship between the manipulator position and the interaction force. Compliance control and impedance control can be grouped into the indirect force control scheme. In [72], Hogan proposed impedance control to achieve a desired dynamic behavior, while compliance control can achieve the static behavior [73, 74]. These schemes are suitable for tasks where accurate force regulation is not required and force measurement is not available. In order to regulate the exact force, the parallel position/force control scheme was introduced in [79]. Parallel position/force control gives priority to force errors over position errors by closing an outer force control loop. The second group aims at controlling the positions and the interacting forces simultaneously by decomposing the task space into the directions of the admissible motions and the interacting forces. Hybrid force/motion control, proposed in [75], belongs to this scheme. This control was extended with consideration of the dynamics of robot manipulators in [76, 77]. The state of the art of force control for robot manipulators is surveyed in [78]. For explicit force control, integral force control with robustness enhancements in order to reduce the initial impact force was proposed in [80]. In [81], experimental results for explicit force control with an active damping term were presented.

CHAPTER 3 MODELING AND IDENTIFICATION

3.1 EndoBot Design

3.1.1 Manipulator Design Requirements

In laparoscopic surgery, a patient's abdomen is inflated with carbon dioxide (CO2) through a needle to lift the abdominal wall away from the organs so as to expose and access the operating field, and then three pencil-sized small holes are punctured in the abdominal wall for the laparoscopic instruments and camera, as shown in Figure 3.1. Due to the requirement of minimizing the tearing of the incision point, a manipulator for MIS has an inherent kinematic constraint (a spherical joint at the incision point). This constraint is a primary design consideration. This section describes the mechanical design issues of the EndoBot.

Figure 3.1: Fulcrum effect in laparoscopic surgery.

Fulcrum accommodability Due to the fulcrum constraint, the MIS surgical robot manipulator, which passes through a small hole, is required to have an effective center of rotation at the incision point. The remote center of rotation can be implemented with several mechanical designs such as a spherical joint, a spherical link mechanism, or a double-parallelogram mechanism [37, 20], and can also be implemented with a tool center position technique using conventional industrial robot programming.

Backdrivability Backdrivability is needed in the cooperative control mode for the surgeon to move the manipulator directly. It is also critical for removing the robot from the patient in case of unanticipated power shutdown or emergency.

Rapid tool changeability Surgical operations typically require many surgical tools. Faster tool changes can reduce the overall operation time.

3.1.2 Current Prototype

This section gives a brief overview of the EndoBot manipulator, which was built by Bernard [18]. By comparing various mechanisms based on the above design constraints, a simple spherical joint mechanism with semi-circular arches was found to be a suitable choice because it gives a compact and light design, has a minimum number of joints and a mechanical fulcrum constraint, and can be operated in multiple control modes such as manual mode, shared control mode, and autonomous mode. The EndoBot is capable of four degrees of freedom of motion and consists of two parts as shown in Figure 3.2: a rotational stage and a translational stage. The rotational stage creates the spherical motion based on a pair of motor-driven semi-circular arches for yaw and pitch, and a sleeve that can generate the rolling motion. The translational stage carries the specific tool, which is actuated pneumatically and is translated along the tool z-axis. All four actuators are DC servo motors and the linear motion of the translational stage is performed by a lead screw. All axes are back-drivable when the motors are not energized. The center of rotation of the arch joints is at the incision point of the patient's abdominal wall. Therefore, motion of the EndoBot will not cause tearing of the incision point. Each tool can easily slip through the locking hole in the translational stage and the sleeve and be locked via a locking pin. Figures 3.3–3.4 show closeups of the semicircular arches and the translational stage carrying a grasper tool. Figure 3.5 shows two manually operated EndoBots and Figure 3.6 shows a closeup of two EndoBots with a grasper and a stitching tool.

Figure 3.2: Mechanical overview of the EndoBot.

Figure 3.3: Closeup on semicircular arches.

Figure 3.4: Translational stage.

Figure 3.5: Picture of a pair of the EndoBots.

Figure 3.6: Grasper and suturing tools.

3.2 Kinematic Modeling of the EndoBot

3.2.1 Forward Kinematics

The product of exponentials formula [38] is used to derive the kinematics equation. Consider the manipulator with a spherical joint shown in Figure 3.7. It consists of four joints - three revolute joints and one prismatic joint - and the base and tool frames are shown in Figure 3.7.

Figure 3.7: Kinematic diagram of the EndoBot.

Let $h_i \in \mathbb{R}^3$ be a unit vector which specifies the direction of rotation or translation, $q_i \in \mathbb{R}$ the angle of rotation or linear displacement, and $p_{i,i+1} \in \mathbb{R}^3$ the position vector between the $i$th and $(i+1)$th link frames. The transformation between the base frame, $E_0$, and the tool frame, $E_T$, at $q_i = 0$, $i = 1, \ldots, 4$, is given by

$$\begin{bmatrix} R_{0T}(0) & p_{0T}(0) \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} I_{3\times 3} & 0 \\ 0 & 1 \end{bmatrix}, \tag{3.1}$$

where

$$I_{3\times 3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \tag{3.2}$$

Then the $h_i$ and $p_{i,i+1}$ can be expressed in the base frame as follows:

$$h_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad h_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad h_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \quad h_4 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \tag{3.3}$$

$$p_{01} = p_{12} = p_{23} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, \quad p_{34} = \begin{bmatrix} 0 \\ 0 \\ q_4 \end{bmatrix}, \quad p_{4E} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}. \tag{3.4}$$

The product of exponentials formula gives the forward kinematics map:

$$R_{0T}(q) = R_{04} = R_{01}R_{12}R_{23}R_{34} = e^{\hat h_1 q_1} e^{\hat h_2 q_2} e^{\hat h_3 q_3} e^{\hat h_4 q_4} \tag{3.5}$$

$$p_{0T}(q) = p_{04} = p_{01} + R_{01}p_{12} + R_{01}R_{12}p_{23} + R_{01}R_{12}R_{23}p_{34} = e^{\hat h_1 q_1} e^{\hat h_2 q_2} e^{\hat h_3 q_3} p_{34}. \tag{3.6}$$

The individual exponentials are given by

$$e^{\hat h_1 q_1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & c_1 & -s_1 \\ 0 & s_1 & c_1 \end{bmatrix} \tag{3.7}$$

$$e^{\hat h_2 q_2} = \begin{bmatrix} c_2 & 0 & s_2 \\ 0 & 1 & 0 \\ -s_2 & 0 & c_2 \end{bmatrix} \tag{3.8}$$

$$e^{\hat h_3 q_3} = \begin{bmatrix} c_3 & -s_3 & 0 \\ s_3 & c_3 & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{3.9}$$

$$e^{\hat h_4 q_4} = I_{3\times 3}, \tag{3.10}$$

where $c_i$ and $s_i$ are abbreviations of $\cos(q_i)$ and $\sin(q_i)$, respectively. Expanding the terms in the product of exponentials formula results in

$$R_{0T}(q) = \begin{bmatrix} c_2 c_3 & -c_2 s_3 & s_2 \\ s_1 s_2 c_3 + c_1 s_3 & -s_1 s_2 s_3 + c_1 c_3 & -s_1 c_2 \\ -c_1 s_2 c_3 + s_1 s_3 & c_1 s_2 s_3 + s_1 c_3 & c_1 c_2 \end{bmatrix} \tag{3.11}$$

$$p_{0T}(q) = \begin{bmatrix} s_2 q_4 \\ -s_1 c_2 q_4 \\ c_1 c_2 q_4 \end{bmatrix}. \tag{3.12}$$

We parameterize the joint vector q as yaw angle (rotation about the inertial x axis), pitch angle (rotation about the inertial y axis), roll angle (rotation about the body z axis), and translation (extension of the tool along the body z axis). Since each EndoBot has only four degrees of freedom, we can define the configuration of the tool frame as $x_T = (r_T(q), \phi(q_3)) \in \mathbb{R}^4$, where $r_T(q) = p_{0T}(q)$ is the Cartesian position of the tool frame and $\phi(q_3) = q_3$ is the roll angle. Then the forward kinematics can be viewed as the mapping $x_T = f(q)$:

$$x_T = \begin{bmatrix} r_T \\ \phi \end{bmatrix} = \begin{bmatrix} x \\ y \\ z \\ \phi \end{bmatrix} = \begin{bmatrix} s_2 q_4 \\ -s_1 c_2 q_4 \\ c_1 c_2 q_4 \\ q_3 \end{bmatrix}. \tag{3.13}$$
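As a quick numerical illustration of the closed-form map (3.13), the following Python sketch evaluates the forward kinematics directly from the expressions above. It is illustrative only and is not the EndoBot's actual control software; the function name and the sample configuration are assumptions made for the example.

```python
import numpy as np

def forward_kinematics(q):
    """Forward kinematics of the 4-DOF EndoBot, eq. (3.13).

    q = [q1 (yaw), q2 (pitch), q3 (roll), q4 (translation)];
    returns x_T = [x, y, z, phi]."""
    q1, q2, q3, q4 = q
    s1, c1 = np.sin(q1), np.cos(q1)
    s2, c2 = np.sin(q2), np.cos(q2)
    return np.array([s2 * q4,        # x, from eq. (3.12)
                     -s1 * c2 * q4,  # y
                     c1 * c2 * q4,   # z
                     q3])            # phi: the roll angle passes through directly

if __name__ == "__main__":
    q = np.array([0.2, -0.3, 0.5, 0.12])   # an arbitrary sample configuration
    print(forward_kinematics(q))
```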

3.2.2 Inverse Kinematics

Due to the spherical joint at the incision point, the inverse kinematics can be easily solved with the following algebraic identities.

Translational distance, $q_4$: The translational distance $q_4$ can be found from the magnitude of $r_T(q) = [x, y, z]^T$:

$$q_4 = \|r_T\| = \left\| \begin{bmatrix} x \\ y \\ z \end{bmatrix} \right\|. \tag{3.14}$$

Second joint angle, $q_2$:

$$q_2 = \tan^{-1}\!\left(\frac{x\,c_1}{z}\right). \tag{3.15}$$

First joint angle, $q_1$:

$$q_1 = \tan^{-1}\!\left(\frac{-y}{z}\right). \tag{3.16}$$

Roll angle, $q_3$:

$$q_3 = \phi. \tag{3.17}$$
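The inverse map (3.14)-(3.17) can be sketched the same way. The snippet below is an assumed illustrative implementation, not the thesis code; it uses the two-argument arctangent for quadrant robustness, which is a common implementation choice rather than something stated in the text, and it checks itself by a round trip through the forward kinematics.

```python
import numpy as np

def inverse_kinematics(x_T):
    """Closed-form inverse kinematics of the EndoBot, eqs. (3.14)-(3.17).

    x_T = [x, y, z, phi]; returns q = [q1, q2, q3, q4]."""
    x, y, z, phi = x_T
    q4 = np.linalg.norm([x, y, z])        # (3.14): translational distance
    q1 = np.arctan2(-y, z)                # (3.16): first joint angle
    q2 = np.arctan2(x * np.cos(q1), z)    # (3.15): second joint angle
    q3 = phi                              # (3.17): roll angle
    return np.array([q1, q2, q3, q4])

if __name__ == "__main__":
    # Round-trip check against the forward kinematics map (3.13).
    q_true = np.array([0.2, -0.3, 0.5, 0.12])
    q1, q2, q3, q4 = q_true
    x_T = np.array([np.sin(q2) * q4,
                    -np.sin(q1) * np.cos(q2) * q4,
                    np.cos(q1) * np.cos(q2) * q4,
                    q3])
    print(inverse_kinematics(x_T))        # should reproduce q_true
```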

3.2.3 Jacobian Formulation

3.2.3.1 Geometric Jacobian

Let $\omega$ and $v$ be the angular velocity vector and linear velocity vector of the end-effector. The Jacobian of the above manipulator in the inertial frame can be expressed as follows:

$$\begin{bmatrix} \omega \\ v \end{bmatrix} = \begin{bmatrix} h_1 & R_{01}h_2 & R_{02}h_3 & 0 \\ \hat h_1 (p_{1T})_0 & \widehat{R_{01}h_2}\,(p_{2T})_0 & \widehat{R_{02}h_3}\,(p_{3T})_0 & R_{03}h_4 \end{bmatrix}\dot q, \tag{3.18}$$

where $(p_{iT})_0 = R_{0i}p_{i,i+1} + R_{0,i+1}p_{i+1,i+2} + \cdots + R_{04}p_{4T}$. Calculating the associated elements yields the geometric Jacobian:

$$J = \begin{bmatrix} 0 & s_1^2 c_2 q_4 + c_1^2 c_2 q_4 & 0 & s_2 \\ -c_1 c_2 q_4 & s_1 s_2 q_4 & 0 & -s_1 c_2 \\ -s_1 c_2 q_4 & -c_1 s_2 q_4 & 0 & c_1 c_2 \\ 1 & 0 & s_2 & 0 \\ 0 & c_1 & -s_1 c_2 & 0 \\ 0 & s_1 & c_1 c_2 & 0 \end{bmatrix}. \tag{3.19}$$

3.2.3.2 Analytical Jacobian

The analytical Jacobian can be computed by direct differentiation of the forward kinematics equation. Let $f : \mathbb{R}^n \rightarrow \mathbb{R}^p$ represent a mapping from joint space to task space; then the forward kinematics is represented by $x_T = f(q)$ and, by the chain rule,

$$\dot x_T = \frac{\partial f}{\partial q}\,\dot q = J_a(q)\,\dot q, \tag{3.20}$$

where the matrix $J_a(q) = \frac{\partial f}{\partial q}$ is termed the analytical Jacobian.

In general, the analytical Jacobian, $J_a(q)$, is different from the geometric Jacobian, $J(q)$, in the orientation part. In the case of the EndoBot, since the roll angle is available in direct form, the analytical Jacobian and the geometric Jacobian are identical:

$$J = \begin{bmatrix} 0 & s_1^2 c_2 q_4 + c_1^2 c_2 q_4 & 0 & s_2 \\ -c_1 c_2 q_4 & s_1 s_2 q_4 & 0 & -s_1 c_2 \\ -s_1 c_2 q_4 & -c_1 s_2 q_4 & 0 & c_1 c_2 \\ 0 & 0 & 1 & 0 \end{bmatrix}. \tag{3.21}$$
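The following sketch builds the analytical Jacobian of (3.21) and checks it against a finite-difference differentiation of the forward kinematics map. This is a standard sanity check rather than a procedure described in the thesis; the function names and the test configuration are illustrative.

```python
import numpy as np

def analytical_jacobian(q):
    """Analytical Jacobian of the EndoBot task map (3.13), eq. (3.21)."""
    q1, q2, q3, q4 = q
    s1, c1 = np.sin(q1), np.cos(q1)
    s2, c2 = np.sin(q2), np.cos(q2)
    return np.array([
        [0.0,            c2 * q4,       0.0,  s2],       # dx/dq   (s1^2 + c1^2 = 1)
        [-c1 * c2 * q4,  s1 * s2 * q4,  0.0, -s1 * c2],  # dy/dq
        [-s1 * c2 * q4, -c1 * s2 * q4,  0.0,  c1 * c2],  # dz/dq
        [0.0,            0.0,           1.0,  0.0],      # dphi/dq
    ])

def forward_kinematics(q):
    q1, q2, q3, q4 = q
    return np.array([np.sin(q2) * q4,
                     -np.sin(q1) * np.cos(q2) * q4,
                     np.cos(q1) * np.cos(q2) * q4,
                     q3])

if __name__ == "__main__":
    q = np.array([0.2, -0.3, 0.5, 0.12])
    J = analytical_jacobian(q)
    # Central-difference approximation of df/dq for comparison.
    eps = 1e-6
    J_num = np.column_stack([
        (forward_kinematics(q + eps * e) - forward_kinematics(q - eps * e)) / (2 * eps)
        for e in np.eye(4)])
    print(np.allclose(J, J_num, atol=1e-6))   # expected: True
```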

3.2.4 Singularity Analysis

Since the Jacobian is a function of the configuration q, singularities can occur at configurations where the rank of J decreases, i.e., where two or more of the columns of J become linearly dependent. When det(J) = 0, the robot is in a singular configuration, implying that there is a certain task space motion that cannot be achieved (and, conversely, a small task space motion would require a large joint space motion). In order to obtain reliable operation, it is necessary to identify the singular configurations of a manipulator, if they exist. To analyze the rank of the Jacobian matrix, consider the determinant given by

$$\det(J) = c_2\,q_4^2. \tag{3.22}$$

From (3.22), it is easy to find that the determinant of the Jacobian vanishes when

$$q_2 = \pm\frac{\pi}{2}, \qquad q_4 = 0. \tag{3.23}$$

This means a singular configuration occurs when $q_2 = \pm\frac{\pi}{2}$ or $q_4 = 0$. In the first case, the EndoBot would have to lie horizontal, and in the second case, the tip of the EndoBot would have to be at the center of the spherical joint. Both cases are outside of the normal workspace and therefore do not have to be considered.
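A short symbolic check of (3.22)-(3.23) is sketched below using SymPy (an assumption of this example; the thesis does not mention any symbolic tool). Depending on the row ordering of the Jacobian, the determinant evaluates to plus or minus c2*q4^2; in either case it vanishes exactly at q2 = ±π/2 or q4 = 0.

```python
import sympy as sp

q1, q2, q3, q4 = sp.symbols("q1 q2 q3 q4", real=True)
s1, c1 = sp.sin(q1), sp.cos(q1)
s2, c2 = sp.sin(q2), sp.cos(q2)

# Analytical Jacobian of the EndoBot task map, eq. (3.21).
J = sp.Matrix([
    [0,              c2 * q4,       0,  s2],
    [-c1 * c2 * q4,  s1 * s2 * q4,  0, -s1 * c2],
    [-s1 * c2 * q4, -c1 * s2 * q4,  0,  c1 * c2],
    [0,              0,             1,  0],
])

det_J = sp.simplify(J.det())
print(det_J)                         # +/- q4**2*cos(q2), cf. eq. (3.22)
print(det_J.subs(q2, sp.pi / 2))     # 0: singular when q2 = +/- pi/2
print(det_J.subs(q4, 0))             # 0: singular when q4 = 0
```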

3.3 Dynamics of the EndoBot

The closed-form dynamic equation of a robot manipulator can be obtained through the Lagrange-Euler formulation based on either D'Alembert's principle or Hamilton's principle. The Lagrange-Euler equation can be written in the form

$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q}\right) - \frac{\partial L}{\partial q} = Q, \tag{3.24}$$

where $L(q,\dot q) = T(q,\dot q) - P(q)$ represents the Lagrangian function, $T(q,\dot q)$ the kinetic energy, $P(q)$ the potential energy, $q \in \mathbb{R}^n$ the vector of generalized coordinates, $\dot q \in \mathbb{R}^n$ the vector of generalized velocities, and $Q \in \mathbb{R}^n$ the vector of generalized forces.

3.3.1 Lagrangian Formulation

The Lagrangian function of a robot manipulator with n joints is thus given by

$$L(q,\dot q) = \frac{1}{2}\dot q^T M(q)\dot q - P(q). \tag{3.25}$$

Thus, we have

$$\frac{\partial L}{\partial \dot q} = M(q)\dot q \tag{3.26}$$

$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q}\right) = M(q)\ddot q + \dot M(q,\dot q)\dot q \tag{3.27}$$

$$\frac{\partial L}{\partial q} = \frac{\partial}{\partial q}\left(\frac{1}{2}\dot q^T M(q)\dot q\right) - \frac{\partial P(q)}{\partial q}. \tag{3.28}$$

The general equations of motion for a rigid robot manipulator can be written as

$$M(q)\ddot q + \underbrace{\dot M(q,\dot q)\dot q - \frac{\partial}{\partial q}\left(\frac{1}{2}\dot q^T M(q)\dot q\right)}_{C(q,\dot q)\dot q} + \underbrace{\frac{\partial P(q)}{\partial q}}_{G(q)} = \tau, \tag{3.29}$$

where $\tau$ is the vector of generalized actuator forces. Define $C(q,\dot q)\dot q$ to be the vector of Coriolis and centrifugal forces and $G(q)$ the gravity force:

$$C(q,\dot q)\dot q = \dot M(q,\dot q)\dot q - \frac{\partial}{\partial q}\left(\frac{1}{2}\dot q^T M(q)\dot q\right) \tag{3.30}$$

$$G(q) = \frac{\partial P(q)}{\partial q}. \tag{3.31}$$

Thus, (3.29) becomes

$$M(q)\ddot q + C(q,\dot q)\dot q + G(q) = \tau. \tag{3.32}$$

3.3.2 Properties of Dynamic Model

The equation of motion has several properties which are useful for stability analysis, for deriving energy based control algorithms, and for dynamic parameter identification. These properties follow below.

Symmetric and positive definite matrix, M(q) - The mass-inertia matrix M(q) is symmetric and positive definite.

Skew-symmetric matrix, $\dot M(q) - 2C(q,\dot q)$ - This property is often referred to as the passivity property and it is useful for deriving control laws. Skew-symmetry means

$$x^T\left(\dot M(q) - 2C(q,\dot q)\right)x = 0 \quad \text{for all } x \in \mathbb{R}^n. \tag{3.33}$$

Linearity in the dynamic parameters - Through reparametrization, we can find a parameter vector X in which the system dynamics is linear. This property plays an important role in the experimental identification of the dynamic parameters. In view of this linearity property, we can write the equation of motion (3.32) more compactly as

$$\tau = \Phi^T(q,\dot q,\ddot q)\,X, \tag{3.34}$$

where $\Phi^T(q,\dot q,\ddot q)$ is the regressor, which is a function of the positions, velocities, and accelerations, and X is the vector of dynamic parameters.
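The linearity-in-parameters property (3.34) is easiest to see on a small example. The sketch below uses a hypothetical single-joint model, not the EndoBot dynamics, with assumed parameter values; it simply confirms that the torque computed from the regressor form matches the torque computed directly.

```python
import numpy as np

# Illustration of the linearity-in-parameters property (3.34) on a hypothetical
# single-joint model (not the EndoBot dynamics):
#     tau = (m*l^2)*qdd + Fv*qd + (m*g*l)*sin(q),
# which is linear in X = [m*l^2, Fv, m*g*l] with regressor
#     Phi^T(q, qd, qdd) = [qdd, qd, sin(q)].

def regressor(q, qd, qdd):
    return np.array([qdd, qd, np.sin(q)])

X = np.array([0.05, 0.8, 1.2])          # assumed parameter values
q, qd, qdd = 0.4, -1.3, 2.0             # an arbitrary state and acceleration

tau_direct = X[0] * qdd + X[1] * qd + X[2] * np.sin(q)
tau_linear = regressor(q, qd, qdd) @ X  # same torque via the regressor form
print(np.isclose(tau_direct, tau_linear))   # True
```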

3.3.3 Dynamic Model of the EndoBot

Due to the spherical joint motion of the first three joints of the EndoBot, it is convenient for the purpose of dynamic analysis to consider the system as two links composed of a rotational body and a translational body, as shown in Figure 3.8.

Figure 3.8: Simplified dynamic model of the EndoBot.

Define the inertial frame, B, and attach the body frame of the first rotational link, $O_1$, to the center of the spherical joint and the body frame of the second translational link, $O_2$, to the center of mass of the 2nd link. Let $l_i^c$ be the distance between B and the center of mass of the $i$th link and $m_i$ be the mass of each link. For simplicity, the inertia tensor relative to a frame attached at the center of mass of each link is assumed to be diagonal:

$$I_i^c = \begin{bmatrix} I_{ix} & 0 & 0 \\ 0 & I_{iy} & 0 \\ 0 & 0 & I_{iz} \end{bmatrix}. \tag{3.35}$$

Then, the total kinetic and potential energies of the EndoBot are given by

$$T = \frac{1}{2}\dot q^T J_1^T(q) M_1 J_1(q)\dot q + \frac{1}{2}\dot q^T J_2^T(q) M_2 J_2(q)\dot q \tag{3.36}$$

$$P = m_1 g^T (p_1^c)_0 + m_2 g^T (p_{02} + p_2^c)_0. \tag{3.37}$$

Finally, after substituting the Lagrangian $L = T - P$ into Lagrange's equation, we have the mass-inertia matrix, $M(q) \in \mathbb{R}^{4\times 4}$, for the EndoBot:

$$M(q) = \begin{bmatrix} M_{11} & M_{12} & M_{13} & 0 \\ M_{21} & M_{22} & M_{23} & 0 \\ M_{31} & M_{32} & M_{33} & 0 \\ 0 & 0 & 0 & M_{44} \end{bmatrix}, \tag{3.38}$$

where

$$\begin{aligned}
M_{11} &= (I_{1x} + I_{2x})c_2^2 + (I_{1z} + I_{2z})s_2^2 + m_2(l_2^c + q_4)^2 c_2^2 \\
M_{12} &= M_{21} = -(I_{1z} + I_{2z})s_1 s_2 \\
M_{13} &= M_{31} = (I_{1z} + I_{2z})s_2 \\
M_{22} &= (I_{1x} + I_{2x})c_1^2 + (I_{1z} + I_{2z})s_1^2 + m_2(l_2^c + q_4)^2 c_1^2 \\
M_{23} &= M_{32} = -(I_{1z} + I_{2z})s_1 \\
M_{33} &= I_{1z} + I_{2z} \\
M_{44} &= m_2.
\end{aligned} \tag{3.39}$$

The Coriolis and centrifugal force terms are given by

$$\begin{aligned}
C_1(q,\dot q)\dot q ={}& 2\left[I_{1z} + I_{2z} - I_{1x} - I_{2x} - m_2(l_2^c + q_4)^2\right]c_2 s_2\,\dot q_1\dot q_2 + 2 m_2(l_2^c + q_4)c_2^2\,\dot q_1\dot q_4 \\
&+ \left[I_{1x} + I_{2x} - I_{1z} - I_{2z} + m_2(l_2^c + q_4)^2\right]c_1 s_1\,\dot q_2^2 \\
&+ (I_{1z} + I_{2z})\left(c_1\dot q_2\dot q_3 + c_2\dot q_2\dot q_3 - s_1 c_2\,\dot q_2^2\right) \\[4pt]
C_2(q,\dot q)\dot q ={}& 2\left[I_{1z} + I_{2z} - I_{1x} - I_{2x} - m_2(l_2^c + q_4)^2\right]c_1 s_1\,\dot q_1\dot q_2 + 2 m_2(l_2^c + q_4)c_1^2\,\dot q_2\dot q_4 \\
&- (I_{1z} + I_{2z})\left(c_1 s_2\,\dot q_1^2 + c_1\dot q_1\dot q_3 + c_2\dot q_1\dot q_3 + c_2 s_2\,\dot q_1^2\right) \\
&+ \left[I_{1x} + I_{2x} + m_2(l_2^c + q_4)^2\right]c_2 s_2\,\dot q_1^2 \\[4pt]
C_3(q,\dot q)\dot q ={}& (I_{1z} + I_{2z})(c_2 - c_1)\,\dot q_1\dot q_2 \\[4pt]
C_4(q,\dot q)\dot q ={}& -m_2(l_2^c + q_4)\left(c_2^2\,\dot q_1^2 + c_1^2\,\dot q_2^2\right).
\end{aligned} \tag{3.40}$$

The gravity terms are

$$\begin{aligned}
G_1(q) &= -\left[m_1 l_1^c - m_2(l_2^c + q_4)\right]g\,s_1 c_2 \\
G_2(q) &= -\left[m_1 l_1^c - m_2(l_2^c + q_4)\right]g\,c_1 s_2 \\
G_3(q) &= 0 \\
G_4(q) &= m_2 g\,c_1 c_2.
\end{aligned} \tag{3.41}$$
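For reference, the sketch below assembles the mass matrix (3.38)-(3.39) and the gravity vector (3.41) as transcribed above, and numerically verifies the symmetry and positive definiteness of M(q). All numerical parameter values are placeholders invented for the example; the experimentally identified values appear later in the thesis.

```python
import numpy as np

# Placeholder parameters (assumed, NOT the identified EndoBot values).
I1x, I1z = 2.0e-3, 1.5e-3     # link 1 inertias [kg m^2]
I2x, I2z = 1.0e-3, 0.8e-3     # link 2 inertias [kg m^2]
m1, m2 = 0.9, 0.4             # link masses [kg]
l1c, l2c = 0.05, 0.10         # center-of-mass offsets [m]
g = 9.81

def mass_matrix(q):
    """Mass-inertia matrix M(q) of the EndoBot, eqs. (3.38)-(3.39)."""
    q1, q2, q3, q4 = q
    s1, c1, s2, c2 = np.sin(q1), np.cos(q1), np.sin(q2), np.cos(q2)
    r = l2c + q4
    M11 = (I1x + I2x) * c2**2 + (I1z + I2z) * s2**2 + m2 * r**2 * c2**2
    M22 = (I1x + I2x) * c1**2 + (I1z + I2z) * s1**2 + m2 * r**2 * c1**2
    M12 = -(I1z + I2z) * s1 * s2
    M13 = (I1z + I2z) * s2
    M23 = -(I1z + I2z) * s1
    return np.array([[M11, M12, M13, 0.0],
                     [M12, M22, M23, 0.0],
                     [M13, M23, I1z + I2z, 0.0],
                     [0.0, 0.0, 0.0, m2]])

def gravity_vector(q):
    """Gravity vector G(q) of the EndoBot, eq. (3.41)."""
    q1, q2, q3, q4 = q
    s1, c1, s2, c2 = np.sin(q1), np.cos(q1), np.sin(q2), np.cos(q2)
    k = m1 * l1c - m2 * (l2c + q4)
    return np.array([-k * g * s1 * c2, -k * g * c1 * s2, 0.0, m2 * g * c1 * c2])

if __name__ == "__main__":
    q = np.array([0.2, -0.3, 0.5, 0.12])
    M = mass_matrix(q)
    # Property check: M(q) symmetric and positive definite (Section 3.3.2).
    print(np.allclose(M, M.T), np.all(np.linalg.eigvalsh(M) > 0))
    print(gravity_vector(q))   # used for gravity compensation in manual mode (Ch. 5)
```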

3.4 Friction Modeling and Compensation

For many servo applications, joint friction is the main limitation to precision and performance. It can lead to stick-slip motions, static positioning errors, or limit cycle oscillations. Systematic lubrication should be implemented from the design stage to reduce frictional disturbances. Stiff (high gain) position control can reduce the frictional positioning error at the expense of a possibly destabilizing effect. Integral action is also a common alternative to reduce the steady-state error in constant-velocity applications. When the friction behavior can be predicted, it may be compensated by feedforward compensation as in Figure 3.9.

Figure 3.9: Model-based friction compensation.

The friction compensation requires knowledge of the friction model and the corresponding parameters. Friction is usually modelled as a map between an instantaneous friction force and velocity. Typical friction models are shown in Figure 3.10:

Figure 3.10: Classical friction models.

$$\text{a)}\quad \tau_f(\dot q) = F_c\,\mathrm{sgn}(\dot q) \tag{3.42}$$

$$\text{b)}\quad \tau_f(\dot q) = F_c\,\mathrm{sgn}(\dot q) + F_v\dot q \tag{3.43}$$

$$\text{c)}\quad \tau_f(\dot q) = \begin{cases} F_s\,\mathrm{sgn}(\dot q) & \text{if } \dot q = 0 \\ F_c\,\mathrm{sgn}(\dot q) + F_v\dot q & \text{otherwise} \end{cases} \tag{3.44}$$

$$\text{d)}\quad \tau_f(\dot q) = \begin{cases} F_s\,\mathrm{sgn}(\dot q) & \text{if } \dot q = 0 \\ F_c\,\mathrm{sgn}(\dot q) + (F_s - F_c)\,e^{-|\dot q/\dot q_s|^{\delta_s}} + F_v\dot q & \text{otherwise.} \end{cases} \tag{3.45}$$

The friction parameters can be estimated by observing the static friction-velocity curve. In [40, 39], a method of finding the friction parameters, Fc and Fv, experimentally was proposed. A series of constant input torques is applied to each joint and the corresponding steady-state joint velocities are recorded. Fitting the steady-state velocities versus input torques then gives the friction parameters.
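The steady-state fitting procedure described above reduces to a small linear least squares problem, since at constant velocity the applied torque balances the friction of model (3.43). The sketch below illustrates this on synthetic data; the friction values, noise level, and velocity set are assumptions, not measurements from the EndoBot.

```python
import numpy as np

# Fit the Coulomb + viscous friction model (3.43),
#     tau_f(qd) = Fc*sgn(qd) + Fv*qd,
# from steady-state (torque, velocity) pairs.  The "measurements" below are
# synthetic stand-ins for experimental data.
Fc_true, Fv_true = 0.15, 0.02                                   # assumed values
qd_ss = np.array([-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0])  # steady-state velocities [rad/s]
rng = np.random.default_rng(0)
tau_ss = Fc_true * np.sign(qd_ss) + Fv_true * qd_ss + 0.005 * rng.standard_normal(qd_ss.size)

# At steady state the applied torque balances friction, so the model is
# linear in [Fc, Fv] and a standard least squares fit applies.
A = np.column_stack([np.sign(qd_ss), qd_ss])
Fc_hat, Fv_hat = np.linalg.lstsq(A, tau_ss, rcond=None)[0]
print(Fc_hat, Fv_hat)   # close to the assumed 0.15 and 0.02
```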

3.5 Identification of Dynamic Parameters

The advanced model-based control methods to improve the performance require knowledge of the dynamic parameters. In most cases, computing the numerical values of the dynamic parameters directly from the design data of the mechanical structure is not feasible due to the complexity. One way to remedy this problem is to identify the dynamic parameters experimentally. Identification of robot dynamic parameters is basically a grey-box approach requiring the measured input and output signals together with a given system structure resulting from analytic (white-box) modeling. Due to the linearity in the dynamic parameters, the equation of motion can be written in terms of measurements of φ(t) and τ(t):

$$\tau(t) = \phi^T(t)\theta + \varepsilon(t), \tag{3.46}$$

where $\theta \in \mathbb{R}^n$ is the vector of unknown parameters and $\varepsilon(t)$ is the unmeasured noise signal. More compactly, with m measurements at given time instants $t_1, \ldots, t_m$,

Figure 3.11: Parametric system identification.

$$\bar\tau = \bar\phi\,\theta + \bar\varepsilon, \tag{3.47}$$

$$\bar\tau = \begin{bmatrix} \tau(t_1) \\ \tau(t_2) \\ \vdots \\ \tau(t_m) \end{bmatrix}, \quad \theta = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_n \end{bmatrix}, \quad \bar\varepsilon = \begin{bmatrix} \varepsilon(t_1) \\ \varepsilon(t_2) \\ \vdots \\ \varepsilon(t_m) \end{bmatrix}, \quad \bar\phi = \begin{bmatrix} \phi_1(t_1) & \phi_2(t_1) & \cdots & \phi_n(t_1) \\ \phi_1(t_2) & \phi_2(t_2) & \cdots & \phi_n(t_2) \\ \vdots & \vdots & & \vdots \\ \phi_1(t_m) & \phi_2(t_m) & \cdots & \phi_n(t_m) \end{bmatrix},$$

where $\bar\phi \in \mathbb{R}^{m\times n}$ is the matrix in which each row represents the regressor at one time instant, $\bar\tau \in \mathbb{R}^m$ is a vector of the $\tau(t_k)$, and $\bar\varepsilon \in \mathbb{R}^m$ is a vector of the $\varepsilon(t_k)$. Since $\bar\varepsilon$ is unknown, the exact solution for $\theta$ cannot be obtained. One attractive approach is to find $\theta$ so as to minimize the norm of $\bar\varepsilon$. Selecting the Euclidean norm results in the standard least squares problem:

$$\min_\theta \left\| \bar\phi\theta - \bar\tau \right\|. \tag{3.48}$$

Therefore, the identification problem turns out to be one of finding the unknown parameters θ from two sets of measurement data, data about motions and data about torques. The simplest linear estimation method, the least square method, can be applied with the overdetermined regressor matrix as in Figure 3.11. Appendix C gives a brief summary of the least square method. This can also be viewed as an unconstrained optimization problem with the cost function given by

$$\min_\theta J = \min_\theta \frac{1}{2}\bar\varepsilon^T\bar\varepsilon = \min_\theta \frac{1}{2}\left(\bar\phi\theta - \bar\tau\right)^T\left(\bar\phi\theta - \bar\tau\right). \tag{3.49}$$

The necessary and sufficient conditions for a minimum are

$$\frac{\partial J}{\partial\theta} = \bar\phi^T\bar\phi\,\theta - \bar\phi^T\bar\tau = 0 \tag{3.50}$$

$$\frac{\partial^2 J}{\partial\theta^2} = \bar\phi^T\bar\phi > 0. \tag{3.51}$$

Due to convexity of the cost function, the solution is a global optimum and it can be obtained from the necessary condition with the gradient of J:

$$\theta^{*}_{LS} = \left(\bar\phi^T\bar\phi\right)^{-1}\bar\phi^T\bar\tau. \tag{3.52}$$

It is important to note that the matrix $\bar\phi^T\bar\phi$ is identical to the Hessian of the cost function. In order to obtain the minimum, the Hessian must be positive definite, and this is equivalent to the persistent excitation condition on the input signals, which will be shown in the next section.
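The sketch below walks through (3.47)-(3.52) on the same kind of toy single-joint model used earlier (again an assumption, not the EndoBot regressor): a regressor matrix is stacked over an exciting trajectory, the least square estimate is computed, and the conditioning of the persistent matrix is reported as a first look at the excitation issue discussed next. A numerical least squares solver is used instead of forming the explicit inverse, which is a standard numerical precaution rather than something prescribed by the text.

```python
import numpy as np

# Least squares identification (3.47)-(3.52) on a toy single-joint model
#     tau = X1*qdd + X2*qd + X3*sin(q)   (illustrative, not the EndoBot).
t = np.linspace(0.0, 5.0, 500)
q = 0.8 * np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 1.7 * t)   # exciting trajectory
qd = np.gradient(q, t)
qdd = np.gradient(qd, t)

X_true = np.array([0.05, 0.8, 1.2])                          # assumed "true" parameters
Phi = np.column_stack([qdd, qd, np.sin(q)])                  # stacked regressor, eq. (3.47)
rng = np.random.default_rng(1)
tau = Phi @ X_true + 0.01 * rng.standard_normal(t.size)      # measured torque with noise

# theta_LS = (Phi^T Phi)^{-1} Phi^T tau, eq. (3.52); lstsq solves the same
# problem without forming the inverse explicitly.
theta_ls = np.linalg.lstsq(Phi, tau, rcond=None)[0]
print(theta_ls)                                              # close to X_true
print(np.linalg.cond(Phi.T @ Phi))                           # conditioning of the persistent matrix
```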

3.5.1 Input Signal Design

When parameter identification comes to real work, the practical question is how to excite the system. It is desirable to have systematic guidelines for designing the input signals. The following factors are considered in this research.

1. Persistent excitation Let $\bar\phi$ be the regression matrix and $\theta$ the unknown parameter vector. The degree of persistence of excitation should be high enough with respect to the number of unknown parameters in $\theta$ such that the contribution of each element in $\theta$ can show up separately and, consequently, $\theta$ can be identified with an invertible $\bar\phi^T\bar\phi$. The matrix $\bar\phi^T\bar\phi$ is often called the persistent matrix or the input correlation matrix. Therefore, persistence of excitation can be checked with either the input trajectory or the persistent matrix. A regression matrix $\bar\phi$ is called persistently exciting if there exist positive constants $\alpha_1$, $\alpha_2$ and $\delta$ such that

$$\alpha_1 I \le \int_t^{t+\delta} \bar\phi^T\bar\phi\,d\tau \le \alpha_2 I. \tag{3.53}$$

In the frequency domain, an input signal u(t) is said to be persistently exciting of order p if the condition on the power spectrum, $\Psi_u(\omega) \ne 0$, is satisfied at at least p distinct points in the interval $-\pi < \omega \le \pi$. The strict requirement on the degree of persistent excitation under a noise signal is not yet fully investigated. However, since an input signal satisfying the persistent excitation condition makes the regressors uncorrelated, persistent excitation can also be checked through the nonsingularity of the persistent matrix. Clearly, this invertibility of $\bar\phi^T\bar\phi$ is the requirement for existence of the least square solution. The main objective of the input design is to make the regressor vectors as uncorrelated among themselves as possible.

2. Condition number of the regressor
This property is a quality measure of the input signals that indicates how well conditioned the persistent excitation is. The condition number c(A) of a nonsingular matrix A is defined as

c(A) = \frac{\sigma_1}{\sigma_n},   (3.54)

where \sigma_1 \ge \ldots \ge \sigma_n > 0 are the singular values of A. This number also represents the sensitivity of the estimation to noise and unmodeled dynamics. A smaller condition number of the regression matrix, c(\bar{\phi}^T\bar{\phi}), results in better identification.

3. Constraints on input amplitudes
Large input amplitudes are desirable to provide a better signal-to-noise ratio (SNR) in the identification process. However, most applications impose constraints on the input signals. The input signals u(k) should be bounded so as to avoid introducing nonlinearity from input saturation:

umin ≤ u(k) ≤ umax, (3.55)

where umin and umax represent the minimum and maximum values of input signals respectively. It is desirable to have a way to impose this constraint into input design process.

4. Input spectrum
The spectrum of the input signal determines the frequencies where the exciting energy is allocated. An input signal whose spectrum is well distributed over the frequency bands where the system mostly operates yields high-quality parameter estimates. In this respect a white input signal may seem ideal since it excites all frequencies with the same weight. However, most real systems have a limited bandwidth, as in Figure 3.11, and it makes no sense to put energy beyond this bandwidth since the system acts as a low-pass filter. Therefore an input signal with user-specified design parameters that allocate the input power over desired frequency bands is advisable.

There have also been many efforts to find an optimal input trajectory for identifying robot dynamic parameters. A common criterion is minimization of the condition number of the regression matrix. This minimization problem is difficult to solve and usually very time consuming. For this reason, existing off-the-shelf input signals were investigated for the experimental identification.

Summed multisinusoidal signal
A sum of different sinusoidal signals is effective for relatively simple dynamic models requiring a small number of parameters. It has a discrete point spectrum and can provide excitation at selected frequencies. Conceptually it is simple to generate, and it has the advantage of a long excitation duration at each frequency. However, it demands many trial-and-error iterations to select appropriate discrete frequencies and relative amplitudes, and the quality of the identification can vary with the selected frequencies and amplitudes.

Chirp signal A chirp signal is a single sinusoid with a time varying frequency:

u(t) = sin(2πf(t)t), (3.56)

where f(t) is the time-varying frequency. Common choices of f(t) are linear or logarithmic functions of time. The advantage of the chirp signal is that the system can be excited over the entire specified frequency band. One shortcoming is the trade-off between the duration of excitation at each frequency and the overall identification time: the identification time can be reduced with a fast sweep from the lowest to the highest frequency, but this results in a relatively short excitation at each frequency.

PRBS (pseudo random binary sequence)
A pseudo random binary sequence is a signal that shifts between two binary levels in a certain sequence. It differs from an RBS (random binary sequence) in that it is periodic and deterministic, so the exact sequence can be regenerated from the same starting point. The PRBS can be generated by means of shift registers and Boolean algebra as in Figure 3.12. The PRBS is characterized by two parameters: the switching time (T_sw) and the number of registers (n_r).

Figure 3.12: Generation of PRBS.

The switching time is the minimum time between changes in the binary level, and the maximum length of a sequence is 2^{n_r} − 1.

The main usefulness of the PRBS lies in the fact that it resembles white noise in discrete time and thus excites all frequencies equally well. The PRBS also uses the maximum power of the input signal and consequently has a high signal-to-noise ratio. Its shortcoming is that it is not possible to design an arbitrary spectrum profile, such as multiple bands. For mechanical systems, exciting with a PRBS tends to give noisy velocity and acceleration signals when these are obtained by numerically differentiating the measured positions.
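The sketch below generates a two-level PRBS from a linear feedback shift register in the spirit of Figure 3.12; the register length, feedback taps, and switching time are illustrative, and a maximal-length sequence additionally requires taps corresponding to a primitive polynomial.

```python
import numpy as np

def prbs(n_registers: int, n_samples: int, switching_time: int, taps=(0,)):
    """Generate a +/-1 PRBS from an XOR feedback shift register.

    The maximum sequence length is 2**n_registers - 1; switching_time
    (in samples) is the minimum time between level changes; taps lists
    the extra register indices XORed with the last stage for feedback.
    """
    state = [1] * n_registers                        # any nonzero seed works
    levels = []
    for _ in range(-(-n_samples // switching_time)):  # ceiling division
        feedback = state[-1]
        for t in taps:
            feedback ^= state[t]
        levels.append(1.0 if state[-1] else -1.0)
        state = [feedback] + state[:-1]              # shift the register
    return np.repeat(levels, switching_time)[:n_samples]

u = prbs(n_registers=7, n_samples=400, switching_time=4)
```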

Schroeder-phased signal

Consider the periodic and deterministic multisinusoidal signal:

u_s(k) = \lambda \sum_{i=1}^{n_s} \sqrt{2\alpha_i}\,\cos(\omega_i k T + \phi_i),   (3.57)

where \lambda is the scaling factor that keeps the time-domain amplitude bounded by \pm u_{sat}, T is the sampling time, n_s is the number of sinusoids in one signal period, \alpha_i is the relative power of each component, \omega_i = \frac{2\pi i}{N_s T}, N_s is the number of samples in one signal period (n_s \le N_s/2), and \phi_i is the phase shift of each harmonic. In order to minimize peaking of the excitation in the time domain while maintaining the same frequency content, Schroeder [48] proposed that the phase of each sinusoid take the form

\phi_i = 2\pi \sum_{j=1}^{i} j\,\alpha_j.   (3.58)

The total power is normalized as

\sum_{i=1}^{n_s} \alpha_i = 1,   (3.59)

where the relative powers \{\alpha_i > 0,\ i = 1, \ldots, n_s\} are specified. n_s directly determines the order of the persistent excitation and N_s determines the low-frequency end of the Schroeder-phased spectrum.

Figure 3.13: Schroeder-phased signal design.

The Schroeder-phased signal can produce an almost flat spectrum. Its most distinctive advantage over the other signals is the capability of designing an arbitrary desired power spectrum profile.
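A minimal sketch of (3.57)-(3.59) follows; it defaults to a flat spectrum (\alpha_i = 1/n_s) and scales the result to \pm u_{sat}, with all numerical values in the usage line chosen only for illustration.

```python
import numpy as np

def schroeder_signal(n_s, N_s, T, u_sat, alpha=None):
    """Schroeder-phased multisine over one period, following (3.57)-(3.59)."""
    if alpha is None:
        alpha = np.full(n_s, 1.0 / n_s)       # flat relative power
    alpha = np.asarray(alpha, dtype=float)
    alpha = alpha / alpha.sum()               # enforce the normalization (3.59)
    i = np.arange(1, n_s + 1)
    omega = 2.0 * np.pi * i / (N_s * T)       # harmonic frequencies
    phi = 2.0 * np.pi * np.cumsum(i * alpha)  # Schroeder phases (3.58)
    k = np.arange(N_s)
    u = (np.sqrt(2.0 * alpha)[:, None]
         * np.cos(np.outer(omega, k * T) + phi[:, None])).sum(axis=0)
    return u_sat * u / np.max(np.abs(u))      # scaling factor lambda

u = schroeder_signal(n_s=20, N_s=400, T=0.001, u_sat=0.5)
```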

Figure 3.14: Comparison of input signals in time domain: (a) multisinusoid, (b) chirp, (c) PRBS, (d) Schroeder-phased.

Figure 3.14 compares the input signals in the time domain and Figure 3.15 shows the power spectrum of each signal. As shown in Figure 3.15, a flat spectrum profile can be achieved with the Schroeder-phased signal. Input signals play an important role and have a major effect on parameter identification. Each signal described above has tuning parameters that need to be adjusted in order to excite the system with rich frequency content. The Schroeder-phased signal provides several attractive properties compared to the other signals, and therefore the identification in this thesis was performed with this signal.

Figure 3.15: Comparison of power spectra of input signals: (a) multisinusoid, (b) chirp, (c) PRBS, (d) Schroeder-phased.

3.5.2 Dynamic Models for Parameter Identification
Two identification models have been developed for general robot dynamics. The first is the differential model, which requires measurements of position, velocity, acceleration, and joint torque. For a practical identification with this model, accurate acceleration measurement is key to success, but it is a substantial challenge: robots are seldom equipped with accelerometers, owing to cost and installation problems, and the alternative is a numerical reconstruction of the acceleration by twice differentiating the joint positions, at the expense of amplifying the noise. The second is the integral model, often referred to as the energy model because it comes from the energy theorem, which requires only measurements of positions, velocities, and joint torques; it does not depend explicitly on the accelerations. Figure 3.16 shows the principles of the two identification methods.

Figure 3.16: Comparison of two identification methods.

3.5.2.1 Use of Differential Model
The dynamic equation of the robot's rigid-body model with friction can be written as

M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) + N(q,\dot{q}) = \tau,   (3.60)

where N(q,\dot{q}) is the friction. More compactly, it can be written as

\tau = D(q,\dot{q},\ddot{q})\,X.   (3.61)

X is the unknown dynamic parameter vector, composed of the inertial parameters and the friction parameters. D(q,\dot{q},\ddot{q}) is the kinematic information matrix describing the motions of the robot; it is also referred to as the regression matrix or the observation matrix. If measurements are made at given time instants t_1, \ldots, t_m along a given trajectory, we may write

\bar{\tau} = [\tau(t_1), \ldots, \tau(t_m)]^T   (3.62)

\bar{D} = [D(t_1); \ldots; D(t_m)].   (3.63)

Once enough measurements have been collected to make \bar{D}(q,\dot{q},\ddot{q}) overdetermined, the unknown dynamic parameter vector X can be determined by least squares estimation:

X = (\bar{D}^T\bar{D})^{-1}\bar{D}^T\bar{\tau},   (3.64)

where (\bar{D}^T\bar{D})^{-1}\bar{D}^T is the left pseudo-inverse of \bar{D}. Note that it is important to use the minimum set of (nonredundant) dynamic parameters, referred to as the base dynamic parameters, obtained by grouping parameters so that the matrix D is always full rank [47].

3.5.2.2 Use of Energy Model
In order to avoid any numerical differentiation of the velocity to obtain the joint accelerations, a model based on the energy theorem has been used [49, 50, 55]. The energy theorem states that the total mechanical energy supplied to the system equals the change of the total energy of the system:

\int_{t_a}^{t_b} \tau_e^T \dot{q}\, dt = H(t_b) - H(t_a),   (3.65)

where H(t_i) is the Hamiltonian of the system at time instant t_i and \tau_e does not include the friction torques. Since the total energy (kinetic plus potential) is linear in the dynamic parameters, it can be written that

\int_{t_a}^{t_b} \tau_e^T \dot{q}\, dt = H(t_b) - H(t_a) = \Delta e(q,\dot{q})\,\chi,   (3.66)

where \chi is the inertial parameter vector without the friction coefficients. In order to account for the friction torques, we can write

τ = τe + τf . (3.67)

Substitution of (3.67) into (3.66) results in

\int_{t_a}^{t_b} \tau^T \dot{q}\, dt = W(t_b) - W(t_a) = \Delta e(q,\dot{q})\,\chi + \int_{t_a}^{t_b} \tau_f^T \dot{q}\, dt.   (3.68)

Then,

\int_{t_a}^{t_b} \tau^T \dot{q}\, dt = \Delta W = \Delta E(q,\dot{q})\, X,   (3.69)

where X is the dynamic parameter vector including the friction parameters. If measurements are made over m time intervals, we may write

\Delta\bar{W} = [\Delta W(\Delta t_1), \ldots, \Delta W(\Delta t_m)]^T   (3.70)

\Delta\bar{E} = [\Delta E(\Delta t_1); \ldots; \Delta E(\Delta t_m)].   (3.71)

The energy model in (3.69) has the same form as (3.61) and a left pseudo-inverse of ∆E¯ can be applied to get the least-square approximation of the dynamic parameters:

X = (\Delta\bar{E}^T \Delta\bar{E})^{-1}\Delta\bar{E}^T \Delta\bar{W}.   (3.72)

The beauty of this approach is that the dynamic parameters are identified without the joint accelerations.

3.5.3 Experimental Identification of Dynamic Parameters
It is in principle possible to identify all the dynamic parameters by exciting all links simultaneously with carefully chosen input trajectories. However, from the implementation viewpoint of inverse dynamics control with the Coriolis term neglected (justified by the low velocities of a surgical robot), or of PD control with gravity and friction compensation, only the inertia, gravity, and friction terms need to be identified. It was therefore decided to identify the dynamic parameters of each link independently, while the other links remain at their synchronization positions, to simplify the identification and avoid dimensionality problems. By moving one joint at a time, the procedure is equivalent to identifying a one-link manipulator. The dynamic equation of a one-link manipulator with the friction model of (3.43) can be written as

I\ddot{q} + mgl\sin(q) + F_c\,\mathrm{sgn}(\dot{q}) + F_v\,\dot{q} = \tau,   (3.73)

where I is the inertia term, Fv is the coefficient of viscous friction of the joint, and

Fc is the coefficient of Coulomb friction of the joint. With the differential model, the regressor vector can be written by

D(q,\dot{q},\ddot{q}) = [\,\ddot{q} \;\; g\sin(q) \;\; \mathrm{sgn}(\dot{q}) \;\; \dot{q}\,].   (3.74)

Considering the total energy in order to obtain the energy model, we have

\int_{t_a}^{t_b} \tau^T \dot{q}\, dt = \tfrac{1}{2}(\dot{q}^2(t_b) - \dot{q}^2(t_a))\, I + (\cos(q(t_b)) - \cos(q(t_a)))\, mlg + \int_{t_a}^{t_b} |\dot{q}|\, dt\; F_c + \int_{t_a}^{t_b} \dot{q}^2\, dt\; F_v

= \left[\tfrac{1}{2}(\dot{q}^2(t_b) - \dot{q}^2(t_a)) \quad g(\cos(q(t_b)) - \cos(q(t_a))) \quad \int_{t_a}^{t_b} |\dot{q}|\, dt \quad \int_{t_a}^{t_b} \dot{q}^2\, dt\right] [\,I \;\; ml \;\; F_c \;\; F_v\,]^T.

By moving one joint at a time, we obtain the inertial parameters and friction coefficients of each joint, collected in the 4 × 1 vector

X = [X_1\ X_2\ X_3\ X_4]^T = [\,I \;\; ml \;\; F_c \;\; F_v\,]^T.   (3.75)
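For the one-link energy model above, each measurement interval contributes one row of the regressor and one energy value; the sketch below builds such a row by trapezoidal integration of sampled data (the integration rule and the function name are choices made here, not taken from the thesis).

```python
import numpy as np

def energy_regressor_row(t, q, qdot, tau, g=9.81):
    """One row of the one-link energy model: row @ [I, ml, Fc, Fv] ~ dW.

    t, q, qdot, tau are samples over one interval [t[0], t[-1]];
    dW is the integral of tau*qdot over that interval.
    """
    row = np.array([
        0.5 * (qdot[-1]**2 - qdot[0]**2),        # multiplies I
        g * (np.cos(q[-1]) - np.cos(q[0])),      # multiplies ml
        np.trapz(np.abs(qdot), t),               # multiplies Fc
        np.trapz(qdot**2, t),                    # multiplies Fv
    ])
    dW = np.trapz(tau * qdot, t)
    return row, dW

# Stacking rows and dW values over many intervals gives the matrices
# used in the least squares solution (3.72).
```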

Neglecting the electrical dynamics of the permanent-magnet brushed DC motors, the joint torques can be obtained from the input control voltages in current-drive mode:

τ = Ka · Kt · Kg · vc, (3.76) where Ka is the amplifier gain, Kt is the torque constant, which characterizes the electromechanical conversion of armature currents to the torques, Kg is the gear ratio, and vc is the input control voltages. Table 3.1 shows these parameters used in experimental identification.

Table 3.1: Parameters of the electrical system.

Parameter        Joint 1    Joint 2    Joint 3    Joint 4
K_a [amp/volt]   0.375      0.375      0.375      0.375
K_t [Nm/amp]     0.01342    0.01342    0.00652    0.00996
K_g              134        134        72.88      66

3.5.3.1 Identification of the Friction Coefficients in Energy Model
In the energy model, the columns of the regression matrix associated with the Coulomb and viscous coefficients are prone to linear dependence. The functions |\dot{q}| and \dot{q}^2 are correlated, so a strong linear dependency exists between these columns, making it difficult to separate their contributions to the regressor and, consequently, to identify each coefficient separately. As an experimental verification, the Schroeder-phased input signals were applied to the first joint of the EndoBot and the regression matrix was obtained. The coefficient of cross correlation between column vectors of the regression matrix gives an indication of correlation:

r_{XY} = \frac{\mathrm{cov}(X, Y)}{\sigma_X \sigma_Y},   (3.77)

where r_{XY} is the coefficient of cross correlation between vectors X and Y, \mathrm{cov}(X, Y) = E[(X - E(X))(Y - E(Y))] is the covariance, and \sigma_X and \sigma_Y are the standard deviations.

Table 3.2: Coefficients of cross correlation.

column   1   2         3        4
1        -   -0.8219   0.0004   0.0
2        -   -         0.1651   0.2113
3        -   -         -        0.9442
4        -   -         -        -

Table 3.2 compares the cross-correlation coefficients between the column vectors. Clearly, the correlation between the third and fourth columns is greater than between any other pair. This problem can be solved by identifying the friction parameters first and then compensating the corresponding friction torques by subtracting them from the generalized torques. The identification of the friction parameters is performed by closing a velocity control loop as in Figure 3.17.
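The correlation check of (3.77) is straightforward to compute; a small sketch follows, with the column indices in the usage comment being illustrative.

```python
import numpy as np

def cross_correlation(x, y):
    """Coefficient of cross correlation r_XY as in (3.77)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# For an energy-model regressor E whose last two columns multiply Fc and Fv:
# r = cross_correlation(E[:, 2], E[:, 3])   # a value near 1 signals near-dependence
```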

Table 3.3 shows the identified friction parameters; E_ij stands for the jth link of the ith EndoBot.

Figure 3.17: Block diagram of velocity control loop.

Table 3.3: Friction parameters of each link.

Link   F_c [N·m]   F_v [N·m·s]
E11    0.127       0.044
E12    0.120       0.067
E13    0.029       0.847
E14    0.069       0.742
E21    0.149       0.029
E22    0.125       0.054
E23    0.019       0.795
E24    0.045       0.484

3.5.3.2 Experimental Results
Figure 3.18 shows the input torques and the measured positions and velocities for the first joint of the EndoBot.

Figure 3.18: Trajectories of the first joint (position, velocity, and input torque versus time).

The estimated dynamic parameters are plotted in Figure 3.19, which shows them converging to constant values as expected. The overall identification results are summarized in Table 3.4. The data collected in this experiment were the joint positions and input joint torques, sampled at 100 Hz; the velocity was then calculated using the washout filter described below.


Figure 3.19: Estimates of the dynamic parameters of the first joint.

Table 3.4: Estimates of the dynamic parameters of the EndoBots.

Link   I [kg·m²] or [kg]   ml [kg·m]   F_c [N·m]   F_v [N·m·s]
E11    0.0487              0.1273      0.127       0.044
E12    0.0441              0.1122      0.120       0.067
E13    0.0006              -           0.029       0.847
E14    0.2451              -           0.069       0.742
E21    0.0528              0.1305      0.149       0.029
E22    0.0427              0.1155      0.125       0.054
E23    0.0005              -           0.019       0.795
E24    0.2605              -           0.045       0.484

The washout filter is a velocity estimator consisting of a low-pass filter, 1/(s/p + 1), and a differentiator, s, so it provides both smoothing and differentiation in a single filtering step. The washout filter is the proper transfer function

f_{washout}(s) = \frac{s}{s/p + 1},   (3.78)

where p determines where the roll-off occurs. The discrete washout filter was implemented by the backward-difference approximation with the pole located at p = 100:

f_{washout}(z) = \frac{p(z - 1)}{(pT_s + 1)z - 1},   (3.79)

where T_s = 0.001 s is the sampling time.
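Written as a difference equation, the discrete washout filter (3.79), as reconstructed above, can be applied to a logged position trajectory as in the following sketch; the test signal at the end is illustrative.

```python
import numpy as np

def washout_velocity(q, Ts=0.001, p=100.0):
    """Velocity estimate from sampled positions using the washout filter (3.79).

    From v(z)/q(z) = p(z - 1)/((p*Ts + 1)z - 1):
        v[k] = (v[k-1] + p*(q[k] - q[k-1])) / (p*Ts + 1)
    """
    q = np.asarray(q, dtype=float)
    v = np.zeros_like(q)
    a = p * Ts + 1.0
    for k in range(1, len(q)):
        v[k] = (v[k - 1] + p * (q[k] - q[k - 1])) / a
    return v

# Example: estimate the velocity of a 1 Hz sinusoidal joint trajectory.
t = np.arange(0.0, 2.0, 0.001)
v_est = washout_velocity(np.sin(2.0 * np.pi * t))
```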

3.5.4 Validation of Parameter Identification
Validating the identified parameters is a crucial step before proceeding to controller design. In this research, the parameters were validated by simulating the model with the identified parameters and comparing it with the real output for the same applied torque trajectory. The difference between the measured output y(t) and the estimated output \phi^T\hat{\theta} is called the residual.


Figure 3.20: Comparison of measured and simulated output trajectories.

A residual analysis against the inputs was performed to check independence from the input:

\varepsilon(t) = y(t) - \hat{y}(t|\hat{\theta}) = y(t) - \phi^T(t)\hat{\theta}.   (3.80)

The coefficient of cross correlation between the residual and the input is very small (r_{\varepsilon u} = 0.0818), indicating that the residuals are nearly uncorrelated with the inputs, so the obtained model can be used. Figure 3.20 compares the simulated and measured output trajectories. The small difference between them results from simplifying assumptions in the model, such as the neglected electrical dynamics, friction coefficients assumed to be position independent when they are in fact position dependent, and unmodeled nonlinearities. Therefore, it is desirable to design a robust controller that can withstand these variations.

CHAPTER 4
SUTURING IN MINIMALLY INVASIVE SURGERY

4.1 Suturing Task Analysis
From closing wounds with ant heads in ancient times to today's laparoscopic stitching with absorbable materials, suturing has a long history. It remains one of the most difficult tasks in MIS and consumes a significant percentage of operating time. Although there is ongoing research on alternatives to suturing, such as tissue gluing, stapling, and thermal tissue bonding, suturing is still the primary tissue approximation method. However, laparoscopic suturing demands a high level of skill and endurance from surgeons. Despite the need for it to be performed autonomously, no research efforts on autonomous suturing have been published. It is therefore worthwhile to decompose the suturing task into subtasks and identify the technical difficulties encountered in each. There are several knot tying techniques in surgery [17]; two of them are shown in Figure 4.1.

Square Knot - This is the easiest and most reliable method for all suture materials. It consists of two half knots in opposing directions.

Surgeon's Knot - This knot requires an additional loop in the first throw, which increases the surface contact of the suture. This doubles the friction of the initial half knot, making it more stable than the square knot. It is generally used in open surgery and is sometimes referred to as a friction knot.

Figure 4.1: Knot tying techniques.


The most common knot technique in surgery is the square knot. It is an ideal knot for laparoscopic surgery because it requires fewer steps than the surgeon's knot [17]. For this reason, only the square knot technique has been considered for analysis in this thesis. Based on observations of manual suturing operations, the suturing task can be broken down into the following five subtasks:

1. stitch,

2. create a suture loop,

3. develop a knot,

4. place a knot, and

5. secure a knot.

The following are the technical difficulties related to each subtask.

Stitch The stitching subtask involves entrance and exit bites, extracting the needle, and pulling out the suture. The task analysis shows that the greatest difficulty in this subtask is manipulating the curved needle between two graspers without slipping or dropping the needle.

Create a suture loop In the loop-creating subtask, a wrapping motion is required to create a suture loop with the loop strand. It is the most difficult motion to implement autonomously because the trajectory of the suture is not predictable due to its flexibility. The surgeon creates the loop based on visual feedback and personal experience.

Develop a knot Once the loop is created, either the needle or the short tail must be passed through the loop to develop a knot. The difficulty is grasping the short tail with one of the graspers and pulling the suture through the loop.

Place a knot Once the knot is developed, it needs to be placed into the niche. Placing the knot requires sliding it. The difficulties are determining the sliding condition and obtaining the optimal trajectory that minimizes the tearing trauma on the tissue.

Secure a knot Once the knot is completed, sufficient tension must be applied to tighten it, but not so much as to damage the tissue. With laparoscopic instruments, which behave like long sticks, it is very hard to apply tension precisely enough not to tear the soft tissue or leave the knot loose. The technical difficulty is regulating the tension along the desired direction.

From this analysis, the primary difficulties are 1) manipulating the curved needle, 2) dealing with a flexible suture, 3) obtaining a trajectory for knot placement, and 4) applying proper tension. It should be pointed out that there is an attractive device for laparoscopic suturing, called a shuttle needle device. The basic idea is to transfer the needle between two jaws, with one jaw locking the tapered tip of the needle each time. A disposable shuttle needle device is available on the market from USSC (US Surgical Corp.). The advantage of this design is that it eliminates the difficulty of manipulating the curved needle, and its usage in MIS is continuously increasing. For this reason, we adopted this device as our suturing instrument and modified it to be operated by pneumatic power. The remaining difficulties are then how to create the knot with a flexible suture, place the knot into the niche, and apply the proper tension. The first two are discussed in the following sections and the last is covered in the tension control section.

Figure 4.2: Grasper with a needle in conventional suturing.

Figure 4.3: Shuttle needle device.

4.2 Suture Length Tracking
During autonomous suturing, tracking the effective length of the suture is critical for pulling out the suture after the exit bite and for keeping within the boundary of motion when securing the knot. In teleoperated surgery, the surgeon can track the length of the suture through visual feedback; in autonomous suturing, tracking the suture length is more challenging. This section describes our solution for keeping track of the suture length in real time without extra sensor information. In this analysis, we consider the suture attached to the middle of the needle as shown in Figure 4.4. The suture end attached to the needle is referred to as the needle tail or long tail, and the free end as the free tail or short tail. The length of the suture is then measured from the exit point to the middle of the needle. The basic concept behind tracking the suture length is that the effective length from the exit point can be calculated by tracking the position of the needle, which is available from the encoders and the robot forward kinematics. If the suture is pulled continuously by moving the needle holder in one direction, the rate of increase of the suture length is the same as the rate of movement of the tip of the needle holder. However, if the direction changes, the rates are no longer the same; for example, if the needle moves toward the exit point, the rate of change of the suture length is zero. This is because the suture must be in tension for its length to increase.

Figure 4.4: Components of a suture.

Let P_o be the position of the exit point and P the current tip position of the needle holder. Define \xi = P - P_o. \dot{P} is the velocity vector of the needle and \dot{L} (a scalar) is the rate of change of the suture length. To keep track of the suture length, we need a relationship between \dot{L} and \dot{P}. If \dot{P} and \xi form an obtuse angle (\dot{P}\cdot\xi < 0), the suture is not in tension and therefore \dot{L} = 0. If \dot{P} and \xi form an acute angle (\dot{P}\cdot\xi > 0), we need to consider two possibilities: if \|\xi\| \ge L the suture is in tension; otherwise it is not and \dot{L} = 0. The rate of change of the suture length is summarized below:

\dot{L}(t) = 0 if \xi(t)\cdot\dot{P}(t) \le 0 or \|\xi(t)\| < L(t),
\dot{L}(t) = \xi^T(t)\dot{P}(t)/\|\xi(t)\| if \xi(t)\cdot\dot{P}(t) > 0 and \|\xi(t)\| \ge L(t).   (4.1)
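A direct implementation of (4.1) is sketched below; the taut-suture branch uses the projection of the tip velocity onto \xi, consistent with the extension rate used later in (4.4), and the update loop in the closing comment is only illustrative.

```python
import numpy as np

def suture_length_rate(P, P_dot, P_o, L):
    """Rate of change of the effective suture length, following (4.1).

    P, P_dot : needle-holder tip position and velocity
    P_o      : exit-point position
    L        : currently tracked suture length
    """
    xi = np.asarray(P, dtype=float) - np.asarray(P_o, dtype=float)
    dist = np.linalg.norm(xi)
    if xi @ P_dot <= 0.0 or dist < L:
        return 0.0                       # suture slack: length unchanged
    return float(xi @ P_dot / dist)      # suture taut: tip velocity along xi

# Integrate at the servo rate to track the length:
#   L += suture_length_rate(P, P_dot, P_o, L) * dt
```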

4.3 Motion Control in the Stitching Task
4.3.1 Dynamic Modeling of Stitching
Whenever the robot performs the stitching task, additional force is required to pull the suture out of the tissue. Let f denote the force applied to the end of the suture. Then the dynamic equation of the EndoBot during stitching can be written as

M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \tau - J^T f,   (4.2)

where q is the joint displacement vector, M(q) is the inertia matrix, C(q,\dot{q})\dot{q} is the vector of Coriolis and centrifugal forces, G(q) is the gravitational force, J(q) is the Jacobian matrix, \tau is the applied joint torque, and f is the suture tension force exerted by the robot at the end-effector.

4.3.1.1 Kinematics of the Suture Motion
The suture extension is the displacement of the end of the suture from the stitch position on the tissue, and it is a function of the joint angles. Its peculiarity is that the extension is directional and its rate must be nonnegative. Let the stitch position be the origin and x(t) the current Cartesian position obtained from the forward kinematic mapping. Then

\ell(t) = \sqrt{x^T(t)x(t)} if the suture is in tension, and \ell(t) = \ell^p_{max} otherwise,   (4.3)

where \ell^p_{max} is the previous maximum extension. The suture extension rate is then

\dot{\ell}(t) = \frac{1}{2}\,\frac{2x^T(t)\dot{x}(t)}{\sqrt{x^T(t)x(t)}} = \frac{x^T(t)\dot{x}(t)}{\|x(t)\|} = \frac{K(q(t))^T J\dot{q}(t)}{\|K(q(t))\|} if in tension, and \dot{\ell}(t) = 0 otherwise.   (4.4)

4.3.1.2 Static Model of the Suture Tension
For simplicity, we assume that the suture is massless and hence has no dynamics; in practice this is a good approximation since the suture has negligible mass. We further assume that the suture is inextensible. While the suture is being pulled out, the force applied to the suture in tension is

f = \frac{x(t)}{\|x(t)\|}\left(F_s\,\mathrm{sgn}(\dot{\ell}) + F_v\,\dot{\ell}\right),   (4.5)

where \dot{\ell} is the suture extension rate as the suture is pulled out by the robot, and F_s and F_v respectively denote the static and viscous friction coefficients between the suture and the tissue. The important point is that the suture tension force f must be nonnegative, because the suture cannot generate a negative tension force, owing to the nonnegativity of \dot{\ell}(t).

Then, the dynamics become

M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \tau - J^T\,\frac{x(t)}{\|x(t)\|}\left(F_s\,\mathrm{sgn}(J\dot{q}) + F_v\, J\dot{q}\right).   (4.6)

4.3.2 Problem Formulation
During autonomous stitching, the tension applied to the end of the suture must remain below f_{max}, and the joint variables should be regulated to the desired position independently of the initial conditions. The control objective is to choose the controller that solves the following regulation problem.

Given the dynamic equation (4.6) and q_d, \ell_d, and f_{max}, with initial conditions q(0), \dot{q}(0) = 0, and \ell(0), determine a feedback control law \tau so that the closed-loop system satisfies

q(t) \to q_d as t \to \infty,
\dot{q}(t) \to 0 as t \to \infty,
\ell(t) \to \ell_d as t \to \infty,
f(t) \le f_{max} \;\forall t,

where q_d is the desired position, \ell_d is the desired suture length after stitching, and f_{max} is the force limit above which the tissue will be torn.

4.3.3 Region of the Feasible Motion
Once a needle has been passed through the tissue for a new stitch, the finite suture length creates a geometric constraint. The tip position of the EndoBot must satisfy

\|K(q(t)) - K(q(0))\| \le \ell_d.   (4.7)

This means that all motions, as well as the desired position, must lie in the domain of feasible stitching motion.

4.3.4 Problem Simplification
From the requirement on the desired suture extension, the control objective can be broken down into two sequential steps. First, the needle must reach the surface of the region of feasible stitching motion while satisfying the tension inequality:

q(t) \to q_d as t \to \infty, \quad f(t) \le f_{max} \;\forall t,   (4.8)

where q_d is the desired joint position at which the needle lies on that surface, which is equivalent to \ell(t) \to \ell_d. Then the robot can move to the desired position:

q(t) \to q_d as t \to \infty, \quad f(t) = 0 \;\forall t.   (4.9)

The second problem is purely a position control problem, owing to the directional characteristics of the suture tension, and its stability is already well established.

The simplified control problem is given by

Given \ell_d and f_{max}, design a feedback control law \tau such that \ell(t) \to \ell_d as t \to \infty and f(t) \le f_{max} \;\forall t.

4.3.5 Problem Transformation
• Case I.
It is worthwhile to look at the force applied to the suture in tension in (4.5). The tension force is positive only if the suture extension rate is positive, and the extension rate \dot{\ell}(t) comes from the motion of the robot. From this point of view, we can transform the force inequality constraint into a motion constraint. Let f_{max} be the maximum tension force under which the suture is pulled out:

f_{max} = F_s\,\mathrm{sgn}(\dot{\ell}_{max}) + F_v\,\dot{\ell}_{max}   (4.10)

\dot{\ell}_{max} = \frac{f_{max} - F_s}{F_v}.   (4.11)

Then the simplified control problem described above can be written as follows.

Given \ell_d and \dot{\ell}_{max}, design a feedback control law \tau such that \ell(t) \to \ell_d as t \to \infty and \dot{\ell}(t) \le \dot{\ell}_{max}(t) \;\forall t.

Clearly, the motion control problem with a bounded tension force can be transformed into a pure motion control problem with an upper bound on the velocity. What is the benefit of this transformation? Even when force information is not available, we can still bound the tension force by f_{max} in order to make a stitch without tearing the tissue.

• Case II.

Suppose we know the maximum tension force f_{max}, and let f_d be less than f_{max}. The control problem is now a regulation problem with a desired position and a desired force:

Given \ell_d and f_d, design a feedback control law \tau such that \ell(t) \to \ell_d as t \to \infty and f(t) = f_d.

4.4 Knot Tying
In this section, we present results on autonomous suturing using a pair of EndoBots. One EndoBot carries a grasping tool and the other a stitching tool. Both tools are built from disposable tools made by US Surgical Corp. (USSC): the mechanical handles were cut off and mated with pneumatic drives. The grasping tool can be commanded to open or close. The stitching tool contains two jaws, each of which can lock the needle, and the tool can pass the needle between the jaws. The challenges of the suturing operation include:

• Only the needle position is known. The suture position is not directly measured.

• The suture and the tissues are both flexible.

• The workspace is limited.

We will consider three algorithms for automatic ligation in this section. The first uses a standard manual stitching tool instrumented for robotic operation. The second modifies the grasping tool with a flexible hook to facilitate the knot tying process. The last modifies the grasping tool to have an articulated finger. At present, the first and second methods have been implemented and the last is under development.

4.4.1 Ligation Algorithm 1
The key observation for tying a simple knot is that if the suture can be placed over the jaw carrying the needle, a loop can be formed by passing the needle to the other jaw. A human surgeon performs this step by putting the jaws over the thread and then passing the needle. This is not possible for the EndoBots, since the thread is flexible and its position is not directly measured. Instead, we use the rigidity of the grasper to guarantee that the suture is placed over the jaw. Automatic tying of a simple knot can then be accomplished through the following steps (shown schematically in Figure 4.5):

1. Make a single stitch near the wound and pull out the suture so as to leave a small suture tail. This may be done manually by the surgeon using the manual mode or semi-autonomously. In the latter case, the surgeon manually grasps both sides of the wound with the grasper and uses the foot pedal to command the stitcher to make a stitch. The stitcher then retracts until a specified amount of suture has been pulled through the suturing point.

2. Grab the suture tail with the grasper tip. This may be done manually by the surgeon using the manual mode or semi-autonomously. In the latter case, the grasper predicts the location of the suture tail based on the location and direction of the suture performed in the previous step.

3. Move the stitcher so that the open jaw (the jaw without the needle) touches the front of the grasper stem. This location should be as far up the grasper

stem as the workspace would allow. Denote the point of contact between the stitcher jaw and grasper stem P .

4. Rotate the stitcher 180◦ about the axis OP, where O is the center of the spherical joint. The angle formed between the grasper and the stitcher has to be sufficiently large to guarantee that the tip of the needle does not hit the stem of the grasper. At the completion of this step, the thread should lie over the open jaw; otherwise the surgeon has to intervene manually.

5. Move the stitcher towards the grasper until the grasper stem is within the open jaw of the stitcher. The grasper stem could fit through the jaw opening, but it is too thick for the stitcher jaws to close. The stitcher therefore needs to move along the grasper stem until it reaches the grasper tool tip.

6. Rotate the grasper so that the narrow side faces the jaw opening. The stitcher can now close and pass the needle to the other jaw.

7. Retract the stitcher to tighten the knot.

The above procedure creates a simple knot. Two simple knots may be combined to form a square knot; however, the second simple knot must be the mirror image of the first, otherwise the result is a granny knot, which is not secure. The only modification is that in Step 3 the stitcher should touch the back of the stem, and in Step 4 the rotation should be −180◦. The square knot procedure may then be repeated to form a surgeon's knot. The above procedure works well most of the time in the laboratory, but has the following drawbacks:

• In Step 4 above, a large angle is required between the two instruments. For knots near the center of the workspace, this may not be feasible.

• Because of errors in positioning, the thin suture thread could fall between the open jaw and the grasper stem.

• During the retraction stage, the thread could get tangled.

Figure 4.5: Autonomous simple knot tying algorithm 1.

4.4.2 Ligation Algorithm 2
The first algorithm is a step toward automatic laparoscopic suturing, but it suffers from several drawbacks that make the procedure less than robust. To address these issues, we made a small modification to the conventional grasping instrument. The key observation is that if we can hold on to any part of the suture at a known position, the rotation of the stitcher becomes unnecessary. To achieve this, we added a reciprocating actuator connected to a flexible hook over the grasper hinge, so that the hook can be extended or retracted as needed (see Figure 4.6).

Figure 4.6: Flexible hook for catching the suture.

The simple knot algorithm is now modified as described below (shown schemat- ically in Figure 4.7 and pictures from the experiment in Figure 4.8):

1. Perform Steps 1–2 in Algorithm 1.

2. Extend the flexible hook. Move the stitcher over the hook from the front to back so that the suture hangs over the hook.

3. Perform Steps 5–6 in Algorithm 1.

4. Retract the flexible hook.

5. Retract the stitcher to tighten the knot.

Again, a mirror image of the first simple knot needs to be performed to ensure a secure square knot. This is done by changing the motion over the hook to back-to-front (instead of front-to-back). This algorithm does not have the angle limitation of Algorithm 1. The motion over the hook can be designed so that the thread is always snared by the hook, so the positioning requirement is not as tight as before. Finally, since the hook is close to the grasper tip, the grasper can retract by a small amount to avoid thread tangling.

Figure 4.7: Autonomous simple knot tying algorithm 2.

Figure 4.8: Implementation of autonomous simple knot tying algorithm 2.

4.4.3 Ligation Algorithm 3
The two ligation algorithms described above require the special stitching tool designed for endoscopic surgery (specifically, the EndoStitch by USSC). For many surgeries, for example anastomosis (connection between blood vessels), a semi-circular needle is used with two graspers; in such cases the algorithms presented so far are not applicable. In this section, we consider another type of grasper (such as in [11]), which contains an additional universal joint. Forming a loop can then be accomplished by simply wrapping the thread around the bent arm. The detailed sequence for tying a simple knot is described below (see Figure 4.9):

1. Hold the needle with the needle holder and make a single stitch near the wound. Pull out the suture so as to leave a small suture tail.

2. Move the needle holder around the bent grasper tip to create a loop.

3. Move the two instruments together so that the bent grasper grabs the tail of the suture while maintaining the loop wrapping around the bent stem.

4. Retract the grasper to tighten the simple knot.

To form a square knot, the second simple knot needs to be done with the loop formed in the opposite direction. Note that the only reason that a bent tip grasper is needed is to ensure that the loop does not slip off the grasper.

Figure 4.9: Autonomous simple knot tying algorithm 3.

This algorithm has several advantages over the previous ones. There is no limitation on the relative positions of the two endoscopic instruments, and it does not require a special stitching tool (or the associated needles). The algorithm can also be extended to more general knots; for example, looping around the bent arm would make a friction knot. We are also currently evaluating the possibility of using the grasper with a hook described in Algorithm 2 to implement this algorithm. It should be noted that each step in the above knot tying algorithms is performed autonomously under the supervision of the surgeon, who watches for suture entanglement and failure to grasp the suture tail and confirms the advance to the next step. In the future, the suturing task might be performed fully autonomously, with additional sensing capability such as a vision system providing self-confirmation and requiring less intervention.

4.5 Placing the Knot
4.5.1 Problem Formulation
Once the knot is developed, the next step is to place it on the tissue. Seating the knot involves both the magnitude and the direction of the applied tension. To investigate the motion of seating the knot, consider the simple knot in Figure 4.10. The knot forming procedure is illustrated in the left figure. Let the post strand be the suture around which the knot is tied, and the loop strand be the suture that is looped around the post strand, as shown in Figure 4.10. A simple knot is developed by wrapping the loop strand around the post strand. After the knot is developed, applying tension at both the post and loop ends results in a simple knot, as shown in

Figure 4.10. Let p_a be the position where the needle enters the tissue and p_b the position where it exits. Let p_n be the position of the knot and p_0 the position at which the knot will be seated. It has been reported that applying tension nearly horizontally is desirable in order not to get stuck while placing the knot. No report, however, has been published on the sliding condition that guarantees that the placement of the knot proceeds. The following section considers the motion of a simple knot and derives the sliding condition. The problem of knot placement can be formulated as follows.

Figure 4.10: Forming a simple knot.

• Given the geometric information of the knot (p_a, p_b, p_n, p_0) and the friction coefficient of the suture (\mu), determine the sliding condition under which the placement of the knot can proceed.

• Given a desired knot-placing trajectory p_n(t), expressed in terms of a placing rate s, determine the trajectories of the suture ends, p_{1e}(t) and p_{2e}(t), that move the knot along p_n(t).

4.5.2 Sliding Condition of Knot Placement
Placing the knot down on the tissue results from the relative sliding of the two strands. To investigate the sliding condition, we need a model of the knot, which basically consists of two sutures wrapped around each other. Modeling a knotted suture is difficult and very complicated. Because the progress of knot placement is dominated by the friction between the knotted sutures, local deformation of the suture is not considered in this research. A simplified model of the knotted sutures can be obtained from a two-point contact model, with the assumption that the half-knotted configuration is maintained and that the equivalent normal and friction forces pass through each contact point, as shown in Figure 4.11. The free-body diagram can then be drawn as in Figure 4.12.

Figure 4.11: Two point contact model of the knotted suture.

Figure 4.12: Free-body diagram for the knot sliding problem.

The problem can be stated as follows: given F_{1a}, F_{2a}, F_{1b}/\|F_{1b}\|, and F_{2b}/\|F_{2b}\|, determine the sliding condition. Define e_{1a} = F_{1a}/\|F_{1a}\|, e_{2a} = F_{2a}/\|F_{2a}\|, e_{1b} = F_{1b}/\|F_{1b}\|, and e_{2b} = F_{2b}/\|F_{2b}\|, and let e_n = (e_{1a} + e_{2a})/\|e_{1a} + e_{2a}\| be the unit vector along which the normal forces act and e_t the unit vector along which the friction forces act. When the knot is in equilibrium, \sum F = 0, so

F1ae1a + F2ae2a + F1be1b + F2be2b =0. (4.12)

It has two unknown variables, F1b and F2b, and two component equations as follows:

eu · (F1ae1a + F2ae2a + F1be1b + F2be2b) = 0 (4.13)

ev · (F1ae1a + F2ae2a + F1be1b + F2be2b)=0. (4.14)

It becomes

F1a cos β1 − F2a cos β2 − F1b cos γ1 + F2b cos γ2 = 0 (4.15)

F1a sin β1 + F2a sin β2 − F1b sin γ1 − F2b sin γ2 =0. (4.16) 65

From (4.16),

F_{1b} = \frac{F_{1a}\sin\beta_1 + F_{2a}\sin\beta_2 - F_{2b}\sin\gamma_2}{\sin\gamma_1}.   (4.17)

Substituting (4.17) into (4.15),

F_{2b} = \frac{F_{1a}\left(\cos\beta_1 - \frac{\sin\beta_1}{\tan\gamma_1}\right) - F_{2a}\left(\cos\beta_2 + \frac{\sin\beta_2}{\tan\gamma_1}\right)}{\cos\gamma_2 - \frac{\sin\gamma_2}{\tan\gamma_1}}.   (4.18)

The condition for impending motion can be obtained from the free-body diagram of one strand shown in Figure 4.13. In equilibrium, we have the following force balance equation:

Figure 4.13: Free-body diagram of one strand.

F1ae1a + F1be1b − f1anen + f1bnen − µ(f1an + f1bn)et =0. (4.19)

Then, the impending motion occurs when

(F_{1a} e_{1a} + F_{1b} e_{1b}) \cdot e_t = \mu (f_{1an} + f_{1bn}).   (4.20)

An explicit expression of the sliding condition can be obtained, but it is quite heavy and difficult to interpret physically. Since the desired sliding trajectory lies in the direction normal to the tissue, the symmetric case shown in Figure 4.14 is considered from here on:

β = β1 = β2 (4.21)

γ = γ1 = γ2 (4.22)

F1a = F2a. (4.23)

Then,

Figure 4.14: Free-body diagram of one strand in symmetric tension.

F1a cos β − F1a cos β − F1b cos γ + F2b cos γ = 0 (4.24)

F1a sin β + F1a sin β − F1b sin γ − F2b sin γ =0. (4.25)

From (4.24) and (4.25),

F_{1b} = F_{2b} = \frac{\sin\beta}{\sin\gamma}\, F_{1a}.   (4.26)

This means that the tension in the bridge strand is determined only by the applied force F_a and the geometric quantities \beta and \gamma.

Taking moments about the two contact points gives

f1an = F1a sin β (4.27)

f1bn = F1b sin γ. (4.28)

Coulomb's law states that sliding occurs when

F1a cos β − F1b cos γ>µ(f1an + f1bn). (4.29)

It becomes

\frac{1}{\tan\beta} - \frac{1}{\tan\gamma} > 2\mu.   (4.30)

This gives the following sliding condition:

0 < \beta < \tan^{-1}\!\left(\frac{\tan\gamma}{1 + 2\mu\tan\gamma}\right).   (4.31)

This means that sliding begins when \beta is sufficiently small to satisfy the inequality, which depends on the coefficient of friction between the strands and on \gamma. The sliding condition (4.31) can be represented geometrically as shown in Figure 4.15: it is a three-dimensional region obtained by subtracting a cone from a cylinder, and a pulling force lying in this region slides the knot down. Because the applied tension is constrained to lie in

Figure 4.15: Geometric interpretation of the sliding condition.

the plane containing p_a, p_b and p_n, the sliding condition can be represented by two right-angled triangles in a plane, as shown in Figure 4.16.

Figure 4.16: Sliding condition in a knot tying plane.
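For the symmetric case, conditions (4.30)-(4.31) can be checked numerically as in the sketch below; the friction coefficient and angles in the closing comment are illustrative values, not measured suture properties.

```python
import numpy as np

def beta_max(gamma, mu):
    """Largest pulling angle beta satisfying the sliding condition (4.31)."""
    return np.arctan(np.tan(gamma) / (1.0 + 2.0 * mu * np.tan(gamma)))

def knot_slides(beta, gamma, mu):
    """Sliding test (4.30): 1/tan(beta) - 1/tan(gamma) > 2*mu."""
    return 1.0 / np.tan(beta) - 1.0 / np.tan(gamma) > 2.0 * mu

# Example: with mu = 0.2 and gamma = 60 deg, pulling angles below roughly
# beta_max(np.radians(60.0), 0.2) ~ 0.80 rad (about 46 deg) keep the knot sliding.
```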

4.5.3 Trajectories of Suture Ends for Placing a Knot
The knot position can be determined from the motion of the suture ends through the length-invariance constraints. Let q be the current knot position and p_{1e} and p_{2e} the current positions of the strand ends (see Figure 4.17).

Figure 4.17: Evolving trajectory of the knot.

During motion, we assume that the length of each strand is constant:

l_1 + l_{1e} = l_1' + l_{1e}' = c_1   (4.32)
l_2 + l_{2e} = l_2' + l_{2e}' = c_2.   (4.33)

When the suture ends move to new positions, the knot position q can be obtained from the following length-invariance constraints (which are imposed so that the strands remain in tension):

\|p_{1e} - q\| + \|q - p_{10}\| = c_1   (4.34)
\|p_{2e} - q\| + \|q - p_{20}\| = c_2.   (4.35)

By choosing the suture end position trajectories, we can shape the knot position trajectory. We focus on the case that the desired knot trajectory is a straight line normal to the tissue. This case can be achieved with only symmetric pulling. The problem becomes finding p1e(t) to move the knot along pn(t) as shown in Figure 4.18. Let ξ denote the unit vector along the desired direction of knot motion, and s be the desired sliding rate. The angle γ(t) and the distance between pb and pn evolve

Figure 4.18: Trajectory of the loop end for placing the knot. according to

pn(t)=pn0 +(st)ξ (4.36)

\delta(t) = \|p_{n0} - p_b\| - \|p_n(t) - p_b\|   (4.37)

\gamma(t) = \tan^{-1}\!\left(\frac{\|p_{n0}\| - st}{\|p_b\|}\right).   (4.38)

Due to the sliding condition, (βmin ≤ β(t) ≤ βmax(t)=f(γ(t),µ)), the trajectory of the suture end to just start knot motion is

p1e(t)=p1e0 + δ(t)(cos(β(t))u +sin(β(t))v). (4.39)

Excessive tension at p_a and p_b could damage the tissue. The optimal trajectory, in the sense of minimizing the tension force in the bridge strand, is obtained by using the minimum \beta(t) = \beta_{min}:

p1e(t)=p1e0 + δ(t)u +(st)ξ. (4.40)
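Given commanded end positions, the knot position implied by the length-invariance constraints (4.34)-(4.35) can be recovered numerically, for example with a root finder as sketched below; the planar geometry and all numbers in the example are hypothetical.

```python
import numpy as np
from scipy.optimize import fsolve

def knot_position(p1e, p2e, p10, p20, c1, c2, q_guess):
    """Solve (4.34)-(4.35) for the knot position q in the knot-tying plane."""
    def residuals(q):
        return [
            np.linalg.norm(p1e - q) + np.linalg.norm(q - p10) - c1,
            np.linalg.norm(p2e - q) + np.linalg.norm(q - p20) - c2,
        ]
    return fsolve(residuals, q_guess)

# Hypothetical planar example (all coordinates in cm):
q = knot_position(np.array([2.0, 3.0]), np.array([-2.0, 3.0]),
                  np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
                  c1=5.0, c2=5.0, q_guess=np.array([0.0, 2.0]))
```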

Figure 4.19 compares the trajectories obtained with \beta(t) = \beta_{max}(t) and with \beta(t) = 0.

Figure 4.19: Trajectories of the loop end in the u–v plane [cm], for \beta(t) = \beta_{max}(t) and \beta = 0.

4.6 Controller Requirement and Architecture
In surgical robotic applications, it is not possible to obtain models that completely predict all dynamics of the system over the entire operating range, because of the non-deterministic behavior of the environment, such as flexible sutures and soft tissues. In order to guarantee safe operation in surgical tasks, the controller architecture reflects the requirement that the task must evolve under the surgeon's supervision. This supervisory controller provides the capability of dealing with uncertainty and with interactive commands from the surgeon according to the actual situation, without increasing complexity. In this section, the architecture of the EndoBot's controller is described.

4.6.1 Hybrid Dynamic System
4.6.1.1 Hybrid State
From the planning perspective, suturing can be seen as sequencing the states of a dynamic system whose states evolve continuously with time or discretely according to asynchronous events; it is inherently a hybrid dynamic system. In order to model the continuous-time dynamics as well as the discrete events, the hybrid system framework is introduced with the definition of a hybrid state and the structure of a hybrid automaton. The hybrid system can be viewed as a general framework for a discrete event system (DES) that contains a continuous state vector and a discrete state:

h = (x(t), s),   (4.41)

where h, x(t), and s represent the hybrid state, the continuous state, and the discrete state, respectively. In robotic applications, the continuous state evolves with time according to the motion plan, and the discrete state is updated on the occurrence of the corresponding events.

4.6.1.2 Hybrid Automaton Hybrid automaton provides a mathematical model consisting of the continuous dynamic system and the discrete dynamics of finite automaton. In this research, the hybrid automaton is described by a quintuple as

H =(S, X, E, Φ, Inv), (4.42) where S is a finite set of discrete states (locations), X is the continuous state space in which x is defined, E is a finite set of events (transitions), Φ is a set of activities (output functions), and Inv is a set of invariants of each s. In hybrid automata, the state x(t) evolves continuously according to a control law within each state until an event occurs. This continuous state must satisfy

x(t) \in Inv(s) for all t \in (0, \delta),   (4.43)

where \delta is the duration of the continuous evolution. In other words, Inv(s) can be viewed as the continuum in which the normal evolution of the continuous state defines a specific state s, as shown in Figure 4.20. An unpredictable event can occur when the continuous state x(t) crosses a boundary during the evolution, or when a human intervention occurs. When the evolution is over, a planned event occurs to transition to the next state.

Figure 4.20: State invariant space.

In robotic applications, the invariant space for the continuous state x(t) can be defined from the trajectory planner, as shown in Figure 4.21.

Figure 4.21: Hybrid state transition diagram.

In Figure 4.21, e_i represents the planned event, which occurs when the evolution of the continuous state in s_{i-1} reaches the previously defined state, and \tilde{e}_i represents an unpredictable event due to model uncertainty.
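The quintuple (4.42) maps naturally onto a small data structure; the sketch below is only an illustration of that structure (the state names, event names, and alarm convention are invented here), not the controller implemented on the EndoBot.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class HybridAutomaton:
    """Minimal sketch of H = (S, X, E, Phi, Inv) from (4.42).

    transitions : (state, event) -> next state
    invariant   : state -> predicate on the continuous state x
    activity    : state -> control law applied while the state is active
    """
    transitions: Dict[Tuple[str, str], str]
    invariant: Dict[str, Callable[[object], bool]]
    activity: Dict[str, Callable[[object], object]]
    state: str = "lock"

    def step(self, x):
        """Run the active control law; report an event if x leaves Inv(s)."""
        if not self.invariant[self.state](x):
            return None, "boundary_crossed"      # unpredictable event, like e~_i
        return self.activity[self.state](x), None

    def dispatch(self, event: str):
        """Planned or surgeon-issued event: transition if one is defined."""
        self.state = self.transitions.get((self.state, event), self.state)
```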

4.6.2 Human Sharing Supervisory Controller A supervisory controller can be thought of as a discrete event system, which issues the transition of states and receives events, which indicate either the success- ful evolution of current state or the occurrence of an error condition. Due to the fact that all possible events cannot be predicted during the planning phase, the con- 73 troller architecture must reflect that the proceeding of task evolve under surgeon’s supervision. Figure 4.22 shows the controller architecture implemented in this re- search. In this research, the human sharing supervisory control is used to supervise

Figure 4.22: Human sharing supervisory controller. the surgical operations. It can give the systems decision making and fault recovery capabilities through the surgeon. The Autonomous Discrete Event System (ADES) takes the events and causes the state transition in autonomous mode. The Human Discrete Event System (HDES) takes the events which either the surgeon or the event generator issues in order to correspond to the error occurrence.

4.6.3 Planning of Suturing Task Each Endobot can have three states in high level as shown in Figure 4.23: Lock, manual, and autonomous states. The transition to the lock state can be started from issuing the lock event by either the surgeon or the ADES. In lock state, the robot manipulator is under a control law, which regulates the current position, and only two events for transition to the manual or autonomous state can 74

Figure 4.23: High level state transition diagram. be received. The Inv can be defined as

Inv(SL)={x(t) ∈ Inv(SL):x1(t)=xcp,x2(t)=0}, (4.44)

where x(t) = (x_1(t), x_2(t)), x_1(t) is the position state vector, x_2(t) is the velocity state vector, and x_{cp} denotes the current position vector. In the manual state, the surgeon takes over all control commands and the HDES (Human Discrete Event System) is active, responding to the events issued by the surgeon. The manual state S_M has substates S_{Mi}, according to the values of the discrete states, as shown in Figure 4.24. In contrast to the lock state in (4.44), the Inv of a manual substate can be viewed as the whole workspace:

Inv(SMi)={x(t) ∈ Inv(SM i):xmin ≤ x(t) ≤ xmax}, (4.45)

where x_{min} and x_{max} define the workspace. The autonomous sequencing of the suturing algorithm evolves in the autonomous state under the surgeon's supervision, and the surgeon can issue an event that causes a transition to the lock state in order to regain control authority for dealing with uncertainties.

In Figure 4.25, e_i represents the planned event issued by the ADES when the evolution of the current state ends, and \tilde{e}_i represents an unpredictable event during the evolution, arising from situations that cannot be predicted. The event \tilde{e}_i causes a transition to the NOT BE state \tilde{s}_i. In this case, the system can transition back to the previous state and try again by generating the event \hat{e}_{i-1}. The surgeon can also take control in this case, recover from the error in the manual state, and continue

Figure 4.24: Manual state diagram.

Figure 4.25: Autonomous state evolution.

to evolve the algorithm by transitioning back into the autonomous state. Figures 4.26–4.29 show the state transition diagrams for the suturing task and Figure 4.30 shows the state evolution for creating a knot. The two local coordinate frames of the EndoBots, E1 and E2, are calibrated with a 3D digitizer with respect to the predefined base frame, B, so that the transformation between these frames can be obtained, as shown in Figure 4.31.

4.6.4 Development Environment
In this research, all controllers, both the high-level discrete event system controller and the low-level motion controller, are implemented in Simulink with the ARCSlink library (ARCS, Inc.), which contains the Simulink device drivers for the DSP Lighting motion controller (ARCS, Inc.).

Figure 4.26: State transition diagram for stitching task.

Figure 4.27: State transition diagram for grasping suture tail.

Figure 4.28: State transition diagram for creating the knot.

Figure 4.29: State transition diagram for securing the knot.

Figure 4.30: State evolution for creating the knot.

Figure 4.31: Calibration of two coordinates. motion controller (ARCS, Inc.). Once created the Simulink block, C code can be automatically generated using Real-Time Workshop (MathWorks, Inc.) and can be compiled and linked with Texas Instruments C compiler so that it can be downloaded to the DSP motion controller through AIDE (ARCS, Inc.), which can provide the run time environment through ISA bus. Once downloaded the execution file, AIDE can communicate with the target system. Figure 4.32 shows the overview of these development environments. 78

Figure 4.32: Overview of the experimental environment.

CHAPTER 5
MOTION CONTROL

5.1 Gravity Compensation in Manual Mode
In the manual mode, the motors are back-drivable, allowing the surgeon to manually move the tip of the tool in the roll, pitch, yaw, and translation directions. As mentioned earlier, most procedures in MIS are time consuming, and in manual mode the surgeon may tire from operating two EndoBots. Since manual operation is performed at low speed, gravity tends to be the dominant external static force, so gravity compensation is applied to minimize the surgeon's command effort. In the manual mode, the joint torque is applied to compensate the gravitational force and moment:

τ = τg,   (5.1)

where the gravitational vector τg is given by

τg = [ −[m1 l1^c − m2(l2^c + q4)] g s1 c2,
       −[m1 l1^c − m2(l2^c + q4)] g c1 s2,
       0,
       m2 g c1 c2 ]ᵀ.   (5.2)
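As a minimal numerical sketch of (5.1)–(5.2), written in Python/NumPy rather than the Simulink/C environment used in this work, the gravity torque can be evaluated as below; the mass and center-of-mass values are illustrative placeholders, not the identified EndoBot parameters.

```python
import numpy as np

def gravity_torque(q, m1=1.0, m2=0.5, l1c=0.10, l2c=0.05, g=9.81):
    """Gravity torque vector of (5.2) for the 4-DOF model.

    q = (q1, q2, q3, q4); the mass/length values are assumptions for
    illustration only, not the identified EndoBot parameters.
    """
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s2, c2 = np.sin(q[1]), np.cos(q[1])
    a = m1 * l1c - m2 * (l2c + q[3])      # common bracketed term in (5.2)
    return np.array([-a * g * s1 * c2,
                     -a * g * c1 * s2,
                     0.0,
                     m2 * g * c1 * c2])

# In manual mode the commanded joint torque is simply tau = gravity_torque(q), cf. (5.1).
```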

5.2 Motion Control in Autonomous Mode

As the surgeon gains greater confidence in using the EndoBots, certain procedures can be performed autonomously under the surgeon's supervision. In this section, two motion control schemes implemented in the EndoBot system are discussed. The first is an output feedback controller, motivated by the fact that only the position states can be measured in the EndoBot system. The second is a state feedback controller with a velocity observer, based on global linearization of the nonlinear robot dynamics.


5.2.1 Energy Based Output Feedback Control

The dynamic equation of the robot with Coulomb and viscous friction is

M(q)q̈ + C(q, q̇)q̇ + G(q) + N(q̇) = τ,   (5.3)

where

N(q̇) = Fc sgn(q̇) + Fv q̇.   (5.4)

Fc represents the coefficient of Coulomb friction and Fv denotes the coefficient of viscous friction. To derive an energy based control law, consider the following positive definite quadratic Lyapunov function candidate:

V = ½ q̇ᵀ M(q) q̇ + ½ eᵀ Kp e,   (5.5)

where e = q − qd represents the error between the desired and the actual position. The Lyapunov function in (5.5) has an energy based interpretation: the first term is the total kinetic energy, and the second term can be interpreted as an artificial potential energy, globally convex with its minimum at the desired position. The control objective here is to design a feedback control law, τ, such that e → 0 and ė → 0 as t → ∞. Differentiating (5.5) with respect to time:

V̇ = q̇ᵀ M(q) q̈ + ½ q̇ᵀ Ṁ(q) q̇ + ėᵀ Kp e.   (5.6)

Substituting M(q)q̈ from (5.3) yields

V̇ = q̇ᵀ(τ − C(q, q̇)q̇ − G(q) − F(q̇) + ½ Ṁ(q)q̇) + ėᵀ Kp e   (5.7)
   = ½ q̇ᵀ(Ṁ(q) − 2C(q, q̇))q̇ + q̇ᵀ(τ − G(q) − F(q̇)) + ėᵀ Kp e   (5.8)
   = q̇ᵀ(τ − G(q) − F(q̇)) + q̇ᵀ Kp e.   (5.9)

By choosing the feedback control law with compensation of the gravitational and frictional terms,

τ = −Kp e − Kd ė + G(q) + F(q̇),   (5.10)

V̇ becomes negative semi-definite:

V̇ = −ėᵀ Kd ė.   (5.11)

Lyapunov's direct method alone establishes only stability of the equilibrium point. By invoking LaSalle's invariance principle, it can be shown that the largest invariant set S is given by

S = {e, ė : ė = 0, e = 0}.   (5.12)

Consequently the error e = q − qd tends to zero asymptotically. The appeal of this approach is that global asymptotic stability of the manipulator under PD control with gravity and friction compensation follows easily.
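As a minimal sketch of the control law (5.10), assuming generic user-supplied gravity and friction models (Python/NumPy, illustrative only, not the thesis implementation):

```python
import numpy as np

def pd_control(q, dq, q_des, Kp, Kd, G, F):
    """Joint-space PD law with gravity and friction compensation, eq. (5.10).

    G(q) and F(dq) are user-supplied models of the gravity vector and the
    friction torque; Kp and Kd are positive definite gain matrices.
    """
    e = q - q_des
    return -Kp @ e - Kd @ dq + G(q) + F(dq)

def friction(dq, Fc=np.array([0.2, 0.2, 0.05, 0.3]), Fv=np.zeros(4)):
    """Coulomb-plus-viscous friction model of (5.4); coefficients are illustrative."""
    return Fc * np.sign(dq) + Fv * dq
```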

• Control Gain Selection

The error dynamics are given by

M(q)ë + C(q, q̇)ė + Kd ė + Kp e = 0.   (5.13)

They can be simplified by neglecting the Coriolis and centrifugal terms and approximating the inertia matrix M(q) by a constant diagonal matrix M, which is justified by the high gear ratio:

M ë + Kd ė + Kp e = 0.   (5.14)

Since the simplified error dynamics are a linear differential equation, the PD gains can be tuned to achieve a desired performance specification. The gain matrices Kp and Kd are chosen such that the error dynamics become critically damped. Let ωe be the desired natural frequency of the error dynamics; then critical damping gives

Kp = ωe² M   (5.15)

Kd = 2 √(Kp M).   (5.16)
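As a quick numerical check of (5.15)–(5.16) at the 7 Hz design frequency used in the next section, the sketch below computes the gains; the diagonal inertia values are back-calculated assumptions for illustration, not the identified EndoBot inertias.

```python
import numpy as np

we = 2 * np.pi * 7.0                        # desired natural frequency [rad/s]
M_bar = np.diag([0.05, 0.05, 0.001, 0.1])   # assumed constant diagonal inertia
Kp = we**2 * M_bar                          # eq. (5.15)
Kd = 2 * np.sqrt(Kp @ M_bar)                # eq. (5.16); elementwise sqrt is valid here
                                            # because the product is diagonal

print(np.diag(Kp))   # ~[97, 97, 1.9, 193], consistent with (5.17)
print(np.diag(Kd))   # ~[4.4, 4.4, 0.088, 8.8], consistent with (5.18)
```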

5.2.2 Experimental Evaluation of Joint Space Control

For the experimental evaluation, we choose the closed loop natural frequency to be 7 Hz and approximate the inertia at the center of the workspace. For critically damped error dynamics, this gives

Kp = diag(97, 97, 1.93, 193)   (5.17)

Kd = diag(4.4, 4.4, 0.088, 8.8).   (5.18)

• Friction Compensation

To compensate for the friction, the estimated Coulomb friction is added to the input torque as follows:

τf = F̂c sgn(q̇),   (5.19)

where F̂c is an estimate of Fc. Since the joint velocities are not high, the viscous friction is small enough to be neglected in the compensation. In Figure 5.1, the solid line shows the friction approximation using only the Coulomb term, while the dotted line shows the actual friction. Figure 5.2 shows the experimental results of friction compensation for the first two joints. Figures 5.2 (a) and (b) show that, without friction compensation, friction leads to almost no motion at the moments when the velocity changes direction; only after some time does the controller output grow large enough to overcome the friction and catch up with the desired motion. Figure 5.2 (c) shows the improved performance for the first joint with the same reference trajectory when the proposed friction compensation is applied. Figure 5.2 (d) shows the same

Figure 5.1: Friction approximation for compensation.


Figure 5.2: Friction compensation. (a) First joint position without friction compensation (b) Second joint position without friction compensation (c) First joint position with friction compensation (d) Second joint position with friction compensation

effect for the second joint. Figure 5.3 shows the quadrant-crossing error due to friction and how the circle tracking performance is improved in Cartesian space.


Figure 5.3: Cartesian space experimental result.

Table 5.1 quantitatively summarizes the performance of the friction compensation.

Table 5.1: Summary of friction compensation performance.

Compensation | Max. Joint(1) Error (deg) | Max. Joint(2) Error (deg) | Mean Joint(1) Error (deg) | Mean Joint(2) Error (deg) | Cartesian Error (mm)
Off | 1.060 | 0.994 | 0.425 | 0.314 | 0.970
On  | 0.354 | 0.463 | 0.108 | 0.082 | 0.466

• Experimental Results of Circle Tracking in Task Space

Figure 5.4 shows experimental results for a reference trajectory consisting of a circle in Cartesian space. Three series of circle tracking were performed with three different angular velocities, ω1 = 0.1 Hz (0.62 rad/s), ω2 = 0.25 Hz (1.57 rad/s), and ω3 = 0.5 Hz (3.14 rad/s), and radii r1 = 2.5 mm, r2 = 5 mm, and r3 = 10 mm. Gravity and friction compensation were included in all experiments.


Figure 5.4: Experimental results of circle tracking.

Table 5.2 quantitatively summarizes the tracking performance of the joint space controller. As shown in the table, higher angular velocity results in larger tracking error.

Table 5.2: Circle tracking error.

Angular Velocity | r1 (mm) | r2 (mm) | r3 (mm)
ω1 | 0.409 | 0.463 | 0.487
ω2 | 0.419 | 0.551 | 0.588
ω3 | 0.445 | 0.544 | 0.757

5.2.3 Nonlinear Decoupled State Feedback Controller

The motivation of this approach is to globally linearize the nonlinear robot dynamics by a suitable choice of input. The approach is attractive because the rich body of linear controller design methods, such as optimal or robust control, then becomes available.

5.2.3.1 Global Linearization of the Nonlinear Robotic System

Let the rigid robot dynamics be given in general compact matrix form by

M(q)q̈ + C(q, q̇)q̇ + G(q) + N(q̇) = τ.   (5.20)

The idea is to cancel the nonlinearities by choosing the control input as

τ = M̂(q)u + Ĉ(q, q̇)q̇ + Ĝ(q) + N̂(q̇),   (5.21)

where M̂(q), Ĉ(q, q̇), Ĝ(q), and N̂(q̇) denote the approximated matrices of the real robot dynamics. Substituting (5.21) into (5.20) gives

q̈ = u + (M⁻¹(q)M̂(q) − I)u + M⁻¹(q)ΔH(q, q̇),   (5.22)

where the second term is η1(q)u, the third term is η2(q, q̇), and ΔH(q, q̇) = (Ĉ(q, q̇) − C(q, q̇))q̇ + (Ĝ(q) − G(q)) + (N̂(q̇) − N(q̇)). Then it becomes

q̈ = u + η1(q)u + η2(q, q̇),   (5.23)

where η(u, q, q̇) = η1(q)u + η2(q, q̇) represents the disturbance of the linearized robot dynamics due to the lumped model uncertainties. The state equation is obtained by defining a new set of state variables

x = [ q; q̇ ].   (5.24)

Then

ẋ = Ax + B(u + η1(x)u) + Bη2(x),   (5.25)

where

A = [ 0  I; 0  0 ],   B = [ 0; I ].   (5.26)

Neglecting the uncertainty term η(u, q, q̇), we have

ẋ = Ax + Bu.   (5.27)

Clearly, this is a globally linearized system, so the manipulator can be regarded as a set of fully decoupled second-order time-invariant linear systems. In practice the exact matrices of the robot dynamics are not available, and therefore a linear controller based on this approximate linear system, η(u, q, q̇) ≈ 0, must be robust against the inevitable modeling and parameter uncertainties.

5.2.3.2 Optimal State Feedback Control

The objective of optimal control is to design the control law that minimizes a cost function while satisfying the equality constraints imposed by the differential equation of the system dynamics:

min_{u(t)}  J(x(t), u(t)) = φ(x(tf)) + ∫_{t0}^{tf} L(x, u, t) dt,   subject to ẋ(t) = f(x, u, t).   (5.28)

In general this cannot be solved analytically in the form of an explicit control law. In the case of a linear time-invariant system with a quadratic cost function, however, an explicit control law, which is a full state feedback controller, can be obtained. Given the system dynamics:

ẋ(t) = Ax(t) + Bu(t),   x(t0) = x0,   (5.29)

design a control law that minimizes the cost function

J = ½ x(tf)ᵀ S x(tf) + ½ ∫_{t0}^{tf} (xᵀ(t) Q x(t) + uᵀ(t) R u(t)) dt,   (5.30)

where S and Q are real symmetric positive semidefinite matrices, and R is a real symmetric positive definite matrix. This is a constrained optimization problem and it can be converted into an unconstrained optimization problem using Lagrange multipliers. The augmented cost function is then

Ja = ½ x(tf)ᵀ S x(tf) + ∫_{t0}^{tf} ( ½ (xᵀ(t) Q x(t) + uᵀ(t) R u(t)) + λᵀ(t)(Ax(t) + Bu(t) − ẋ(t)) ) dt
   = ½ x(tf)ᵀ S x(tf) + ∫_{t0}^{tf} ( H(x, u, t) − λᵀ(t) ẋ(t) ) dt,   (5.31)

where H(x, u, t) denotes the Hamiltonian function, defined as

H(x, u, t) = L + λᵀ f = ½ (xᵀ(t) Q x(t) + uᵀ(t) R u(t)) + λᵀ(t)(Ax(t) + Bu(t)).   (5.32)

Using the calculus of variations, the Euler-Lagrange equations for the states, the costates, and the optimal control input are

state equation:    ẋ(t) = ∂H/∂λ = Ax(t) + Bu(t)   (5.33)
costate equation:  λ̇(t) = −∂H/∂x = −Qx(t) − Aᵀλ(t)   (5.34)
control equation:  ∂H/∂u = 0  ⟺  u(t) = −R⁻¹Bᵀλ(t)   (5.35)

with the following boundary conditions at t = t0 and t = tf:

t = t0:  x(t0) = x0   (5.36)
t = tf:  λ(tf) = ∂φ/∂x(tf) = S x(tf).   (5.37)

The optimal control problem now becomes a TPBVP (Two Point Boundary Value Problem), which has n boundary conditions at the initial time t0 and n conditions at the final time tf. The state and costate equations can be written with the Hamiltonian matrix H:

[ ẋ(t); λ̇(t) ] = [ A  −BR⁻¹Bᵀ; −Q  −Aᵀ ] [ x(t); λ(t) ] ≡ H [ x(t); λ(t) ].   (5.38)

This is a homogeneous linear differential equation whose solution can be written as

[ x(t); λ(t) ] = e^{Ht} [ x(t0); λ(t0) ],   (5.39)

where λ(t0) is not given. In general the problem is solved numerically by iterating on the boundary condition, the so-called shooting method: the state and costate equations are integrated to obtain λ̂(tf) from a guessed initial value of λ(t0), and λ(t0) is perturbed until the boundary condition at the final time is satisfied to the desired accuracy, λ̂(tf) = λ(tf). In the case of an LTI system with a quadratic cost function, the problem can be solved analytically using the sweep method proposed by Bryson and Ho [61]. The idea of the sweep method is to assume a linear relationship between the costate and the state, which can be given by

λ(t)=P (t)x(t). (5.40)

This transforms the TPBVP in x(t) and λ(t) into a single point boundary value problem in P(t):

−Ṗ(t) = AᵀP(t) + P(t)A + Q − P(t)BR⁻¹BᵀP(t)   (5.41)

with the boundary condition P(tf) = S. This is called the time varying Riccati equation. The solution of the Riccati equation gives rise to the optimal full state feedback control law:

u(t) = −K(t)x(t) = −R⁻¹BᵀP(t)x(t),   (5.42)

which minimizes the cost function while satisfying the equality constraints. This optimal controller is called a linear quadratic regulator (LQR). Note that even for an LTI system, the gain K(t) = R⁻¹BᵀP(t) is time-varying in the finite horizon case. From an implementation perspective, storing all the time-varying gains is inconvenient and expensive in terms of memory. The steady-state optimal gain can be used in place of the time-varying gain if the transient portion of the gain is ignored. It is obtained from the following infinite horizon cost function:

J = ½ ∫_0^∞ (xᵀ(t) Q x(t) + uᵀ(t) R u(t)) dt,   Q ≥ 0,  R > 0.   (5.43)

Then, the LQ optimal gain is found to be

K = R⁻¹BᵀP,   (5.44)

where P is the solution of the algebraic Riccati equation (ARE):

AᵀP + PA + Q − PBR⁻¹BᵀP = 0.   (5.45)
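As a short sketch of how the steady-state gain (5.44)–(5.45) can be computed numerically, the example below uses SciPy's continuous-time ARE solver; the double-integrator model of one linearized joint and the weights are illustrative assumptions, not the experimental design values.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Steady-state LQ gain of (5.44)-(5.45): K = R^-1 B^T P, P from the ARE."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# One linearized joint, eq. (5.27): double integrator with illustrative weights.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[0.1]]))
```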

The existence of a solution to the ARE and the stability of the closed-loop system are ensured by the controllability and observability of the plant. Under the assumption R = ρI (ρ > 0) on the control weight matrix, the ARE can be manipulated to yield the return difference inequality, which characterizes the robustness:

σmin(I + L(jω)) ≥ 1   ∀ω,   (5.46)

where σmin denotes the minimum singular value and L(s) = K(sI − A)⁻¹B is the loop transfer function matrix. For SISO systems it can be written as

|1 + L(jω)| = |L(jω) − (−1)| ≥ 1.   (5.47)

This means that the Nyquist locus must remain outside the unit circle centered at (−1, 0) in the complex plane. In other words, the LQR has guaranteed stability margins: an infinite gain margin, a −6 dB gain reduction tolerance, and at least 60 deg of phase margin.

5.2.3.3 Velocity Estimation

Friction and velocity estimation are two main factors that affect performance in motion control applications. Friction can be identified and compensated as discussed earlier. Velocity can be measured directly with velocity sensors, but at high cost; in general only position signals are available, as is the case in this research. In order to implement the optimal state feedback controller, the velocity states must therefore be estimated.

Numerical Differentiation

The easiest way to obtain velocity information is to numerically differentiate the position signals. Numerical differentiation is inherently very sensitive to noise. From a filter perspective, it has the following characteristics:

Fnd(s) = (1 − e^{−sTs}) / Ts,   (5.48)

where Ts represents the sampling time. This is an irrational transfer function and can be approximated by a series expansion. With a second order expansion it becomes

Fnd(s) ≈ (1/Ts)[1 − (1 − sTs + ½ s²Ts²)] = s (1 − ½ sTs).   (5.49)

Clearly, this is an improper transfer function; the Bode plot of the second order approximation with Ts = 0.001 is shown in Figure 5.5. As can be seen in Figure 5.5, it amplifies the signal in the high frequency band, where the noise mostly lies. To get around this problem, a low-pass filter can be applied to the numerically differentiated signal.

Washout Filter

An alternative is to combine the differentiation and the smoothing into one filter, called a washout filter. The washout filter can be represented as

Fwo(s) = s / (s/p + 1),   (5.50)

where p determines the location of the pole and in general can be chosen 2–3


Figure 5.5: Bode plot for the numerical differentiating filter.

times faster than the fastest system pole. It is a proper transfer function, and the corresponding Bode plot is shown in Figure 5.6. It is essentially a high-pass filter with a slope of 20 dB per decade up to the corner frequency and a constant gain over the pass band.
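A minimal discrete-time sketch of velocity estimation through the washout filter (5.50); the backward-difference discretization and the numerical values are assumptions for illustration only.

```python
import numpy as np

def washout_velocity(position, Ts, p):
    """Velocity estimate through the washout filter (5.50), F(s) = s / (s/p + 1).

    Discretized with a simple backward difference (a bilinear transform would
    also work); p is the filter pole [rad/s], Ts the sampling time [s].
    """
    a = p * Ts / (1.0 + p * Ts)
    v = np.zeros_like(position, dtype=float)
    for k in range(1, len(position)):
        # first-order low-pass of the backward-difference velocity
        v[k] = (1.0 - a) * v[k - 1] + a * (position[k] - position[k - 1]) / Ts
    return v
```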

Observer Design

Once the system dynamics are given, the most effective way to estimate the states is a dynamic state observer. When noise is present, the system can be represented as

ẋ(t) = Ax(t) + Bu(t) + w(t)   (5.51)
y(t) = Cx(t) + v(t),   (5.52)

where w(t) represents the process noise and v(t) represents the sensor (measurement) noise. The observer is constructed as follows:

x̂̇(t) = Ax̂(t) + Bu(t) + L(y(t) − Cx̂(t)),   (5.53)


Figure 5.6: Bode plot for the washout filter.

where x̂(t) is the estimate of x(t). The observer poles can be chosen fast enough that the controller poles dominate the system dynamics; however, poles that are too fast amplify the measurement noise. As a rule of thumb, poles 2–3 times faster than the closed-loop system poles are used. Alternatively, the observer can be designed optimally so that it balances dependence on the model against dependence on the measurements.

The Kalman Filter Design

The Kalman filter gain L can be obtained such that it minimizes the mean square estimation error:

J = E[(x(t) − x̂(t))(x(t) − x̂(t))ᵀ].   (5.54)

Let Qo and Ro be the covariance matrices of the process and measurement noise:

Qo = E[w(t)wᵀ(t)],
Ro = E[v(t)vᵀ(t)].

Then, the steady-state optimal gain is computed from

L = Σ Cᵀ Ro⁻¹,   (5.55)

where Σ represents the intensity of the estimation error and is the solution of the following ARE:

A Σ + Σ Aᵀ + Qo − Σ Cᵀ Ro⁻¹ C Σ = 0.   (5.56)

This result is the dual of that of the LQR design.
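By the duality noted above, the same ARE solver used for the LQR yields the steady-state Kalman gain (5.55)–(5.56); the single-joint model and the noise covariances below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def kalman_gain(A, C, Qo, Ro):
    """Steady-state Kalman gain of (5.55)-(5.56), via the dual ARE:
    substitute (A, B, Q, R) -> (A^T, C^T, Qo, Ro) in the LQR Riccati equation."""
    Sigma = solve_continuous_are(A.T, C.T, Qo, Ro)
    return Sigma @ C.T @ np.linalg.inv(Ro)

# One linearized joint with a position-only measurement; covariances are illustrative.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
L = kalman_gain(A, C, Qo=np.diag([1e-4, 1e-2]), Ro=np.array([[1e-6]]))
```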

5.2.3.4 Linear Quadratic Gaussian (LQG) Control

The difference between LQR and LQG can be represented as

LQR:  u(t) = K_LQR x(t)
LQG:  u(t) = K_LQG x̂(t) = K_LQG (f(y(t))).

The LQR has guaranteed stability margins, but a major limitation is that it requires measurement of all the states. In contrast to the LQR, the LQG controller provides optimal control with only partially available states. From the separation principle, the LQG optimal controller can be assembled as:

The robot dynamics:          τ = M(q)q̈ + C(q, q̇)q̇ + G(q) + N(q̇)
The feedback linearization:  τ = M̂(q)u + Ĉ(q, q̇)q̇ + Ĝ(q) + N̂(q̇)
The linearized plant:        ẋ(t) = Ax(t) + Bu(t) + w(t),   y(t) = Cx(t) + v(t)
The optimal controller:      u(t) = K x̂(t),   K = −R⁻¹BᵀP,   0 = AᵀP + PA + Q − PBR⁻¹BᵀP
The optimal observer:        x̂̇(t) = Ax̂(t) + Bu(t) + L(y(t) − Cx̂(t)),   L = ΣCᵀRo⁻¹,   0 = AΣ + ΣAᵀ + Qo − ΣCᵀRo⁻¹CΣ

The LQG controller is basically a regulator, which has a zero command input. In order to track a reference input, the state error feedback controller is considered via coordinate translation in this work. Define the error vector as

x̃(t) = x(t) − xd,   (5.57)

where xd is the desired state and ẋd = ẍd = 0. Then,

x̃̇(t) = A x̃(t) + B u(t)   (5.58)
ỹ(t) = C x̃(t).   (5.59)

A state feedback controller for this second-order system can be viewed as a generalization of proportional-derivative control. To eliminate the steady-state error, we can augment the system by taking the integral of the error as an additional state:

xi(t) = ∫ (y(t) − r) dt = ∫ (Cx(t) − r) dt = ∫ C x̃(t) dt.   (5.60)

The augmented state equation is then

[ x̃̇(t); ẋi(t) ] = [ A  0; C  0 ] [ x̃(t); xi(t) ] + [ B; 0 ] u(t).

It can be written with the augmented error state denoted by a bar:

x̄̇(t) = Ā x̄(t) + B̄ u(t)   (5.61)
ȳ(t) = C̄ x̄(t).   (5.62)

Figure 5.7 shows the block diagram of the LQG controller with the augmented error state. Figures 5.8–5.11 show the experimental results of the LQG controller on the first joint. Figure 5.8 compares the tracking performance of the LQG controller with and without friction compensation. As seen in the figure, without friction compensation the friction leads to no motion at the moments when the velocity changes direction, and friction compensation improves the performance. Figure 5.9 (b)

Figure 5.7: Observer based linear controller with feedback linearization.

compares the measured position error state and the estimated position error state, (c) shows the estimated velocity error state, and the corresponding control input signal is shown in (d). Figure 5.10 compares the measured position error states for different weighting factors on the position error state, and Figure 5.11 shows the corresponding control inputs. As seen in Figures 5.10–5.11, increasing the weighting factor on q1 decreases the error state and increases the control input, as expected.


Figure 5.8: Experimental results of friction compensation in the LQG controller.


Figure 5.9: Tracking performance of the LQG controller.


Figure 5.10: Comparison of error states with different weightings.


Figure 5.11: Comparison of control inputs with different weightings.

CHAPTER 6
SHARED CONTROL

Shared control, as the name implies, allows the human operator and the computer to control the manipulator in parallel: the human operator controls some axes while the computer concurrently controls the others. This would be useful in the following scenarios:

• The surgeon wants to move the tool along the tool axis, for example for drilling. The roll-pitch-yaw rotation of the docking station would be computer controlled while the surgeon manually controls the tool translation and operates the tool (for example, for drilling or cutting).

• The surgeon wants to control the tip of the tool along a straight line. For example, the surgeon may want to perform precision cutting and stitching. The computer would actively control the tool to stay in a “valley” along which the surgeon is free to move the docking station and operate the tool manually.

The surgeon may specify the direction of manual control through a 3D input device. Our current implementation allows the surgeon to manually operate one of the EndoBots as the pointing device and use a foot pedal to register the selected points. In the shared control case, the computer control algorithm needs to be modified so that only the deviation from the specified path is corrected, not the motion along the path.

6.1 Constraints Description

In shared control mode, we add artificial constraints to relieve the operator of subtasks that would be tiring to concentrate on, such as tool alignment. These artificial constraints are assumed to be holonomic. In order to control robot systems with holonomic constraints, we have to represent the constraints mathematically. Let the forward kinematics (the mapping from the joint coordinates q = (q1, q2, q3, q4) to the end effector coordinates x = (rT, φ)) be denoted by x = k(q). Specifically, suppose that the surgeon specifies that the end effector position rT and roll angle φ are to be constrained as C(rT, φ) = 0. The constraints that the controller needs to enforce are then C(k(q)) = 0. Written more compactly, the holonomic constraints can be represented on the task space variables x(t) ∈ ℝⁿ as

C(x) = 0.   (6.1)

Let the gradient of C(x) be Jc(x):

Jc(x) = ∂C(x)/∂x.   (6.2)

6.2 Control Objective

In shared control, the control task is to restrict the motion of the manipulator to the constraint space, represented by the mapping C(x) = 0. The control objective is to choose a controller that solves the following regulation problem with constraints. Given the dynamic equation and the constraints, determine a feedback control law, τ, so that the closed-loop system satisfies

C(x(t)) → 0  as  t → ∞.   (6.3)

This means that the motion will lie in the following constraint manifold:

S = {x, ẋ : C(x) = 0, Jc(x)ẋ = 0}.   (6.4)

A suitable stabilizing controller would be

V = −Jcᵀ K C(x),   (6.5)

where K is a PID or lead/lag type of controller.

6.3 Task Space Control Design

The dynamic model in task space becomes

Mx(q)ẍ + Cx(q, q̇)ẋ + Gx(q) = J⁻ᵀτ.   (6.6)

Let xcd be the desired position of the constrained variables in task space and xc be their current position. We apply the following PD feedback control law with gravity compensation so that the constrained space error xc − xcd tends asymptotically to zero:

τ = −Jᵀ Jcᵀ (Kp Δxc + Kv Jc ẋ) + G(q),   (6.7)

where Δxc = xc − xcd.

6.3.1 Stability

Choose the following positive definite quadratic form as a Lyapunov function candidate:

V = ½ q̇ᵀ M(q) q̇ + ½ Δxcᵀ Kp Δxc.   (6.8)

Differentiating (6.8) with respect to time,

V̇ = q̇ᵀ M(q) q̈ + ½ q̇ᵀ Ṁ(q) q̇ + Δẋcᵀ Kp Δxc.   (6.9)

Substituting M(q)q̈ gives

V̇ = q̇ᵀ(τ − C(q, q̇)q̇ − G(q) + ½ Ṁ(q)q̇) + Δẋcᵀ Kp Δxc   (6.10)
   = ½ q̇ᵀ(Ṁ(q) − 2C(q, q̇))q̇ + q̇ᵀ(τ − G(q)) + Δẋcᵀ Kp Δxc   (6.11)
   = q̇ᵀ(τ − G(q)) + Δẋcᵀ Kp Δxc.   (6.12)

Since

τ = JᵀF = Jᵀ Jcᵀ Fc   (6.13)
Δxc = xc − xcd   (6.14)
    = C(x) − xcd   (6.15)
Δẋc = Jc ẋ,   (6.16)

then

V̇ = q̇ᵀ Jᵀ Jcᵀ Fc + (Jc ẋ)ᵀ Kp Δxc − q̇ᵀ G(q)   (6.17)
   = (J q̇)ᵀ Jcᵀ Fc + ẋᵀ Jcᵀ Kp Δxc − ẋᵀ J⁻ᵀ G(q)   (6.18)
   = ẋᵀ ( Jcᵀ (Fc + Kp Δxc) − J⁻ᵀ G(q) ).   (6.19)

This equation suggests the structure of the controller. Choosing the control law for the constraint force as

Fc = −Kp Δxc − Kv Jc ẋ + Jc⁻ᵀ J⁻ᵀ G(q),   (6.20)

where Kp and Kv are symmetric positive definite matrices, V̇ becomes

V̇ = −ẋᵀ Jcᵀ Kv Jc ẋ.   (6.21)

The function V̇ is only negative semi-definite, since

V̇ = 0 for ẋ = 0, ∀ Δxc.   (6.22)

Hence Lyapunov’s direct method leads to only stability of the equilibrium point. By invoking the Lasalle’s invariance principle, it can be shown that the largest invariant set S is given by

S = {x, x˙ :˙x =0, ∆xc =0}. (6.23)

Consequently the constrained space error, Δxc = xc − xcd, tends to zero asymptotically. The feedback control law becomes

τ = Jᵀ Jcᵀ Fc = −Jᵀ Jcᵀ (Kp Δxc + Kv Jc ẋ) + G(q).   (6.24)

6.3.2 Controller Description

The block diagram of the shared control algorithm is shown in Figure 6.1. Essentially, the constraint Jacobian, Jc, decomposes the Cartesian forces into the constrained space and the unconstrained space.

Figure 6.1: Block diagram of task space shared control.

It is worth pointing out that applying the constraint torques of (6.24) is equivalent to adding virtual mechanisms with passive springs and dampers normal to the constrained space. Suppose we have a constraint line parallel to the z axis; then applying shared control has the same effect as adding virtual springs and dampers lying in the x-y plane. From this point of view, the closed-loop system under the control law (6.24) is intuitively always stable, since the robotic system is passive. These virtual springs and dampers are very useful for actively guiding the surgeon: when the surgeon is pulling out the suture, the constrained path guides the surgeon toward the preferred direction with an adjustable impedance.

6.3.3 Example

To illustrate the procedure of shared control, we consider two simple examples of constraint equations and show how to obtain the constraint Jacobian, Jc(x).

• Move along the Straight Line

Suppose that the end-effector is required to stay along a specific line. In this case, the surgeon can control the roll as well as the translation along the line. Let p(x) ∈ ℝ⁴ represent the tip of the tool and let the x axis be the preferred line. Let n be the number of task space variables and m the number of constraints. Then the constraint space can be described as the set of all points satisfying the constraint equations

x = 0,  y = 0.   (6.25)

These constraints become

C(x) = [ x; y ] = 0.   (6.26)

The constraint Jacobian is the m × n matrix

Jc(x) = ∂C(x)/∂x = [ 1 0 0 0; 0 1 0 0 ].   (6.27)

6.3.4 Experimental Evaluation of Task Space Shared Controller

This section presents experimental results that illustrate the performance of the shared controller described in the previous section. For the experimental evaluation, a straight line through the center of the spherical joint was selected as the constrained path. The straight line can be parameterized as

x/a1 = y/a2 = z/a3.   (6.28)

The constraints can be represented by

C1(xT) = x/a1 − y/a2 = 0   (6.29)
C2(xT) = x/a1 − z/a3 = 0.   (6.30)

Combining them, we obtain

C(xT) = [ x/a1 − y/a2; x/a1 − z/a3 ] = 0.   (6.31)

The constraint Jacobian is the 2 × 4 matrix

Jc(xT) = [ 1/a1  −1/a2  0  0; 1/a1  0  −1/a3  0 ].   (6.32)
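A small numerical sketch of the constraint Jacobian (6.32), using the line parameters listed in Table 6.1 below; the test point is an arbitrary illustrative choice.

```python
import numpy as np

def line_constraints(a1, a2, a3):
    """Constraint value C(xT) and Jacobian Jc for the line x/a1 = y/a2 = z/a3,
    eqs. (6.29)-(6.32), with task coordinates xT = (x, y, z, phi)."""
    Jc = np.array([[1/a1, -1/a2, 0.0, 0.0],
                   [1/a1,  0.0, -1/a3, 0.0]])
    C = lambda xT: Jc @ np.asarray(xT)
    return C, Jc

C, Jc = line_constraints(1.0, 1.0, -2.0)   # parameters of Table 6.1
print(C([0.1, 0.1, -0.2, 0.3]))            # a point on the line: both constraints are ~0
```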

Table 6.1 shows the parameters used to specify the constraint line.

Table 6.1: Parameters for constrained line.

a1 | a2 | a3
 1 |  1 | -2


Figure 6.2: Desired constrained path.

Figure 6.2 shows the desired path, which is obtained with the above constraint Jacobian. The surgeon can slide the tip of the tool up and down along the desired trajectory and roll the tool freely. In shared control, the operator acts as an actuator in the unconstrained space but can also be a disturbance source in the constrained space; the performance of the shared controller therefore depends on the human operator. For this reason, during the performance evaluation the operator tried to move the tool in all directions with the same impedance. Figure 6.3 shows the measured task space positions during the motion with the shared controller.


Figure 6.3: Measured task space positions.

As can be seen, the resulting motion was completely satisfactory: the operator could easily slide the tool up and down and roll it. Figure 6.4 shows the constraint forces from the shared controller and Figure 6.5 shows the constraint forces mapped into the range space R(Jcᵀ). The range space R(Jcᵀ) represents the set of all possible task space forces that enforce the constraints; equivalently, it represents the forces exerted by the virtual springs and dampers. Figure 6.6 shows the corresponding joint torques. The torques in the first, second, and fourth joints keep the tip of the tool along the constrained path. Figure 6.7 demonstrates the performance of the shared controller in Cartesian space. The dotted line shows the desired constrained path and the solid line represents the measured task space positions from the shared controller.


Figure 6.4: Constraint forces.


Figure 6.5: Constraint forces projected onto the range space R(Jcᵀ).


Figure 6.6: Measured joint torques.

The shared control algorithm brings the tip of the tool onto the desired trajectory. Table 6.2 summarizes the performance of the shared controller in terms of the desired stiffness and the actual stiffness, obtained from the maximum deviation and the maximum constraint force in the constrained space without considering dynamic effects.

Table 6.2: Desired and measured stiffness for shared control.

    | Max(|e|) [m] | Max(|Fc|) [N] | Measured Stiffness, Ka [N/m] | Desired Stiffness, Kd [N/m]
ec1 | 0.0047 | 4.680 | 1003 | 1000
ec2 | 0.0041 | 4.097 | 1002 | 1000

6.4 Joint Space Shared Control Design

Let the motion constraints in the task coordinate xT be specified as

xc = c(xT) = 0,   (6.33)


Figure 6.7: Experimental result of task space shared control with a constrained line.

where c is a constraint function. Let f(·) be the complement of c so that

d(xT) = [ c(xT); f(xT) ]   (6.34)

is a diffeomorphism (i.e., d is one-to-one and onto, and d and d⁻¹ are both continuously differentiable). The basic idea of the shared control implemented in this section is to choose a generalized coordinate vector suited to describing the constrained motion. Because of the k constraints and the resulting loss of k degrees of freedom, the description of the constrained motion in

Cartesian coordinates is not efficient. Clearly, the mapping d(xT) transforms the position vector expressed in Cartesian coordinates into a generalized coordinate space that follows the constraint surface. In this generalized space, the motion is easily decomposed into a component in the constrained space and a component in the free space. The corresponding vector in Cartesian coordinates is obtained with the inverse mapping, as shown in Figure 6.8. The assumption of a diffeomorphism ensures that both position and velocity vectors map bijectively.

Figure 6.8: Concept of coordinates mapping.

If we use the same joint space controller as described earlier, shared control can be achieved by removing the free motion component from the desired task space trajectories. Specifically, let q be the measured joint positions. The desired free motion should be the same as the actual free motion, therefore f(xTd) = f(k(q)). The constrained motion is required to be zero, therefore c(xTd) = 0. The desired task space set points can then be obtained from

xTd = d⁻¹( [ 0; f(k(q)) ] ),   (6.35)

where d is from (6.34). The desired task space velocities can be found similarly:

ẋTd = [ Jc; Jf ]⁻¹ [ 0; Jf J q̇ ],   (6.36)

where Jc = ∂c/∂x and Jf = ∂f/∂x.

Once xTd is found, the same joint space controller can be used:

τ = −Kp(q − k⁻¹(xTd)) − Kd(q̇ − J⁻¹(q)ẋTd) + G(q) + N(q̇).   (6.37)

6.4.1 Controller Description

Figure 6.9: Block diagram of joint space shared controller.

Figure 6.9 shows a block diagram of the joint space shared controller described in the previous section. To better understand the controller, it is worthwhile to take a closer look at the map d(xT). This composite mapping relates the end-effector position in task space to the composite space, which consists of a constrained space and an unconstrained space. Let xcf ∈ ℝ⁴ be a vector in the composite space, k the number of constraints, and n the degrees of freedom of the system. Then the first k components of xcf represent the constrained motion of the end-effector and the last m = n − k components represent the free motion. This mapping decomposes the task space into a constrained space and an unconstrained space. From the desired constraints it is straightforward to define the constraint map c(xT), but the choice of f(xT) is less obvious. How should we choose the free motion function f(xT)? As stated before, d(xT) is a diffeomorphism, in other words a differentiable homeomorphism; a map is a homeomorphism if it is one-to-one and onto, continuous, and has a continuous inverse. To be invertible it must be nonsingular, so the free motion function f may be chosen as any function that is not linearly dependent on c. However, to avoid coupling, it is in general desirable to choose f to be orthogonal to c.

6.4.2 Examples

• Move along the Straight Line

Let the straight line be parameterized as

x/a1 = y/a2 = z/a3.   (6.38)

Then the constraint function is

c(xT) = [ x/a1 − y/a2; x/a1 − z/a3 ] = [ 1/a1  −1/a2  0  0; 1/a1  0  −1/a3  0 ] xT = 0.   (6.39)

The free motion function, f, can be chosen to be orthogonal to c. In this case, we may choose

f(xT) = [ a1  a2  a3  0; 0  0  0  1 ] xT.   (6.40)

The Jacobians for the constrained motion and the free motion are

Jc = [ 1/a1  −1/a2  0  0; 1/a1  0  −1/a3  0 ]   (6.41)

Jf = [ a1  a2  a3  0; 0  0  0  1 ].   (6.42)

• Move along the Circle

Suppose that the tip of the tool is required to stay on a circle lying in the x-y plane. Let the radius of the circle be r and its center be the point (x0, y0, z0). The constraint equations can be written as

c1(xT) = (x − x0)² + (y − y0)² − r²   (6.43)
c2(xT) = z − z0.   (6.44)

Then the constraint function is

c(xT) = [ (x − x0)² + (y − y0)² − r²; z − z0 ] = 0.   (6.45)

The constraint Jacobian is now

Jc = [ 2(x − x0)  2(y − y0)  0  0; 0  0  1  0 ].   (6.46)

Let θ denote the angle between the x axis and the vector r:

x = r cos(θ)   (6.47)
y = r sin(θ).   (6.48)

The free motion can be described by

f1(xT) = θ = tan⁻¹(y/x).   (6.49)

Let the tip of the tool be free to rotate about an axis parallel to the tool axis; then

f2(xT) = φ.   (6.50)

Jf becomes

Jf = [ −y/r  x/r  0  0; 0  0  0  1 ].   (6.51)
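A minimal sketch of the circular-constraint Jacobians (6.46) and (6.51), assuming the task coordinates are expressed relative to the circle center (Python/NumPy, illustrative only):

```python
import numpy as np

def circle_constraint_jacobians(xT, r):
    """Jc of (6.46) and Jf of (6.51) for the circular constraint.

    xT = (x, y, z, phi), with (x, y, z) measured from the circle center
    (x0, y0, z0); r is the circle radius.
    """
    x, y, z, phi = xT
    Jc = np.array([[2*x, 2*y, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0]])
    Jf = np.array([[-y/r, x/r, 0.0, 0.0],
                   [ 0.0, 0.0, 0.0, 1.0]])
    return Jc, Jf
```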

6.4.3 Experimental Evaluation of Joint Space Shared Controller

• Linear Trajectory

For the purpose of evaluating the joint space shared controller experimentally and comparing it with the task space controller, the same straight line used in evaluating the task space shared control was selected. Figure 6.10 shows the measured task space positions. The tip of the tool slides down and up, and the x and y positions increase and decrease in correspondence with the z position. Figure 6.11 shows the desired and measured positions in joint space, and Figure 6.12 shows the corresponding controller outputs.


Figure 6.10: Measured task space positions.

Figure 6.13 demonstrates the performance of the shared controller. The dotted line shows the desired constrained path and the solid line represents the measured task space positions from the shared controller. The constraint force in the direction normal to the free motion can be obtained as follows:

τ = Jᵀ Jcᵀ Fc,   (6.52)

where Fc is the constraint force in task space:

Fc = (Jcᵀ)⁺ J⁻ᵀ τ.   (6.53)

Then, the effective stiffness in the constrained space is

Kc = max|Fc| / max|xc|.   (6.54)


Figure 6.11: Desired and measured joint position


Figure 6.12: Measured joint torque.


Figure 6.13: Actual path with joint space shared controller

Table 6.3 summarizes the performance of joint space shared controller with the linear constrained motion.

Table 6.3: Effective constraint stiffness for linear trajectory

    | Max(|e|) [m] | Max(|Fc|) [N] | Effective Stiffness, Ke [N/m]
ec1 | 0.0013  | 8.1838 | 6268
ec2 | 0.00062 | 9.763  | 15570

• Circular Trajectory

Figure 6.14 shows the desired circular trajectory, which is centered at (0, 0, −0.15) m with a radius of 0.05 m. The experimental data of the joint space shared controller with the circular trajectory are illustrated in Figures 6.15–6.18.


Figure 6.14: Desired circle trajectory

Table 6.4 summarizes the performance of the joint space shared controller with the circular motion.

Table 6.4: Effective constraint stiffness for circular trajectory

    | Max(|e|) [m] | Max(|Fc|) [N] | Effective Stiffness, Ke [N/m]
ec1 | 0.0032 | 102.02 | 31400
ec2 | 0.002  | 3.49   | 1770


Figure 6.15: Measured task space position


Figure 6.16: Desired and measured joint position.


Figure 6.17: Measured joint torque


Figure 6.18: Actual trajectory of the end-effector in the task space for the circular constraint.

CHAPTER 7
TENSION CONTROL

7.1 Securing a Knot

After the knot is formed, it is secured by applying tension. For a square knot, two throws are required, and each throw must be made in the opposite direction and with a different tension. For the first throw, the proper tension should be applied to place a simple knot on the wound surface so that it does not contribute to tissue strangulation and yet has enough traction force to hold the knot as it is laid down; applying too much tension may tear the tissue. After the second throw, a precise tension must be applied so as not to break or loosen the knot.

7.1.1 Principle of Direction of Securing a Square Knot

Consider the suturing line lying on the tissue plane in Figure 7.1. Let the points pa and pb be the entry point and the exit point of the suture, respectively. For simplicity and without loss of generality, the base coordinate frame B can be attached to the point where the suturing line crosses the line connecting the pair of suturing points.

Define ha and hb to be the unit vectors pointing toward pa and pb in the frame B. After the needle is drawn through the tissue, the loop end, which holds the needle, is at the exit point pb and the post end is at pa. Define ple and ppe to be the positions of the loop end and the post end, and pn the knot position. Let el = (ple − pn)/‖ple − pn‖ and ep = (ppe − pn)/‖ppe − pn‖ be the unit vectors of the directions along which the tensions are applied. In the current configuration of the EndoBots, ple is the tip position of the stitcher and ppe the tip position of the grasper.

• Direction of the first throw of a square knot

For the first throw of the square knot, the stitcher holding the needle is forced to move along the direction ha and the grasper holding the suture tail is required to


Figure 7.1: Suturing line.

move along hb, as shown in Figure 7.2:

en · ha =1 (7.1) et · hb =1.

Figure 7.2: Direction of securing in the first throw.

• Direction of the second throw of a square knot

After the second throw, the direction of securing the knot should be opposite to the

first throw. The stitcher is forced to move along the direction hb and the grasper is required to move in the direction ha, as shown in Figure 7.3:

en · hb =1 (7.2) et · ha =1.

These requirements can be written as

en(iT) · hb = (−1)^{iT}   (7.3)
et(iT) · ha = (−1)^{iT},

Figure 7.3: Securing the knot.

where iT denotes the throw number, iT = 1, 2. If these requirements are not satisfied, the desired square knot is not achieved. For example, if the tension is applied in the opposite direction during the second throw, i.e.,

en(2) · hb = −1   (7.4)
et(2) · ha = −1,

the result is the unstable two half hitch knot shown in Figure 7.4.

Figure 7.4: An unstable two half hitch knot with reverse-directed tension during the second throw.

7.2 Tension Measurement

7.2.1 Basic Idea

Measuring the tension in the suture in minimally invasive surgery is very difficult because of the geometric constraints on mounting a sensor and because of sterilization. Strain gauge transducers are the most widely used sensors for force measurement. They can be mounted on the laparoscopic instruments and measure the strain of the instrument under the external forces, as shown in Figure 7.5. These sensors have high sensitivity, accuracy, and bandwidth with simple electronics, and with them the tension can be measured directly. The main drawback of strain gauge sensors in minimally invasive surgery is that they are very sensitive to temperature variation; contact with soft tissue during surgery may cause the outputs to drift. Another problem is that the surgical instruments go through a high temperature process to be sterilized for repeated use, which forces the use of disposable strain gauge sensors, and calibration with each tool must then be considered.

Figure 7.5: Strain gauge transducer.

Vision sensors can be used with pre-modeled and pre-marked sutures. If the suture has marks at a predetermined spacing, the tension can be estimated by measuring the elongation between two marks, as shown in Figure 7.6. The difficulty with this method is that the marks may not be visible due to contamination by blood or occlusion of the suture by the instruments, and it is also difficult to obtain high bandwidth and resolution. The idea in this research is to measure the tension with a force/torque sensor at the base of the manipulator, called a base sensor, as shown in Figure 7.7. It has two main benefits:

• The sensor does not need to be sterilized,

• Force measurement (and hence control) is independent of the tools.

Besides these benefits, we can use the sensor information to compensate the

Figure 7.6: Vision sensor.

Figure 7.7: Base force/torque sensor.

friction at the joints so that precise motion control can be achieved [43, 44]. It can also be used in system identification [84]. The main drawback of the base sensor approach is that the sensor outputs contain configuration-dependent gravity components and motion-induced components. In order to remove these static and dynamic effects from the signals, precise dynamic models and dynamic parameters are required.

7.2.2 Tension Estimation with a Base Force/Torque Sensor

Consider a robot manipulator mounted on the base force/torque sensor and interacting with the environment, as shown in Figure 7.8. The sensor output is a generalized force vector consisting of force components and moment components. This generalized force vector can be represented as a wrench:

Figure 7.8: Transformation of wrenches.

F = [ τx, τy, τz, fx, fy, fz ]ᵀ.   (7.5)

Then, the measured wrench from the base sensor can be expressed as the sum of the following components:

Fs = Fm + Fg + Fb,   (7.6)

where Fm is the motion-induced dynamic wrench, Fg is the configuration-dependent static wrench, and Fb is the base wrench exerted by the environment on the end-effector.

In tension control, where the motion is relatively slow, Fm can be neglected. The configuration-dependent static wrench due to gravity can be determined in real time from the mass and center of mass of each link. The first step in estimating the tension is therefore to compensate the gravity wrench:

Fb = Fs − Fg. (7.7)

The next step is to calculate the equivalent wrench applied at the end-effector; the measured base wrench must be transformed from the base frame to the tool frame.

Transformation of a wrench between two coordinate frames is achieved with the adjoint transformation Φ : ℝ⁶ → ℝ⁶,

Φ = [ R  0; p̂R  R ].   (7.8)

Let B be an inertial frame and E a frame attached to the rigid body. Let Fe represent a wrench applied at the origin of E, expressed in the E coordinate frame. Then the wrench in B's coordinate frame is given by

Fb = Φebᵀ Fe.

Thus, the wrench in the end-effector frame, Fe, is calculated from the measured base wrench as

Fe = Φbeᵀ Fb.

This can be expanded as

Fe = [ Reb  −Reb p̂be; 0  Reb ] Fb.   (7.9)

As a result, the corresponding joint torque can be obtained as

τ = Jᵀ Fe = Jᵀ Φbeᵀ Fb = (Jb)ᵀ Fb,   (7.10)

where Jb = Φbe J is the base Jacobian matrix.
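A short sketch of the wrench transformation (7.9), assuming the rotation R_eb and the offset p_be are available from the forward kinematics; the [torque; force] ordering follows (7.5) and the code is illustrative, not the thesis implementation.

```python
import numpy as np

def hat(p):
    """Skew-symmetric matrix such that hat(p) @ v = p x v."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def base_to_tool_wrench(F_b, R_eb, p_be):
    """Map the gravity-compensated base wrench F_b = [tau; f] to the
    end-effector frame, eq. (7.9)."""
    T = np.block([[R_eb, -R_eb @ hat(p_be)],
                  [np.zeros((3, 3)), R_eb]])
    return T @ F_b
```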

7.3 Force and Position Control

7.3.1 Problem Formulation

As mentioned before, the tension applied to the end of the suture should be regulated about the desired value while the robot end-effector is forced to lie on a straight line. In order to maintain the direction, we impose artificial constraints so that motion can occur only along the constraint line. The control objective is to design a controller that solves the following problem. Given the dynamic model of the robot manipulator, the environment model, fdes, and C(x) = 0, with initial conditions f(0) = 0, q(0), q̇(0) = 0, determine a feedback control law, τ, so that the closed-loop system satisfies

f(t) → fdes as t →∞

C(x(t)) → 0 ∀ t

where fdes is the desired tension and C(x) = 0 represents the holonomic constraint on the task space variables, defining the desired direction along which the tension is applied. Clearly, the tension control problem during knot securing can be turned into a force and motion regulation problem, which is reviewed in the following section.

7.3.2 Hybrid Force/Position Regulating Control

Since artificial constraints are imposed, securing the knot can be considered a geometrically constrained problem. When full information on the environment geometry is available, an effective strategy for combining position and force control is a hybrid force/motion control scheme, also referred to as hybrid control. The basic concept is the partition of the task space into motion and force directions based on orthogonality. To this end, the overall control torque can be split into two components:

τ = τm + τf (7.11)

where τm is the torque vector for motion control and τf is the torque vector for force control. The hybrid controller addressed here is similar in many respects to the dynamic hybrid control approach. Consider the simple problem of the three-link planar manipulator in Figure 7.9. The constraint can be expressed by C(x) = 0, where x ∈ ℝ² is the generalized position vector in Cartesian space. To express the end-effector position on the constraint line, a free motion function, f(x), can be chosen such that c(x) and f(x) are mutually independent. Then d(x) can be defined by d(x) = [c(x)ᵀ f(x)ᵀ]ᵀ, and it represents the end-effector position on the constraint hypersurfaces. Let Jc(x) = ∂c(x)/∂x be the Jacobian of the constraint

Figure 7.9: Constrained motion.

equation, and Jf(x) = ∂f(x)/∂x the Jacobian of the free motion function. In the classical dynamic hybrid control literature, the objective is to design a feedback controller that moves the end-effector along the line (the Jf direction) while applying a constraint force normal to the constraint (the Jc direction); in other words, Jc and Jf represent the force and position control directions, respectively. In the knot securing case, the objective is to find a feedback control law that regulates the force along the Jf direction while constraining the motion in the Jc direction. This can be viewed as the problem of a planar manipulator moving in a slot with an attached spring: the objective is to regulate the spring force while applying no force against the sides of the slot. The constraint equation can be written in joint space as

Figure 7.10: Constrained motion for securing the knot.

Φ(q) = C(k(q)) = 0,   Φ(q) = [Φ1(q), ..., Φm(q)]ᵀ,   (7.12)

where k(q) represents the forward kinematics. Then

JΦ(q) = ∂Φ(q)/∂q = (∂c(x)/∂x)(∂k(q)/∂q) = Jc J,   JΦ(q) ∈ ℝ^{m×n}.   (7.13)

Since there is no applied force against the sides of the virtual slot, the force required in Cartesian space to generate the desired tension can be written as

F = Jfᵀ ff,   F ∈ ℝⁿ,   (7.14)

where ff ∈ ℝ^{n−m} is the force generating the tension in the force-controlled subspace. Then the joint torque τf generating the force ff along the Jf direction is given by

τf = Jᵀ F = Jᵀ Jfᵀ ff = JΦᵀ ff.   (7.15)

The force ff can be obtained from the measured generalized force Fe:

ff = [ 0  I_{n−m} ] [ Jc; Jf ]⁻¹ Fe.   (7.16)

Then, the overall feedback control law is given by

τ = τm + τf   (7.17)

τm = M(q)um + G(q)   (7.18)

um = −Kp( q − k⁻¹(d⁻¹([ 0; f(k(q)) ])) ) − Kd( q̇ − J⁻¹ [ Jc; Jf ]⁻¹ [ 0; Jf J q̇ ] )   (7.19)

τf = Jᵀ [ Jc; Jf ]ᵀ [ 0; uf ].   (7.20)

The position regulation needed to satisfy the constraint was already verified in the shared control chapter; hereafter only the force regulation problem, i.e., the design of uf, is considered. Existing force control schemes can be grouped into two categories. The first is implicit force control, which achieves the desired compliant behavior through position-error-based control and is suitable for avoiding excessive force buildup; compliance control and impedance control belong to this category. The second is explicit force control. As the name implies, explicit force control closes a force feedback loop so that the output force can be regulated. Since the tension must be regulated about a desired value and a tension measurement is available, explicit force control is suitable here.

7.3.3 Explicit Force Control

The general explicit force control scheme used here is

uf = Kf fr + Kp(fr − fm) + Kd(ḟr − ḟm) + Ki ∫ (fr − fm) dt − Kv ẋm,   (7.21)

where fr is the reference force, fm is the measured force, Kf is the feedforward force gain, Kp is the proportional force gain, Kd is the derivative force gain, Ki is the integral force gain, Kv is the velocity gain for the active damping, and ẋm is the measured velocity. In order to achieve a compliant transition during contact, damping needs to be added to the system; with damping the system is more stable and the force response spike is reduced. The derivative force gain can provide damping by numerically differentiating the force signal, but because of the noise in the force signal, pure derivative force control is essentially infeasible. A filtered derivative, incorporating a low-pass filter, may be used; it reduces the noise at the expense of phase lag. An alternative is to add damping to the system actively. Active damping is widely used to deal with impact problems and is particularly effective for a soft environment such as the suture model, since for a soft environment the velocity trajectory is considerably smoother than for a stiff one; for this reason an active damping component is added. In addition, to ensure steady state performance and robustness, it is advisable to endow the controller with integral action. A feedforward component is usually added for time-varying desired force tracking, but it is not considered in this study. Finally, the explicit force control law is given by

uf = Kp(fr − fm) + Ki ∫ (fr − fm) dt − Kv ẋm,   (7.22)

where the three terms are the proportional force feedback, the integral force feedback, and the active damping, respectively.
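A minimal discrete-time sketch of the explicit force control law (7.22); the gains and the sampling time are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

class ExplicitForceController:
    """PI force feedback with active damping, eq. (7.22); gains are illustrative."""

    def __init__(self, Kp=1.0, Ki=0.5, Kv=1000.0, Ts=0.001):
        self.Kp, self.Ki, self.Kv, self.Ts = Kp, Ki, Kv, Ts
        self.integral = 0.0

    def update(self, f_ref, f_meas, v_meas):
        err = f_ref - f_meas
        self.integral += err * self.Ts            # rectangular integration of the force error
        return self.Kp * err + self.Ki * self.integral - self.Kv * v_meas
```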

7.4 Stability Analysis

7.4.1 Proportional plus Integral Control with Active Damping

Figure 7.11: Simplified second order environment model.

In this research, the suture is modeled as a second order system, as shown in Figure 7.11. F is the control input, and m, b, and k represent the modeled environmental mass, damping, and stiffness. The interaction force fm can be assumed to be measured through a position-dependent linear relation:

fm = kx. (7.23)

The environmental dynamics can be described as

m ẍ(t) + b ẋ(t) + k x(t) = F(t).   (7.24)

With PI control, this becomes

m ẍ(t) + b ẋ(t) + k x(t) = −Kp(fm(t) − fr(t)) − Ki ∫ (fm(τ) − fr(τ)) dτ,   (7.25)

where fr is the desired reference force. Substituting fm = kx and taking the Laplace transform gives

{ m s² + b s + k(1 + Kp) + kKi/s } X(s) = (Kp + Ki/s) Fr(s).   (7.26)

Figure 7.12: Explicit PI force control.

The transfer function of this system is

X(s)/Fr(s) = (Kp s + Ki) / (m s³ + b s² + k(1 + Kp) s + k Ki).   (7.27)

The characteristic polynomial of the system is

Π(s) = m s³ + b s² + k(1 + Kp) s + k Ki.   (7.28)

The Routh criterion yields the stability bound on the values of Ki and Kp:

0 < K_i < \frac{b}{m}(1 + K_p),   K_p > 0.   (7.29)

Note that the stability condition does not contain the environment stiffness k; stability can be achieved as long as the gains are bounded as above. Since the impedance parameters m and b are fixed, the range of the integral gain K_i can be increased either by increasing the proportional gain K_p or by adding active damping to the system.
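The Routh bound above is easy to check numerically. The short Python sketch below, using the simulation parameters quoted for Figure 7.13, computes the bound and the closed-loop poles of \Pi(s); it is an illustrative verification only, and the optional K_v term anticipates the active-damping case of (7.32)-(7.33).

```python
import numpy as np

m, b, k = 1.0, 1.0, 10.0      # environment parameters used for Figure 7.13
Kp, Kv = 1.0, 0.0             # set Kv > 0 to include active damping as in (7.32)

Ki_bound = (b + Kv) * (1.0 + Kp) / m      # Routh bound on the integral gain
print("upper bound on Ki:", Ki_bound)     # 2.0 for Kp = 1, Kv = 0

for Ki in (1.0, 2.0, 3.0):
    # characteristic polynomial m s^3 + (b + Kv) s^2 + k(1 + Kp) s + k Ki
    poles = np.roots([m, b + Kv, k * (1.0 + Kp), k * Ki])
    print(f"Ki = {Ki}: poles = {np.round(poles, 3)}, "
          f"stable = {bool(np.all(poles.real < 0))}")
```

For K_i = 3 the complex pole pair lies in the right half-plane, matching the K_i > 2 instability observed in the root locus.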


Figure 7.13: Root locus under explicit PI force control with K_p = 1.

The root locus of this system is shown in Figure 7.13 with parameters selected for simulation purposes (m = 1, b = 1, k = 10), the proportional gain K_p = 1, and the integral gain in the range 0 ≤ K_i ≤ 10. As shown in the root locus plot, increasing K_i moves the dominant poles toward the right half-plane and eventually destabilizes the system for K_i > 2, as predicted. Figure 7.14 shows the root locus for the system with K_p = 10. As expected, the stability condition is satisfied over the entire range 0 ≤ K_i ≤ 10 and the poles never cross the imaginary axis. When the active damping is added, the equation of motion is



m\ddot{x} + b\dot{x} + kx = -K_p (f_m - f_r) - K_i \int (f_m - f_r)\, dt - K_v \dot{x}.   (7.30)

The transfer function of this system is given by

\frac{X(s)}{F_r(s)} = \frac{K_p s + K_i}{ms^3 + (b + K_v)s^2 + k(1 + K_p)s + kK_i}.   (7.31)

The characteristic polynomial of the system is

\Pi(s) = ms^3 + (b + K_v)s^2 + k(1 + K_p)s + kK_i.   (7.32)


Figure 7.14: Root locus under explicit PI force control with Kp =10.

Figure 7.15: Explicit PI force control with active damping.

It is Hurwitz if and only if the gains are chosen such that

0 < K_i < \frac{b + K_v}{m}(1 + K_p),   K_p > 0.   (7.33)

Clearly, the upper bound of K_i increases with active damping, as expected, and this can be seen in Figure 7.16. Figure 7.16 shows the root locus of the system with active damping under the same parameters as Figure 7.13, where, without the active damping term, a crossover of the imaginary axis is observed as K_i is increased. With active damping, however, the upper bound of K_i is large and the stability condition remains satisfied. Note that another stationary pole is not shown in the plot, which is enlarged to show the root locus detail.


Figure 7.16: Root locus under explicit PI force control with the active damping Kv = 1000 and Kp =1.

7.4.2 Stability of Time Delayed System

A time delay causes phase lag in a control system and consequently affects its stability. Due to signal processing or transportation lag, there is a time delay in the force measurement. To investigate the influence of delayed force measurement, the equation of motion is modified with a time delay in the measured force term. The transfer function of a time delay is given by the transform of the impulse response delayed by T seconds:

Figure 7.17: Explicit PI force control with time delay.

\mathcal{L}\{h(t)\} = \mathcal{L}\{\delta(t - T)\} = e^{-sT}.   (7.34)

Clearly, the transfer function of a time delay is not a rational function. With a delayed force measurement, the closed loop system is

m\ddot{x}(t) + b\dot{x}(t) + kx(t) = -K_p (f_m(t - T) - f_r(t)) - K_i \int (f_m(\tau - T) - f_r(\tau))\, d\tau.

If we take the Laplace transform, we obtain

\Big( ms^2 + bs + k + kK_p e^{-sT} + \frac{kK_i}{s} e^{-sT} \Big) X(s) = \Big( K_p + \frac{K_i}{s} \Big) F_r(s).   (7.35)

The irrational transfer function of a time delay can be approximated with a Taylor series expansion,

e^{-sT} = 1 - sT + \frac{s^2 T^2}{2} - \frac{s^3 T^3}{6} + \cdots.   (7.36)

Taking a first order approximation gives rise to

\Big[ ms^2 + (b - kK_pT)s + k(1 + K_p - K_iT) + \frac{kK_i}{s} \Big] X(s) = \Big( K_p + \frac{K_i}{s} \Big) F_r(s).   (7.37)

The transfer function of this system is

\frac{X(s)}{F_r(s)} = \frac{K_p s + K_i}{ms^3 + (b - kK_pT)s^2 + k(1 + K_p - K_iT)s + kK_i}.   (7.38)

The characteristic polynomial of the system is

\Pi(s) = ms^3 + (b - kK_pT)s^2 + k(1 + K_p - K_iT)s + kK_i.   (7.39)

The Routh criterion yields the following stability conditions:

(i)   (b - kK_pT) > 0   \longrightarrow   K_p < \frac{b}{kT}

(ii)  kK_i > 0   \longrightarrow   K_i > 0   (7.40)

(iii) (b - kK_pT)\, k(1 + K_p - K_iT) - mkK_i > 0   \longrightarrow   K_i < \underbrace{\frac{b - kK_pT}{m + (b - kK_pT)T}}_{\lambda}\,(1 + K_p).

Thus, we conclude that the stability bounds of the time delayed system can be obtained as

K_p < \frac{b}{kT},   0 < K_i < \lambda (1 + K_p).   (7.41)

The upper bound of K_i depends on \lambda, which in turn depends on the impedance parameters m, b, k as well as the time delay T. Let \lambda' be the reciprocal of \lambda; then

\lambda' = \frac{1}{\lambda} = \frac{m}{b - kK_pT} + T,   (7.42)

where b - kK_pT is always positive due to the first stability condition of (7.40). Clearly \lambda' increases as T increases, and consequently \lambda, and with it the upper bound of K_i, decreases. Thus the effect of a time delay is to decrease the stability range of the proportional gain K_p as well as that of K_i. To show the effect of the time delay on the stability of the system, the root locus with K_p = 10 is shown in Figure 7.18 under a small time delay, T = 0.002 s, selected to satisfy the first stability condition in (7.40).

As mentioned before, without a time delay the system is always stable with K_p = 10 and 0 ≤ K_i ≤ 10. Introducing the small time delay decreases the upper bound of K_i to 8.82, and the stability condition is violated for K_i beyond that value. The stability bounds of the time delayed system with the active damping K_v can be obtained by substituting b with b + K_v:

K_p < \frac{b + K_v}{kT},   0 < K_i < \kappa (1 + K_p),   (7.43)

\kappa' = \frac{1}{\kappa} = \frac{m}{(b + K_v) - kK_pT} + T,   (7.44)

where (b + K_v) - kK_pT > 0 by the first condition.
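As a quick numerical check of these bounds (an illustration only; it uses the first-order delay approximation, so the value may differ slightly from the exact figure quoted in the text, and the variable names are arbitrary):

```python
m, b, k = 1.0, 1.0, 10.0          # environment parameters from the simulations
Kp, Kv, T = 10.0, 0.0, 0.002      # set Kv > 0 to see the bound widen per (7.43)

b_eff = b + Kv                     # active damping enters through b -> b + Kv
assert Kp < b_eff / (k * T), "first stability condition of (7.40) violated"

a2 = b_eff - k * Kp * T            # s^2 coefficient after the first-order delay approximation
Ki_max = a2 * (1.0 + Kp) / (m + a2 * T)   # upper bound on Ki from condition (iii)
print("Ki upper bound with delay:", round(Ki_max, 2))   # about 8.8 for these values
```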

It is seen that the effect of adding the active damping to the time delayed system is to increase the range of K_i by widening the admissible range of K_p, as expected. The resulting root locus is shown in Figure 7.19. As can be seen in the root locus plot,


Figure 7.18: Root locus of the time delayed system under explicit PI force control with Kp =10.

the active damping makes the system more stable by raising the upper bound of K_i.


Figure 7.19: Root locus of the time delayed system under explicit PI force control with active damping Kv = 1000 and Kp =10.

7.5 Experimental Study

7.5.1 Experimentation Environment

A six axis force/torque sensor (ATI Model 15/50) with a force range of 65 N and a torque range of 5 Nm is mounted between the base plate and the mounting block as in Figure 7.20. The sensor outputs are interfaced to the PC through

Figure 7.20: Experimental testbed for tension control.

the serial communication and then, through the Ethernet, to a DSP-based motion controller from ARCS Inc. The suture used in this experimental study is a USP (United States Pharmacopeia) 2-0 suture (coated, braided silk). Table 7.1 summarizes the suture diameters for several suture sizes together with the knot-pull tensile strength [82]. The knot-pull tensile strength is the average tension limit, specified by USP, at which breaking failure occurs in a knotted suture.

Table 7.1: Suture diameter-strength relationship.

USP size   Metric size   Diameter min [mm]   Diameter max [mm]   Knot-Pull Tensile Strength [kg]
0          3.5           0.35                0.399               2.16
2-0        3             0.30                0.339               1.44
3-0        2             0.20                0.249               0.96
4-0        1.5           0.15                0.199               0.60
5-0        1.0           0.10                0.149               0.40
6-0        0.7           0.07                0.099               0.20

An attempt to find an optimal tension value for suturing failed because of a lack of published information, so the value used here was selected from a surgeon's personal experience. Regarding the suture pullout values for various tissues, the available experimental results are summarized in Table 7.2 [83]. The suture pullout value is defined as the tension that causes tissue failure. The free tail of the suture is fixed

Table 7.2: Suture pullout values.

Tissue    Tension [N]
Fat        1.96
Muscle    12.45
Skin      17.85
Fascia    36.98

with a fixture and the needle tail is attached to one of the EndoBots as shown in Figure 7.20. The joint velocities are estimated with the washout filter and then transformed to Cartesian velocities, \dot{x}_m = J(q)\dot{q}. The discrete controller was implemented at a 2 ms sampling rate.
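The details of the washout filter are not given in this section; as a hedged illustration of the velocity-estimation step, the sketch below uses a generic first-order high-pass ("dirty derivative") stand-in and a placeholder Jacobian, neither of which should be read as the EndoBot's actual implementation.

```python
import numpy as np

class WashoutVelocityEstimator:
    """Generic first-order high-pass velocity estimate: qdot ~ (q - q_lp) / tau."""
    def __init__(self, tau, q0):
        self.tau = tau
        self.q_lp = np.asarray(q0, dtype=float)   # low-pass filtered position state

    def update(self, q, dt):
        qdot = (np.asarray(q, dtype=float) - self.q_lp) / self.tau
        self.q_lp += qdot * dt                    # advance the low-pass state
        return qdot

est = WashoutVelocityEstimator(tau=0.05, q0=[0.0, 0.0, 0.0, 0.0])
qdot = est.update(q=[0.01, -0.02, 0.0, 0.001], dt=0.002)   # 2 ms sample

J = np.eye(6, 4)          # placeholder; the real J(q) comes from the kinematic model
xdot_m = J @ qdot         # Cartesian velocity used by the active damping term
```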

7.5.2 Experimental Evaluation of Tension Controller

This section presents the results of implementing the explicit force control strategy described in the previous section. The basic tension control used in these experiments consisted of first bringing the end-effector close to a position where the suture is not in tension. The hybrid control scheme was then started and the desired force commanded in the direction of pulling the suture. This caused the manipulator to pull the suture taut and exert the desired tension. The data from each control loop was captured at a 100 Hz sampling rate and stored by the PC for off-line analysis. The force reference in these experiments is stepped to -15 N. The performance of each controller was quantified using the following measures:

RMS force error   e_{RMS} = \sqrt{ \frac{ \sum_{k=1}^{N} (f_m(k) - f_r)^2 }{ N } }

Maximum force error   e_{\infty} = \| f_m(k) \|_{\infty}
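For reference, the two measures are straightforward to compute from logged data; the short helper below is an illustrative implementation only (the log-file name is hypothetical).

```python
import numpy as np

def force_error_metrics(f_m, f_r):
    """RMS force error and maximum (infinity-norm) measured force,
    following the two measures defined above."""
    f_m = np.asarray(f_m, dtype=float)
    e_rms = np.sqrt(np.mean((f_m - f_r) ** 2))
    e_inf = np.max(np.abs(f_m))
    return e_rms, e_inf

# usage with data logged at 100 Hz and a -15 N step reference
# f_log = np.loadtxt("force_log.txt")          # hypothetical log file
# print(force_error_metrics(f_log, -15.0))
```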

The first set of experiments was performed to see the effect of the active damping, and the resulting force trajectories are plotted in Figures 7.21-7.23. As expected and as seen in Figure 7.21, the integral force control gave zero steady state error to the step, so that the actual force converged to the desired value of -15 N. However, an initial force spike was observed during the transient contact period due to the integral action and the lack of damping. The next two figures show the effect of adding active damping to the system in order to reduce the initial force spike. With the same integral gain, the active damping is very effective in achieving a compliant contact transition. The performance of this controller was further improved by changing the active damping gain K_v, as shown in Figure 7.23. Table 7.3 summarizes the performance of these controllers. From these

Table 7.3: Comparison of force controller with active damping.

Gains                       e_RMS [N]   e_inf [N]
K_i = 0.5, K_v = 0             5.92       71.91
K_i = 0.5, K_v = 1000          4.76       31.45
K_i = 0.5, K_v = 2000          4.70       27.21

results, it is clear that active damping is a promising method to reduce the initial force spike at the cost of increasing the settling time. Introducing the proportional gain K_p achieved a faster response at the expense of slightly increasing the force spike, as shown in Figure 7.24. Figure 7.25 shows the effect of increasing the proportional gain K_p toward the stability bound. As can be seen in the plot, a bouncing phenomenon was observed and the performance was degraded, as shown in Table 7.4.

Table 7.4: Effect of the proportional gain.

Gains                                     e_RMS [N]   e_inf [N]
K_p = 0.3, K_i = 0.5, K_v = 2000             3.68       31.24
K_p = 0.5, K_i = 0.5, K_v = 2000             4.84       33.19

The bouncing phenomenon was also observed when decreasing the active damping gain K_v, and the experimental results can be seen in Figures 7.26-7.28. As predicted, reducing the active damping lowers the stability bounds on the control gains and consequently shows a tendency toward instability. A significant improvement in force control performance was achieved by modifying the force reference trajectory as proposed in [56]. Based on the observation that a small initial force error prevents the integrator from excessively winding up, the force reference trajectory was modified so that the desired force value increases with a slope determined by the currently measured force value. The initial force error is then small, and consequently the initial force spike is reduced considerably, as shown in Figure 7.29. The reduction of the force spike is apparent from the plot, and the performance in terms of RMS force error and maximum force value is given in Table 7.5.

Table 7.5: Performance of force controller with a modified force reference trajectory.

Gains                                     e_RMS [N]   e_inf [N]
K_p = 0.2, K_i = 0.3, K_v = 2000             1.93       15.83
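As an illustration of the reference-shaping idea (a sketch only; the saturation at the final target and the function form are assumptions consistent with the F_des = -(f + 5) N rule used for Figure 7.29, not the exact implementation), the reference can be regenerated each sample from the measured force:

```python
def shaped_reference(f_m, f_target=-15.0, lead=5.0):
    """Modified force reference: command only `lead` newtons beyond the
    currently measured tension, F_des = -(|f_m| + lead), and saturate at the
    final target so the integrator never sees a large initial error."""
    f_des = -(abs(f_m) + lead)
    return max(f_des, f_target)      # forces are negative (pulling), so max() saturates

print(shaped_reference(0.0))      # -5.0 N at first contact
print(shaped_reference(-12.0))    # -15.0 N once the target is within reach
```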


Figure 7.21: Experimental data from integral control with K_i = 0.5.


Figure 7.22: Experimental data from integral control plus active damping with Ki =0.5 and Kv = 1000.


Figure 7.23: Experimental data from integral control plus active damping with Ki =0.5 and Kv = 2000.


Figure 7.24: Experimental data from PI control plus active damping with K_p = 0.3, K_i = 0.5, and K_v = 2000.


Figure 7.25: Experimental data from PI control plus active damping with K_p = 0.5, K_i = 0.5, and K_v = 2000.


Figure 7.26: Experimental data from PI control plus active damping with K_p = 0.2, K_i = 0.75, and K_v = 2000.


Figure 7.27: Experimental data from PI control plus active damping with K_p = 0.2, K_i = 0.75, and K_v = 1000.


Figure 7.28: Experimental data from PI control plus active damping with K_p = 0.2, K_i = 0.75, and K_v = 500.


Figure 7.29: Experimental data from PI control plus active damping with K_p = 0.2, K_i = 0.3, K_v = 2000, and the modified force reference trajectory with F_des = -(f + 5) N.

CHAPTER 8
CONCLUSIONS

The goal of this thesis is to develop a robotic system for the laparoscopic suturing task. This chapter concludes the thesis by providing a summary of the results obtained in the preceding chapters and suggesting areas for future research.

8.1 Summary

The focus of this thesis is a supervisory autonomous robotic system for MIS suturing. A new surgical robotic system built in-house for minimally invasive surgery is presented. Based on the analytical model, parameter identification is carried out with various input signals. For robotic suturing using the EndoBot, we carried out an analysis of the suturing task and developed and implemented robotic suturing algorithms. For knot placement, a sliding condition based on a two-point contact model of a knotted suture is developed. In order to guarantee safe operation in the suturing task, in which the dynamics of the system cannot be completely predicted over the entire range of operation due to the non-deterministic behavior of the environment, a human-sharing supervisory controller is implemented with the corresponding state transition diagram for the suturing task. For autonomous execution of the suturing task, an energy based output feedback controller and an optimal state feedback controller with a Kalman filter based on the globally linearized system are implemented. When human operators interact with robot systems, augmentation plays a key role in enhancing the human's capability. When a surgeon and a robot control different aspects of the task, the shared control proposed in this thesis can augment the human's capability by imposing artificial constraints. Finally, a base force/torque sensor method is presented for tension measurement, and a hybrid force/position strategy is implemented to effectively regulate the tension while achieving the desired motion profile.


8.2 Future Work

This section describes several possible areas of future research.

Manipulator Enhancement Medical palpation plays an important role in both diagnosis and therapy. The ability to sense tactile information can enhance the robustness of the suturing tasks [86, 87]. A major extension of this research related to palpation would be to develop MIS instruments with tactile sensors. With tactile sensors, closed-loop feedback control for regulating the gripping force can be implemented; this would clearly be useful for handling soft tissues and securely grasping the suture. For performing cooperative tasks with the two EndoBots, calibration between the two local frames is required. At present, kinematic calibration is a tedious and time consuming procedure, and it is desirable to develop a more automatic calibration method. One possibility is to equip the supporting arms with position sensors.

Vision Feedback Vision-based control can enhance the performance of the suturing task in autonomous operation. The primary advantage of vision sensors is their ability to provide information on the suture trajectory. From the preliminary experiments on the suturing task with the EndoBot system, grasping the suture tail is a difficult and challenging task. Since it is not possible to predict the trajectory of the suture tail, integrating a vision sensor in the feedback loop can enhance the robustness of the suturing algorithms. Calibration of two cameras to obtain 3D depth information, and visibility in the presence of blood, are technically challenging issues. Vision sensors can also be used to generate a local map so as to provide active guidance in manual mode. With active deformable models generated from vision information, surgeons can indicate the locations for surgical procedures such as knot tying and cutting.

Quantification of Knot Quality Following directly from this research, it will be important to find the optimal tension values for securing the knot. The optimal tension values may differ with the strength of the tissues and the tensile strength of the sutures, but no publication reporting them was found. Therefore,

it is necessary to obtain the optimal tension values through experiments with in vivo tissues. Another thrust of the research on suturing will be to quantitatively evaluate knot quality. It may be necessary to construct evaluation criteria such as the size of the knot and the magnitude of the knot breakage force.

Surgical Simulator Another thrust toward surgical applications would be to develop a virtual reality simulator for the suturing tasks in minimally invasive surgery [88, 89, 90]. A dynamic suture model is required to train surgeons for the suturing tasks [85].

Surgical Robot System for the Beating Heart Beating heart bypass surgery can eliminate many of the complications associated with stopping the heart and using a heart-lung machine [92]. Beating heart bypass surgery requires surgeons to stitch together vessels on the beating heart. Motion synchronization is the key technical difficulty [91].

LITERATURE CITED

[1] H. Paul, B.D. Mittelstadt, W.L. Bargar, B.L. Musits, R.H. Taylor, P. Kazanzides, J.F. Zuhars, and B. Williamson, “A Surgical robot for total hip replacement surgery,” in Proceedings of the 1992 IEEE International Conference on Robotics and Automation, vol. 1, pp 606-611, Nice, France, 1992. [2] T.C. Kienzle, S.D. Stulberg, M. Peshkin, A. Quaid, and C-H. Wu,“An integrated CAD-robotics system for total knee replacement surgery,” in Proceedings of the 1993 IEEE International Conference on Robotics and Automation, vol. 1, pages 889-894, Atlanta, GA, 1993. [3] M. Fadda, T. Wang, M. Marcacci, S. Martelli, G. Marcenaro, M. Nanetti, C. Paggetti, A. Visani, and S. Zaffagnini, “Computer-assisted knee arthroplasty at Rizzoli institutes,” In Proceedings of the International Symposium on Medical Robotics and Computer Assisted Surgery, pages 26-30, 1994. [4] Ho, S.C., R.D. Hibberd, and B.L. Davies, “Robot assisted knee surgery,” IEEE Engineering in Medicine and Biology Magazine Sp. Issue on Robotics in Surgery, May/June, pages 292-300, 1995. [5] R.E. Ellis, “From scans to sutures: robotics methods for computer-enhanced surgery,” in 9th International Symposium of Robotics Research, pp203-210, Snowbird, Utah, 1999. [6] I. W. Hunter, T. D. Doukoglou, S. R. Lafontaine, P. G. Charette, L. A. Jones, M. A. Sagar, G. D. Mallinson, and P. J. Hunter, “A teleoperated microsurgical robot and associated virtual environment for eye surgery,” Presence, vol. 2, pages 265-280, 1993. [7] Y. S. Kwoh, J. Hou, E. A. Jonkeere, and S. Hayati, “A robot with improved absolute positioning accuracy for CT guided stereotactic brain surgery,” IEEE Transactions on Biomedical Engineering, 35(2):153-160, 1988. [8] B.Davies, S.Starkie, S.J. Harris, E. Agterhuis, V.Paul, and L.M. Auer, “ Neurobot: a special-purpose robot for neurosurgery,” in Proceedings of the 2000 IEEE International Conference on Robotics and Automation,vol.4, pages 4104-4109, San Francisco, CA, 2000. [9] A. Faraz and S. Payandeh, “A Robotic Case Study: Optimal design for laparoscopic positioning stands,” in Proceedings of the 1997 IEEE International Conference on Robotics and Automation, pages 1553-1560, Albuquerque, NM, 1997.


[10] J. Funda, B. Eldridge, K. Gruben, S.Gomory, and R. Taylor,“Comparison of two manipulator designs for laparoscopic surgery,” IEEE Engineering in Medicine and Biology Magazine, Vol.14, No.3, pages 289-291, 1994., Telemanipulator and Telepresence Technologies, SPIE, Vol.2351, pages 172-183, 1994. [11] A.J. Madhani and J.K Salisbury, “Wrist mechanism for surgical instrument for performing minimally invasive surgery with enhanced dexterity and sensitivity,” U.S. Patent No. 5,797,900, 1997. [12] M.C. Cavusoglu, F. Tendick, M. Cohn, and S.S. Sastry, “A laparoscopic telesurgical workstation,” IEEE Transactions on Robotics and Automation, Vol.15, No. 4, pages 728-739, 1999. [13] A. Faraz and S. Payandeh, Synthesis and workspace study of endoscopic extenders with flexible stems,ERLTechnicalReport. [14] A. Faraz, S. Payandeh, and A. Nagy, “Issues and design concepts in endoscopic extenders,” in Proceedings of 1995 IFAC Man/Machine Systems, pages 231-236,1995. [15] C. Reboulet and S. Durand-Leguay, “Optimal design of redundant parallel mechanism for endoscopic surgery,” in Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems,Vol.2,pages 1432-1437, 1999. [16] C. Scott-Conner, The Sages Manual: Fundamentals of laparoscopy and GI endoscopy, Springer, 1998. [17] Z. Szabo and A. Cuschieri, Tissue approximation in endoscopic surgery,ISIS Medical Media, 1995. [18] C. Bernard, H. Kang, S.K. Singh, and J.T. Wen, “Robotic system for collaborative control in minimally invasive surgery,” Industrial Robot, 26(6), Nov/Dec, pages 476-484, 1999. [19] A.Melzer, M.O. Schurr, M. Lirich, B. Klemm, D.Stockel, and G.Buess, “Future trends in endoscopic suturing,” End. Surg., pages 78-82, 1994. [20] A.J. Madhani, G. Niemeyer, and J.K. Salisbury, “The black falcon: a teleoperated surgical instrument for minimally invasive surgery,” in Proceedings of the 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 2, pages 936-944, 1998. [21] C. Cao, C. MacKenzie, S. Payandeh,“Task and motion analyses in endoscopic surgery,” in ASME IMECE Conference Proceesings: 5th Annual Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pages 583-590, Atlanta, GA, 1996. 152

[22] J. Funda, R. Taylor, K. Gruben, and D. La Rose, “Optimal motion control for teleoperated surgical robots,” in SPIE Symposium on Optical Tools for Manufacturing and Advanced Automation, Vol. 2057, pages 211-222, Boston, MA, 1993.

[23] J. Funda, R. Taylor, B. Eldridge, S. Gomory, and K. Gruden, “Constrained cartesian motion control for teleoperated surgical robots,” IEEE Transactions on Robotics and Automation, Vol.12, No. 3, pages 453-465, 1996.

[24] Y. Wang, D.R. Uecker, K.P. Laby, J. Wilson, S. Jordan, and J. Wright, “Method and apparatus for performing minimally invasive cardiac procedures,” U.S. Patent No. 5,762,458, 1998.

[25] M. Mitsuishi, T. Watanabe, H. Nakanishi, T. Hori, H. Watanabe, and B. Kramer, “A tele-microsurgery system across the internet with a fixed viewpoint/operation-point,” in Proceeding of the 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems,Vol.2,pages 178-185, 1995.

[26] S. Charles, H. Das, T. Olm, C. Boswell, G. Rodrigues, R. Steele, and D. Istrate, “Dexterity-enhanced telerobotic microsurgery,” in International Conference on Advanced Robotics, Monterey, CA, 1997.

[27] F. Tendick and M.C. Cavusoglu,“Human-machine interfaces for minimally invasive surgery,” in Proceedings of the IEEE International Conference on Engineering Medical Biology Society, Chicago, IL, 1997.

[28] F. Tendick, M.S. Downes, M.C. Cavusoglu, and L.W. Way, “Development of virtual environments for training skills and reducing errors in laparoscopic surgery,”in Proceedings of the SPIE International Symposium on Biological Optics (BIOS’98), San Jose, CA, 1998.

[29] H. Kazerooni and J. Guo, “Human extenders,” Journal of Dynamics Systems, Measurement and Control, Vol. 115, pages 281-290, 1993.

[30] T.B. Sheridan, Telerobotics, automations, and human supervisory control, Cambridge, MIT Press, 1992.

[31] P.G. Backes and K.S. Tso, “UMI: an interactive supervisory and shared control system for telerobotics,” in Proceedings of the 1990 IEEE International Conference on Robotics and Automation, pages 1096-1101, Cincinnati, OH, 1990.

[32] S.A. Hayati and S.T. Venkataraman, “Bilevel shared control for teleoperators,” U.S. Patent No. 5,086,982, 1992.

[33] A. Douglas and Y. Xu, “Real-time shared control system for space telerobotics,” Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 13, pages 247-262, 1995.

[34] R. Kumar, P. Berkelmen, P. Gupta, A. Barnes, P. Jensen, L.L. Whitcomb, and R.H. Taylor, “Preliminary experiments in cooperative human/robot force control for robot assisted microsurgical manipulation,” in Proceedings of the 2000 IEEE International Conference on Robotics and Automation, 2000.

[35] R. Kumar, P. Jensen, and R.H. Taylor, “Experiments with a steady hand robot in constrained compliant motion and path following,” In Proceedings of IEEE RO-MAN99, pages 92-97, Pisa, Italy, 1999.

[36] R. Kumar, T.M. Goradia, A.C. Barnes, P. Jensen, L.L. Whitcomb, D. Stoianovici, L.M. Auer, and R.H. Taylor, “Performance of robotic augmentation in microsurgery-scale motions,” In Proceedings of Medical Image Computing and Computer Assisted Intervention, pages 1108-115, Cambridge,UK, 1999.

[37] G.J. Hamlin, and A.C. Sanderson, “A novel concentric multilink spherical joint with parallel robotics applications,” in Proceedings of the 1994 IEEE International Conference on Robotics and Automation, pages 1267-1272, 1994.

[38] R.M. Murray, Z. Li, and S.S. Sastry, A mathematical introduction to robotic manipulation, CRC Press, Boca Raton, FL, 1994.

[39] B. Armstrong, “Friction: Experimental determination, modelling and compensation,” in Proceedings of the 1988 International Conference on Robotics and Automation, pages 1422-1427, 1988.

[40] B. Armstrong-Helouvry, Control of machine with friction, Kluwer Academic, 1991.

[41] P. Lischinsky, C. Canudas-de-Wit, and G. Morel, “Friction compensation for an industrial hydraulic robot,” IEEE Control Systems Magazine, vol. 19, no. 1, pages 25-32, 1999.

[42] S.C.P Gomes and J.P. Chretien, “Dynamic modeling and friction compensated control of a robot manipulator joint,” in Proceedings of the 1992 International Conference on Robotics and Automation, pages 1429-1435, 1992.

[43] G. Morel and S. Dubowsky, “The precise control of manipulators with joint friction: a base force/torque sensor method,” in Proceedings of the 1996 IEEE International Conference on Robotics and Automation, pages 360-365, 1996.

[44] M. Meggiolaro, P. Jaffe, and S. Dubowsky, “Achieving fine absolute positioning accuracy in large powerful manipulators,” in Proceedings of the

1999 IEEE International Conference on Robotics and Automation, pages 2819-2824, 1999.

[45] J.T. Wen and D.S. Bayard, “A new class of control laws for robotic manipulators, Part I: Non-adaptive case,” International Journal of Control, 47(5), pages 1361-1385, 1988.

[46] J.T. Wen and S.H. Murphy, “Position and force control of robot arms,” IEEE Transactions on Automatic Control, 36(3), pages 365-374, 1991.

[47] C. Canudas-de-Wit, B. Siciliano, and G. Bastin, Theory of robot control, Springer, 1996.

[48] M.R. Schroeder, “Synthesis of low-peak-factor signals and binary sequences of low autocorrelation,” IEEE Transactions on Information Theory, 16, pages 85-89, 1970.

[49] K. Kozlowski, Modeling and identification in robotics, Springer, 1998.

[50] C.H. An, C.G. Atkeson, and J.M. Hollerbach, “Estimation of inertial parameters of rigid body links of manipulators,” in Proceedings of the 24th IEEE Conference on Decision and Control, pages 990-995, 1985.

[51] B. Armstrong, “On finding exciting trajectories for identification experiments involving systems with nonlinear dynamics,” International Journal of Robotics Research, Vol.8, pages 28-48, 1989.

[52] M. Gautier and W. Khalil, “Exciting trajectories for the identification of base inertial parameters of robots,” International Journal of Robotics Research, Vol.11, pages 362-375, 1992.

[53] M. Gautier, “Optimal motion planning for robot’s inertial parameters identification,” in Proceedings of the 31th IEEE Conference on Decision and Control, pages 70-73, 1992.

[54] C. Presse and M. Gautier, “New criteria of exciting trajectories for robot identification,” in Proceedings of the 1993 IEEE International Conference on Robotics and Automation, pages 907-912, 1993.

[55] M. Gautier, “A comparison of filtered models for dynamic identification of robots,” in Proceedings of the 35th IEEE Conference on Decision and Control, pages 875-880, 1996.

[56] L.S. Wilfinger, J.T. Wen, and S. Murphy, “Integral force control with robustness enhancement,” IEEE Control Systems Magazine, 14(1), pages 31-40, 1994.

[57] M. Takegaki and S. Arimoto, “A new feedback method for dynamic control of manipulators,” ASME J. Dynamic Systems, Measurement and Control, 103, pages 119-125, 1981.

[58] S. Arimoto and F. Miyazaki, “Stability and robustness of PID feedback control for robot manipulators of sensory capability,” in Proceedings of the 1st International Symposium of Robotics Research, pages 783-799, 1983.

[59] J. Luh, M. Walker, and R. Paul, “Resolved acceleration control of mechanical manipulators,” IEEE Transactions on Automatic Control, 25, pages 468-474, 1980.

[60] R. Paul, Robot manipulators: Mathematics, programming, and control, Cambridge, MIT Press, 1981.

[61] A. Bryson and Y.C. Ho, Applied Optimal Control , Halsted, 1975.

[62] E. Freund, “Fast nonlinear control with arbitrary pole-placement for industrial robots and manipulators,” International Journal of Robotics Research, 1, pages 65-78, 1982.

[63] K. Kreutz, “On manipulator control by exact linearization,” IEEE Transactions on Automatic Control, 34, pages 763-767, 1989.

[64] G.L. Luo, G.N. Saridis, and C.Z. Wang, “A dual-mode control for robotic manipulators,” In Proceeding of IEEE conference on Decision Control, pages 409-414, 1986.

[65] R. Johansson, “Quadratic optimization of motion coordination and control,” IEEE Transactions on Automatic Control, 35(11), pages 1197-1208, 1990.

[66] J.T.Wen, “A unified perspective on robot: The energy Lyapunov function approach,” International Journal of Adaptive Control and Signal Processing, 4(6), pages 487-500, 1990.

[67] F. Lin and R.D. Brandt, “An optimal control approach to robust control of robot manipulators,” IEEE Transactions on Robotics and Automation, 14(1), pages 69-77, 1998.

[68] C. Abdallah, D. Dawson, P. Dorato, and M. Jamshidi, “Survey of robust control for rigid robots,” IEEE Control Systems Magazine, 11(2), pages 24-30, 1991.

[69] H.G. Sage, M.F. De Mathelin, and E. Ostertag, “Robust control of robot manipulators: a survey,” International Journal of Control, 72(16), pages 1498-1522, 1999.

[70] H. Berghuis, P. Lohnberg, and H. Nijmeijer, “Tracking control of robots using only position measurements,” in Proceeding of Conference on IEEE Decision and Control, pages 1039-1040, 1991.

[71] B. Bona and M. Indri, “Analysis and implementation of observers for robotic manipulators,” in Proceeding of the IEEE International Conference on Robotics and Automation, pages 3006-3011, 1998.

[72] N. Hogan, “Impedance control: An approach to manipulation, Parts I-III,” ASME Journal of Dynamic Systems, Measurement, and Control, 107, pages 1-24, 1985.

[73] M. Spong and M. Vidyasagar, Robot dynamics and control, John Wiley and Sons, New York, 1989.

[74] B. Waibel and H. Kazerooni, “Theory and experiments on the stability of robot compliance control,” IEEE Transactions on Robotics and Automation, 7(1), pages 95-104, 1991

[75] M. Raibert and J. Craig, “Hybrid position/force control of manipulators,” ASME Journal of Dynamic Systems, Measurement, and Control, 103, pages 126-133, 1981.

[76] T. Yoshikawa, “Dynamics hybrid position/force control of robot manipulators- description of hand constraints and calculation of joints driving forces,” in Proceeding of the IEEE International Conference on Robotics and Automation, pages 1393-1398, 1986.

[77] O. Khatib, “A unified approach for motion and force control of robot manipulators: The operational space formulation,” IEEE Journal of Robotics and Automation, pages 43-53, 1987.

[78] T. Yoshikawa, “Force control of robot manipulators,” in Proceeding of the IEEE International Conference on Robotics and Automation, pages 220-226, 2000.

[79] S. Chiaverini and L. Sciavicco, “The parallel approach to force/position control of robotic manipulators,” IEEE Transactions on Robotics and Automation, 9(4), pages 361-373, 1993.

[80] L.S. Wilfinger, J.T. Wen, and S. Murphy, “Integral force control with robustness enhancement,” in Proceeding of the IEEE International Conference on Robotics and Automation, pages 100-106, 1994.

[81] N. Mandal and S. Payandeh, “Force control strategies for compliant and stiff contact: An experimental study,” in IEEE International Conference on Systems, Man, and Cybernetics, 2, pages 1285-1290, 1994.

[82] U.S. Department of Health and Human Services, Guidance for surgical suture, 2000.

[83] S.Y. Lai, “Sutures and needles,” eMedicine Journal, 2(9), 2001.

[84] G. Liu, K. Iagnemma, S. Dubowsky, and G. Morel, “A base force/torque sensor approach to robot manipulator inertial parameter estimation,” in Proceeding of the IEEE International Conference on Robotics and Automation, pages 3316-3321, 1998.

[85] J. Phillips, A. Ladd, and L.E. Kavraki, “Simulated knot tying,” in Proceeding of the IEEE International Conference on Robotics and Automation, 2002.

[86] B. Hannaford, J. Trujillo, M. Sinanan, M. Moreyra, J. Rosen, J. Brown, R. Leuschke, and M. MacFarlane, “Computerized endoscopic surgical grasper,” in Proceeding of Medicine Meets Virtual Reality, pages 111-117, 1998.

[87] G. Moy, C. Wagner, and R.S. Fearing, “A compliant tactile display for teletaction,” in Proceeding of the IEEE International Conference on Robotics and Automation, pages 3309-3415, 2000.

[88] J. Rosen, B. Hannaford, C.G. Richards, and M.N. Sinanan, “Markov modeling of minimally invasive surgery based on tool/tissue interaction and force/torque signatures for evaluating surgical skills,” IEEE Transactions on Biomedical Engineering, 48(4), pages 579-591, 2001.

[89] K. Montgomery, L. Heinrichs, C. Bruyns, S. Wildermuth, C. Hasser, S. Ozenne, and D. Bailey, “Surgical simulator for hysteroscopy: a case study of visualization in surgical training,” in Proceedings of Visualization, pages 449-452, 2001.

[90] C. Sharma and T. Kesavadas, “Investigation of haptic framework for quantitative design analysis in a virtual environment,” in Proceedings of the International Conference on Virtual Systems and Multimedia, pages 844-853, 2001.

[91] Y. Nakamura, K. Kishi, and H. Kawakami, “Heartbeat synchronization for robotic cardiac surgery,” in Proceeding of the IEEE International Conference on Robotics and Automation, pages 2014-2019, 2001.

[92] F. Devernay, F. Mourgues, and E. Coste-Maniere, “Towards endoscopic augmented reality for robotically assisted minimally invasive cardiac surgery,” in Proceedings of the International Workshop on Medical Imaging and Augmented Reality, pages 16-20, 2001.

APPENDIX A
INVERSE KINEMATICS USING SUBPROBLEMS

Since the EndoBot has a simple kinematic configuration, the inverse kinematics can be solved with conventional algebraic or geometric methods. For consistency, however, the exponentials formula is used here. The exponentials formula solves the inverse kinematics problem with the Paden-Kahan subproblems, whose solutions are known [38] and have a geometric meaning. The inverse kinematics can be solved in the following steps:

Step 1 (solve for the translational distance, q4) From the forward kinematics derivation above, a given position vector from the base frame to tool frame

p_{0T} has the following form:

p_{0T} = e^{\hat{h}_2 q_2} e^{\hat{h}_1 q_1} e^{\hat{h}_3 q_3} p_{34},   (A.1)

where p_{34} = q_4 h_3. Since the position vector p_{34} lies on the axis of h_3, it yields

p_{0T} = e^{\hat{h}_2 q_2} e^{\hat{h}_1 q_1} p_{34}.   (A.2)

Taking the norm, it gives the translational distance, q4:

\| p_{0T} \| = \| e^{\hat{h}_2 q_2} e^{\hat{h}_1 q_1} p_{34} \| = \| p_{34} \| = q_4.   (A.3)

Step 2 (solve for the first two angles, q1 and q2) Since q4 is known and h1 and h2 intersect, (A.2) can be solved using Subproblem 2 (rotation about two subsequent axes), which gives q1 and q2. Due to the multiple-solution property of Subproblem 2, two solutions may exist, but the correct one can easily be determined from working range considerations.

Step 3 (solve for the roll angle, q3) The joint angle of axis 3, q3, is simply the roll angle φ.

APPENDIX B
LAGRANGIAN FOR A ROBOT MANIPULATOR

B.1 Kinetic Energy of a Robot Manipulator

To calculate the kinetic energy of a robot manipulator, consider the i-th link in Figure B.1. Define the coordinate frame O_i attached to the i-th joint. Let h_i

Figure B.1: Kinetic energy of a rigid body.

be the unit vector of the i-th rotational axis, c_i represent the center of mass of the i-th link, p_i^c denote the position vector of the i-th center of mass from the i-th coordinate frame expressed in O_i, I_i^c be the inertia tensor of the i-th link expressed in the frame attached at the center of mass, and m_i be the mass of the i-th link. The kinetic energy of the i-th link is

T_i = \frac{1}{2} \begin{bmatrix} \omega_i \\ v_i \end{bmatrix}^T \underbrace{\begin{bmatrix} I_i & m_i \hat{p}_i^c \\ -m_i \hat{p}_i^c & m_i I \end{bmatrix}}_{M_i} \begin{bmatrix} \omega_i \\ v_i \end{bmatrix},   (B.1)

where I_i = I_i^c + m_i ( p_i^c (p_i^c)^T - \|p_i^c\|^2 I ) denotes the inertia tensor of the i-th link expressed in the O_i frame, \omega_i is the instantaneous angular velocity and v_i the linear velocity of the i-th link with respect to the instantaneous i-th coordinate frame, and M_i is the generalized inertia matrix for the i-th link. The total kinetic energy of the manipulator with n joints is

T = \sum_{i=1}^{n} T_i.   (B.2)


To represent the kinetic energy with the generalized coordinates, the instantaneous angular velocity and the linear velocity can be written as

\begin{bmatrix} \omega_i \\ v_i \end{bmatrix} = J_i(q)\, \dot{q},   (B.3)

where J_i(q) is the partial Jacobian:

J_i(q) = \begin{bmatrix} h_1 & \cdots & h_{i-1} & h_i & 0 & \cdots & 0 \\ \hat{h}_1 p_{1i} & \cdots & \hat{h}_{i-1} p_{i-1,i} & 0 & 0 & \cdots & 0 \end{bmatrix},   (B.4)

where all vectors are represented in the i-th frame. Then the total kinetic energy of the manipulator is given by

T = \frac{1}{2} \sum_{i=1}^{n} \dot{q}^T J_i^T(q) M_i J_i(q) \dot{q} = \frac{1}{2} \dot{q}^T \underbrace{\Big( \sum_{i=1}^{n} J_i^T(q) M_i J_i(q) \Big)}_{M(q)} \dot{q}.   (B.5)

This gives

T = \frac{1}{2} \dot{q}^T M(q) \dot{q},   (B.6)

where M(q) \in \mathbb{R}^{n \times n} is the symmetric, positive definite mass inertia matrix.
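As a small numerical illustration of (B.5)-(B.6) (a sketch only; the per-link partial Jacobians and generalized inertia matrices are assumed to be supplied by the kinematic and dynamic model, and the function names are arbitrary):

```python
import numpy as np

def generalized_inertia(J_list, M_list):
    """Assemble M(q) = sum_i J_i(q)^T M_i J_i(q) from per-link partial
    Jacobians J_i (6 x n) and generalized link inertias M_i (6 x 6), as in (B.5)."""
    n = J_list[0].shape[1]
    M = np.zeros((n, n))
    for J_i, M_i in zip(J_list, M_list):
        M += J_i.T @ M_i @ J_i
    return M

def kinetic_energy(M, qdot):
    """Kinetic energy T = 0.5 * qdot^T M(q) qdot, as in (B.6)."""
    qdot = np.asarray(qdot, dtype=float)
    return 0.5 * qdot @ M @ qdot
```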

B.2 Potential Energy of a Robot Manipulator

The potential energy of the i-th link is

P_i = m_i g^T (p_{0i} + p_i^c)_0,   (B.7)

where (p_{0i} + p_i^c)_0 represents the height of the i-th center of mass in the base frame and g = [0\; 0\; -9.8]^T m/s^2 is the gravity vector. Then the total potential energy is

P = \sum_{i=1}^{n} m_i g^T (p_{0i} + p_i^c)_0.   (B.8)

APPENDIX C
LEAST SQUARE METHOD

The equation of motion of a robot manipulator, evaluated at m measurement time instants t_1, \ldots, t_m, can be written as

\bar{\tau} = \bar{\phi}\,\theta + \bar{\varepsilon},   (C.1)

where \bar{\phi} \in \mathbb{R}^{m \times n} is the matrix in which each row is the regressor at one time instant, \bar{\tau} \in \mathbb{R}^{m} is the vector of \tau(t_k), and \bar{\varepsilon} \in \mathbb{R}^{m} is the vector of \varepsilon(t_k). The least squares solution can be interpreted as a minimum error norm solution through the orthogonal mapping of vector spaces as shown in Figure C.1. The matrix

Figure C.1: Orthogonal decomposition.

\bar{\phi} takes a vector \theta into the column space R(\bar{\phi}), but due to noise and unmodeled dynamics, \bar{\tau} lies outside the column space of \bar{\phi}. Consequently, there exists no exact solution for \theta. Let \hat{\theta} be a fictitious solution and p the mapped vector p = \bar{\phi}\hat{\theta}. Then p is in the column space of \bar{\phi} and the error \bar{\varepsilon} can be drawn as shown in Figure C.2. From the orthogonal decomposition diagram, there is always a


Figure C.2: Interpretation of the least squares solution.

minimum error norm solution when

\bar{\varepsilon} = \bar{\tau} - \bar{\phi}\theta^{*} \in N(\bar{\phi}^{T}).   (C.2)

That is, the error vector is annihilated by \bar{\phi}^{T}:

\bar{\phi}^{T} (\bar{\tau} - \bar{\phi}\theta^{*}) = 0.   (C.3)

This becomes

\bar{\phi}^{T}\bar{\phi}\,\theta^{*} = \bar{\phi}^{T}\bar{\tau}.   (C.4)

The least squares solution is given by

\theta^{*}_{LS} = (\bar{\phi}^{T}\bar{\phi})^{-1}\bar{\phi}^{T}\bar{\tau}.   (C.5)
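For completeness, (C.5) is straightforward to evaluate numerically; the sketch below is illustrative only and uses np.linalg.lstsq rather than forming the inverse explicitly, which is better conditioned (the regressor dimensions mentioned are hypothetical).

```python
import numpy as np

def identify_parameters(phi_bar, tau_bar):
    """Least squares estimate of theta from tau = phi * theta + eps, as in (C.5)."""
    theta_ls, _, _, _ = np.linalg.lstsq(phi_bar, tau_bar, rcond=None)
    return theta_ls

# usage with a hypothetical 1000 x 8 regressor matrix and torque vector
# theta = identify_parameters(phi_bar, tau_bar)
```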

APPENDIX D
PERSISTENT EXCITATION AND OBSERVABILITY GRAMIAN

A regression matrix \bar{\phi} is called persistently exciting if there exist positive constants

\alpha_1, \alpha_2, and \delta such that

\alpha_1 I \le \int_{t}^{t+\delta} \bar{\phi}^{T}\bar{\phi}\, d\tau \le \alpha_2 I.   (D.1)

This condition can be interpreted as a uniform complete observability condition through the observability Gramian. Consider the time-varying system with no inputs:

\dot{x}(t) = A(t)x(t),   x(0) = x_0   (D.2)

y(t) = C(t)x(t)   (D.3)

and let \Phi(t, t_0) be the transition matrix:

\frac{d}{dt}\Phi(t, t_0) = A(t)\Phi(t, t_0).   (D.4)

Define the observability operator L_o : \mathbb{R}^{n} \longrightarrow L_2 by

L_o : x_0 \mapsto C(t)\Phi(t, t_0)x_0.   (D.5)

With this observability operator,

y(t)=Lox0. (D.6)

The observability problem is to determine the initial condition x_0 by observing y(t) and L_o, just as the unknown parameters are identified from \bar{\tau} and \bar{\phi} in the identification process. The system is said to be uniformly completely observable if the observability

Gramian is uniformly bounded and positive definite:

\alpha_1 I \le \int_{t_0}^{t_f} L_o^{T} L_o\, d\tau = \int_{t_0}^{t_f} \Phi^{T} C^{T} C \Phi\, d\tau \le \alpha_2 I.   (D.7)

For a discrete-time invariant system, the observability Gramian can be given by

G_o = O^{T} O,   (D.8)

where O represents the observability matrix. This discrete-time observability Gramian can be interpreted as the persistent excitation matrix in system identification: the system is observable if and only if the observability Gramian is nonsingular, and the unknown parameters can be identified if and only if the persistent excitation matrix is nonsingular.
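As a small illustrative check of this identifiability condition (a sketch under the assumption that the sampled regressor rows are stacked into one matrix; the tolerance and example data are arbitrary):

```python
import numpy as np

def is_persistently_exciting(phi_bar, tol=1e-8):
    """Discrete analogue of (D.1)/(D.8): the information matrix phi^T phi must be
    nonsingular, i.e. its smallest eigenvalue bounded away from zero."""
    G = phi_bar.T @ phi_bar
    eig_min = float(np.linalg.eigvalsh(G).min())
    return eig_min > tol, eig_min

# a regressor with linearly dependent columns is not persistently exciting
phi = np.column_stack([np.ones(100), np.ones(100)])
print(is_persistently_exciting(phi))     # (False, ~0)
```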

APPENDIX E
DIFFEOMORPHISM

Diffeomorphism A map between manifolds is called a homeomorphism if it is bijective (one-to-one and onto), continuous, and has a continuous inverse. A homeomorphism which is differentiable and has a differentiable inverse is a diffeomorphism.

Bijective A transformation which is one-to-one and onto is called a bijection. In other words, it allows every state to be accessed (onto) and has precisely one pre-image for each state (one-to-one).

• A function f from A to B is called one-to-one (or injection) if ∀x, y ∈ A (f(x)=f(y)) → x = y.

Figure E.1: One-to-one and not onto

• A function f from A to B is called onto (or surjection) if (∀y ∈ B ∃x ∈ A) f(x)=y.

Figure E.2: Onto not one-to-one


• A function is called a bijection if it is both an injection and surjection.

Figure E.3: One-to-one and onto