ALGORITHMS FOR INTELLIGENT ROBOTIC

SURGICAL SYSTEMS

by

RUSSELL C JACKSON

Submitted in partial fulfillment of the requirements

for the degree of Doctor of Philosophy

Department of Electrical Engineering and Computer Science

CASE WESTERN RESERVE UNIVERSITY

January, 2016

CASE WESTERN RESERVE UNIVERSITY

SCHOOL OF GRADUATE STUDIES

We hereby approve the dissertation of Russell C Jackson

candidate for the degree of Doctor of Philosophy*.

Committee Chair M. Cenk Çavuşoğlu PhD

Committee Member Wyatt Newman PhD

Committee Member Roger D Quinn PhD

Committee Member Gregory S Lee PhD

Committee Member David Wilson PhD

Date of Defense

September 24, 2015

*We also certify that written approval has been obtained

for any proprietary material contained therein.

Copyright © 2015 Russell C Jackson

All rights reserved.

To my dad, Jay Jackson, who inspired me to become an engineer, and also to my wife, Michele Mumaw, who supported and encouraged me through all the ups and downs of graduate school.

Contents

Contents
List of Tables
List of Figures
Acknowledgments
Abstract

1 Introduction
  1.1 Surgical Subtask Automation
  1.2 Contributions
  1.3 Dissertation Outline

2 Surgical Automation
  2.1 Robotic Automation
    2.1.1 Surgical Subtask Automation
  2.2 Robot-Environment Interaction
    2.2.1 Perception
    2.2.2 Planning
    2.2.3 Control

3 Real-Time Visual Tracking of Dynamic Surgical Suture Threads
  3.1 Introduction
  3.2 Literature Review
  3.3 Suture Detection
    3.3.1 Suture Thread Segmentation
  3.4 Suture Thread NURBS Model
  3.5 Initializing the Suture
  3.6 NURBS Curve Iteration
    3.6.1 Image Energy
    3.6.2 End Point Energy
    3.6.3 Point Force Action
    3.6.4 Pointwise Update
    3.6.5 3-Dimensional Deprojection
  3.7 Experimental Validation
    3.7.1 Quantitative Accuracy
    3.7.2 Calibrated Pattern Tracking
    3.7.3 Qualitative Tracking Results
  3.8 Discussion and Conclusions

4 Catadioptric Stereo Tracking for Three Dimensional Shape Measurement of MRI Guided Catheters
  4.1 Introduction
  4.2 Background
    4.2.1 Catheter Control and Tracking
    4.2.2 Catadioptric Stereo
  4.3 Catadioptric Hardware Set Up
    4.3.1 Catadioptric Calibration Process
  4.4 Experimental Validation
  4.5 Catheter Tracking
    4.5.1 Catheter Model
    4.5.2 Catheter Imaging and Tracking
  4.6 Conclusions and Future Work

5 Modeling of Needle-Tissue Interaction Forces During Surgical Suturing
  5.1 Introduction
  5.2 Needle Force Modeling
  5.3 Suture Needle Motion Model
    5.3.1 Ideal Needle Motion
    5.3.2 Non Ideal Needle Motion
    5.3.3 Area Sweep
  5.4 Suture Needle Forces
    5.4.1 Friction Forces
    5.4.2 Area Forces
    5.4.3 Cutting and Stiffness Forces
    5.4.4 Torque Calculations
  5.5 Results
    5.5.1 Experimental Methods
    5.5.2 Force Data Post Processing
    5.5.3 Measured Force Data
    5.5.4 Parameter Fitting
  5.6 Conclusions and Future Work

6 Needle Path Planning for Autonomous Robotic Surgical Suturing
  6.1 Introduction
  6.2 Best Practices of Suturing
    6.2.1 Quantification of the Suturing Guidelines
  6.3 Needle Path Planning Algorithm
    6.3.1 Needle Approach
    6.3.2 Needle Bite
    6.3.3 Needle Reorientation
    6.3.4 Needle Regrasping
    6.3.5 Needle Follow Through
    6.3.6 Needle Path Input List
  6.4 Empirical Path Evaluation
    6.4.1 Needle Drive Results
  6.5 Results and Discussion
  6.6 Conclusions and Future Work

7 Conclusions
  7.1 Suture Thread Tracking
  7.2 Catadioptric Stereo Tracking of an MRI Guided Catheter
  7.3 Suture Needle-Tissue Interaction Force Modeling
  7.4 Suture Needle Path Plan
  7.5 Future Research Problems

Bibliography

List of Tables

3.1 Suture Thread Length Table
3.2 Calibration Curve Results Summary
4.1 Test Pattern Tracking Accuracy
4.2 Tracked Catheter Length
6.1 Needle Drive Parameters
6.2 Needle Drive Force Summary

List of Figures

1.1 MIS Patient Portal Constraint
1.2 MIS Surgical Tool
1.3 Prototype RAMIS Wrist [10]
1.4 da Vinci Si System © 2009 Intuitive Surgical Inc.
2.1 Suture Needle Kit
2.2 Medical Force Sensor
2.3 State Action Transition
3.1 Suture Needle and Thread
3.2 Thin Feature Segmentation
3.3 Segmented Region Growth
3.4 Suture Model Overlay
3.5 XY-Linear Stage
3.6 Thread Calibration Pattern
3.7 Calibration Pattern Overlay
3.8 Initializing Intersecting Threads
3.9 Tracking a Knot Tie
4.1 Catadioptric Mirror Geometry
4.2 Catadioptric Imaging System Components
4.3 Catadioptric MRI Diagram
4.4 Catadioptric Validation Pattern Tracking
4.5 Catheter Length Measurement
4.6 Catheter MRI View
5.1 Canonical Needle Motion
5.2 Ideal Needle Motion
5.3 Needle Motion Area Sweep
5.4 Non Ideal Needle Area Sweep
5.5 Experimental Suture Apparatus
5.6 Experimental Force Data
5.7 Linear Force Model
5.8 Torque Model
5.9 Force Model Breakdown
5.10 Torque Model Breakdown
6.1 Suture Cross Section
6.2 Needle Tissue Bite Pose
6.3 Needle Depth Calculation
6.4 Holonomic Needle Reorientation
6.5 Non Holonomic Needle Reorientation
6.6 Needle Drive Time Lapse Comparison
6.7 Holonomic Needle Drive
6.8 Non Holonomic Needle Drive
6.9 Needle Drive Forces

Acknowledgments

I would like to thank my advisor, Dr. M. Cenk Çavuşoğlu, for his guidance and patience. Whatever my new idea was, Cenk always listened, helped me focus my thoughts, and mentored me to become a better researcher. Thank you for all of the time, advice, and support that enabled me to grow into a successful robotics engineer.

I would also like to thank all of the members of the MeRCIS lab. In particular, I would like to thank Der-Lin Chow, Tipakorn Greigarn, Taoming Liu, and Mark Renfrew, who helped me with lab projects and discussed research ideas. I would also like to thank the undergraduate students whom I have mentored during my graduate student career. I hope they learned as much from me as I did from them.

Finally, I want to thank my family and friends for their support throughout my academic career. I thank my mom, Barbara Jackson, for her support and encouragement as I trained to become an engineer. My dad, Jay Jackson, while he did not get to see me attend graduate school, passed on invaluable advice that only someone who attended graduate school would know. I would also like to thank Tom and Kathy Mumaw for being supportive of my graduate studies and for their thoughtful questions, which helped me think about the potential of my research.

Most importantly, I thank my wife, Michele Mumaw, who was always there for me when I was working late, frustrated by broken software, or in need of a friend. Thank you, Michele, for attending graduate school with me and for your encouragement and wisdom that enabled me to finish.

Algorithms for Intelligent Robotic Surgical Systems

Abstract

by

RUSSELL C JACKSON

Robotic surgical assistants provide a novel way to improve the versatility and effectiveness of minimally invasive surgery, but they are still a maturing technology. These robots have many limitations, including a lack of haptic feedback, a constrained and nonintuitive workspace, and visual distortion.

It is difficult to directly address many of the above limitations because the dynamics and kinematics of the robotic arms are significantly different from those of human anatomy. The research in this dissertation aims to overcome some of these limitations by addressing problems related to the automation of surgical subtasks such as suturing.

Automating subtask completion would overcome many limitations of robotic surgical assistants and allow the surgeon to complete procedures faster and with less fatigue. Ultimately, the surgeon would rely on the robot to perform common surgical subtasks, enabling the surgeon to focus on the overall surgical procedure. This work decomposes the problem of automated surgical subtasks into three primary parts: perception, planning, and control. Advances are made in both perception and planning. The first contribution of this work is the successful tracking of dynamic suture thread using stereo imaging. The tracking is capable of operating in real time and is robust against changes in the suture thread length. Visual tracking algorithms are also used to track a Magnetic Resonance Imaging (MRI) guided catheter. The tracking, completed using a catadioptric stereo system, is required for modeling and control of the catheter.

The second contribution of this work is successful lumped modeling of the suture needle-tissue interaction forces. The lumped model is fast and accurately predicts the


forces sensed during a circular needle drive.

These lumped models are used in conjunction with best practices to plan out the needle drive trajectory. The goals of the trajectory are to successfully drive the suture needle such that the tissue trauma and scarring are minimized while the suture strength is maximized.

The contributions of this dissertation focus on automating subtasks commonly used during surgical procedures. By automating the minutiae of the surgical procedure, the robot may enable the surgeon to optimize the overall procedure without distraction.

Chapter 1

Introduction

When Minimally Invasive Surgery (MIS) was introduced in the 1980s, it represented a significant opportunity to improve surgical procedures for both the patient and the surgeon [1]. Since completing a major procedure using MIS techniques can reduce the postoperative hospital stay from 2 weeks to several hours, MIS procedures have become commonplace throughout the developed world [2]. Even though many procedures benefit from laparoscopic surgery, open procedures may still be the preferred course of treatment in some instances [3, 4, 5, 6, 7]. This is due to the disadvantages that minimally invasive procedures have when compared to open procedures. These limitations include a nonintuitive workspace as well as difficulty in assessing the tissue manipulation forces. The limitations stem from the patient portal constraint of MIS.

The portal constraint creates a pivot point that the MIS tools must pass through. A typical MIS tool, shown in Fig. 1.2, is long; a small motion at one end pivots about the portal and results in a large motion, in the opposite direction, at the other end. The portal also introduces friction, which interferes with the surgeon's ability to operate by feel. The introduction of surgical robotic assistants has mitigated some of the limitations of MIS and has led to the development of Robotically Assisted Minimally Invasive Surgery (RAMIS).


Figure 1.1: The portal constraint introduces severe limitations on the mobility of the MIS tools. Because the tissue at the portal must not be torn, the portal creates a fulcrum which limits the tool tip motion inside the body to only 4 degrees of freedom, as shown in the diagram. The four degrees of freedom consist of one translational and three rotational degrees of freedom.

Figure 1.2: A typical MIS tool is long and thin for use in MIS procedures. The length and portal constraint fulcrum mean that a small force at the tip of the tool may not be felt at the base of the tool. The limited haptic feedback, coupled with the nonintuitive workspace, necessitates cameras so that surgeons can perceive the surgical environment and act on that information.


Figure 1.3: This surgical wrist developed in the MeRCIS lab at Case Western Reserve University [10] has an extra two degrees of freedom in the form of a universal joint on the distal end of the tool shaft.

There are many advantages that robots have over human operators. In addition to the improved dexterity inside the patient (Fig. 1.3), robots are fast, precise, and tireless [8, 9]. While surgical robots present a new opportunity to complete a wide array of complex procedures using MIS techniques, current RAMIS systems are teleoperated; consequently, many challenges of MIS are still present and limit the ubiquity of RAMIS [11]. Some procedures take a significantly longer time to complete when using a RAMIS system when compared to open surgery. The most significant technical limitations of RAMIS include a lack of haptic feedback, limited motion dexterity, a nonintuitive human-machine interface, and limited situational awareness of the surgical environment. In some instances, complications may even necessitate conversion of the RAMIS procedure to an open procedure. The process of removing the robot from the surgical setting can add a significant amount of time to the procedure. Another factor that limits deployment of surgical robotic assistants is the widely held perception among surgeons and hospitals that the current state of the art in robotic surgery does not justify its cost when compared to conventional methods. Unless there is a


Figure 1.4: The da Vinci Si System, introduced in 2009, is a very common RAMIS platform. © 2009 Intuitive Surgical Inc.

significant reason to adopt surgical robots, surgeons are unlikely to accept the paradigm shift that would be required in order to complete a procedure utilizing a robotic assistant instead of more conventional techniques. Since the current state of the art in RAMIS still has some significant limitations, there is little reason to adopt expensive robots when they do not yet justify their cost. Due to the significant regulatory hurdles and intellectual property restrictions, only two RAMIS systems have been FDA approved and made available in the United States, namely the da Vinci (Intuitive Surgical Inc., Sunnyvale, CA USA) and the ZEUS (discontinued and decertified) robotic systems (formerly Computer Motion Inc., bought by Intuitive Surgical Inc., Sunnyvale, CA USA) [12]. The Intuitive da Vinci Si system shown in Fig. 1.4 is one of the most widely deployed RAMIS systems available. While currently there are few active surgical robots in the healthcare industry, many other solutions and products are under active development and may enter the market in the next few years [13]. Some examples of surgical robots include the Single Port Orifice Robotic Technology (SPORT™) Surgical System (Titan Medical


Inc., Toronto, Ontario, Canada) as well as the RIO Robotic Arm Interactive System (Mako Surgical Corp., Ft. Lauderdale, Florida, U.S.A.). Many surgical robots share similar kinematics that allow them to satisfy the patient portal constraint while increasing the degrees of freedom of the tool tip from a constrained 4-degree-of-freedom subspace of the Special Euclidean Group SE(3) to the full 6 degrees of freedom of SE(3). While these early systems are successful because of their increased dexterity when compared to traditional MIS, they suffer from many of the drawbacks that one would associate with novel technology. In particular, surgeons often find the telesurgical interface to be particularly challenging.

Human operators are superior at many aspects of the surgical procedure (e.g., visual perception, cognition, and a deep understanding of the objectives of the procedure) while robots are very precise and repeatable. The greatest limitation suffered by RAMIS systems is that they do not significantly synergize the skills of the robotic system with those of the surgeon. Ideally, a fully integrated human-machine surgical system would leverage the advantages of human cognitive skills with machine precision to create a surgical solution that is adaptable, precise, and fast. This will be accomplished if the robot is transformed from a passive surgical device to an active surgical assistant that is capable of completing surgical subtasks while allowing the surgeon to focus on different aspects of the current procedure.

1.1 Surgical Subtask Automation

While fully autonomous robotic surgery is still considered science fiction, automating subtasks during RAMIS is an actively researched problem that could see significant breakthroughs within the next few years. Shifting the focus of medical robots from the current paradigm towards that of competent assistance combines the robots’ skills with those of the operating surgeons. This also allows robots to actively mitigate the


limitations that MIS and RAMIS impose on highly trained surgeons. Pairing the best that robots have to offer with the best that humans have to offer in the healthcare industry has the potential to transform long, complex procedures into simple outpatient procedures. The number one bottleneck of RAMIS procedures is the procedure length. If a robot enables significantly faster procedures without decreasing their effectiveness, then the robot would rapidly gain more acceptance [14]. This is because faster procedures significantly reduce the possibility of postoperative complications. Additionally, the improved speed of the procedure will decrease surgeon fatigue and may allow surgeons to increase their throughput.

Robots have already demonstrated their utility in mass production facilities where robots can help construct cars thousands of times over the production run of a particular model. The robots are effective because the production space is designed around the robot (i.e., the car parts are specified to be at certain locations and are guaranteed to be at those locations during construction). This tight environmental control is not available inside the patient. Organs may be different shapes, tissues may differ in stiffness, and the exact patient portal location may also vary. Any autonomous subtask that a robot attempts must be robust to the significant variation and sudden obstacles that will occur during subtask execution. The surgeon can indicate their desires to the robot in order to initiate these subtasks, but the robot must then be flexible and adaptable in order to complete them. While surgical task automation may seem unrealistic, Laser-Assisted in situ Keratomileusis (LASIK) procedures are one example of a medical procedure that already uses robotic automation. During LASIK, surgeons use computer processing to help plan and complete the procedure [15]. Since surgical automation has already proven its usefulness in one procedure, automation can be expanded and diversified into other healthcare applications.


The surgeon knows what needs to be done during specific procedures. It is more

important that the robot understands how to complete subtasks that are common across surgeries than to understand the specifics of each procedure. If a clear communication channel exists between the surgeon and the robot, the surgeon can tell the robot what needs to be done while the robot can optimize the task trajectory in order to perform it quickly, precisely, and robustly. Some examples of these subtasks include automated suture needle driving, automated needle insertion, automated suture tie-offs, and automated tissue retraction. The capabilities required to complete these tasks under guidance are being researched and discovered. The surgeon can then lead the robot while allowing the robot to complete the actual surgical subtasks.

1.2 Contributions

This dissertation expands the domain of surgical subtask automation in the following areas: i) addressing the problem of tracking surgical suture to assist in automated knot tying and suturing; ii) tracking a cardiac catheter as it deflects inside a Magnetic Resonance Imaging machine using a catadioptric stereo camera imaging system; iii) modeling suture needle-tissue interaction forces and torques; iv) creating algorithms that can generate suture needle motion plans based on best practices of suturing. The techniques and methods developed address current gaps in RAMIS. Applying these algorithms to surgical robots will bring RAMIS one step closer to semi-autonomy, making surgical robots an indispensable component of modern healthcare facilities.

1.3 Dissertation Outline

The remainder of the dissertation is laid out as follows: Chapter 2 gives an overview of the technical problems associated with autonomous surgical robots as well as potential solutions and systems that have been developed. The algorithms developed in Chapter 3 demonstrate successful tracking of suture threads in a dynamic environment. A Magnetic Resonance Imaging machine compatible catadioptric stereo tracking system is used to track the deflection of a magnetically guided catheter in Chapter 4. The force models developed in Chapter 5 are used to predict the outcomes of surgical suture needle drives. These force models are used in conjunction with best practices in Chapter 6 to develop suture needle path plans. This work concludes with a discussion of the next steps required for the successful introduction and adoption of semi-autonomous surgical robots.

Chapter 2

Surgical Automation

Despite their limitations, MIS and RAMIS spurred advances in surgery and improved the outcomes of many surgical procedures. As human-robot interaction is a rapidly expanding field, many researchers are exploring methods and ideas that will improve the utility of robots in a surgical setting. Automating surgical subtasks is one approach that will greatly improve the versatility and relevance of robots in a surgical setting. This chapter introduces the background behind task automation and discusses the progress that has been made towards automating surgical subtasks, with a particular emphasis on frameworks and methodologies.

2.1 Robotic Automation

The concept of a robot started with automatons: simple machines that would perform simple tasks. The word automation shares the same etymological history with automaton, and it is no surprise that one of the major worldwide applications of robots is automation. Robots have been used to automate industrial machining, assembly, and navigation, and even to win games like chess.

In order to function autonomously, robots must be capable of interacting with their environment in a repeatable fashion (e.g., put a bolt in the same spot on a car


frame once a minute for the life of the production run). In order to generate this repeatable motion, all robot automation must generate a path plan. This path plan can be generated through preplanning or instruction. Once a path plan is generated, it is also possible to optimize it online as new information becomes available to the robot. The remainder of this chapter discusses the work that has been completed on task automation with an emphasis on applications to surgical subtask execution.

2.1.1 Surgical Subtask Automation

Surgical robots are controlled through teleoperation, which severely restricts the amount of assistance that these so-called robotic "assistants" can provide. While it is true that the robot will automatically enforce portal compliance, the surgeon still has to perform all of the robotic motions themselves. Automating completion of surgical subtasks will transition the robotic systems from the master-slave interface to a more active apprenticeship role. In many ways, automating surgical subtasks is more demanding because of the higher uncertainty as well as the stringent safety requirements of medical devices [9, 8]. In addition to regulatory hurdles, surgical robots must prove themselves assets that hospitals will buy, doctors and surgeons will use, patients will want, and insurance companies will pay for. Full surgical automation will not happen in the foreseeable future, nor is it realistic to expect that such a disruptive technology would rapidly gain widespread acceptance. While full autonomy is considered to be the purview of science fiction, a more feasible goal of surgical robotic research is to automate subtasks which are common across many surgical procedures. Bonfe et al. laid foundational groundwork that can be used to generate surgical subtask primitives that could be automated [16]. In this formulation, the robotic system was decomposed into three different sections:


Surgical Interface (SI), which links the surgeon and robot; Robot Control (RC), which directly controls the robot end effectors; and finally Sensing Reasoning (SR) and Situational Awareness (SA), which identify faults or events during the procedure. Any subtask undergoing automation must be able to address these different sections during subtask execution.

Some common surgical subtasks include tissue retraction, surgical suturing, and needle targeting. Tissue retraction involves moving healthy tissue out of the way in order to operate on the non-healthy tissue. Surgical suturing (i.e., stitching) is where the surgeon repairs damaged tissue and closes the patient's wounds. Needle targeting is where a long needle is inserted into the tissue in order to either remove tissue for a biopsy or deliver therapies to a specific location. All

of the tasks above are important in the automation of robotic surgical assistants. Automating these subtasks has the potential to revolutionize surgical procedures because it will allow the robot to plan out and execute subtasks effectively while enabling the surgeon to focus on the procedure objectives instead of the minutiae of the subtasks.

When starting a surgical procedure, the surgeon will often need to perform tissue retraction. The operation target during a procedure is often behind healthy tissue, and in order to operate on the unhealthy tissue, the healthy tissue must be moved out of the way. In the case of open heart surgery, this is done by separating the ribs in order to access the heart. During an appendectomy, healthy intestines may need to be moved in order to access the appendix. Careless tissue manipulation can damage the otherwise healthy tissue. Additionally, tissue retraction may also involve cutting through tissue membranes so that they may be moved out of the way. High quality modeling and planning of the tissue cutting will enable the robot to accomplish necessary retractions effectively while minimizing patient trauma. Automating tissue retraction could reduce tissue damage by using models of the material properties and deformation in order to generate the appropriate plan that minimizes both the cutting and stress applied to the tissue. This will likely reduce the healing time of the retraction site.

Once the healthy tissue is moved out of the way, the surgeon, with the assistance of the robot, can perform the next step of the procedure, which often requires suturing. Sutures are used to re-attach ligaments, graft blood vessels, and close the access wounds upon completion

of the procedure. Suture needles are often purchased as prepackaged kits; an example of a suture needle kit is shown in Fig. 2.1. Surgical suturing is a common task that may be completed many times over the course of a RAMIS procedure. Successful autonomous surgical suturing encompasses many facets of robot control and perception.

There are many approaches taken by researchers and private companies with regards to surgical suturing. Iyer et al. demonstrated that it is possible, using fiducial markers, to complete an automated circular needle drive [17]. In this demonstration, a robotic arm was used to drive a needle with no thread attached. While this work utilized a laparoscopic needle holder, the actual needle drive does not obey the patient portal constraint as outlined by Nageotte et al. [18]. In an earlier study, Nageotte et al. [19] presented a path planning method for driving a laparoscopic suture needle through tissue membranes using a limited degree of freedom laparoscopic instrument. During needle driving, it is important to minimize the tissue trauma that is caused by the needle. To that end, researchers have studied the forces generated by surgeons during suturing so that they may be better modeled [20, 21]. It is also important to understand how surgeons currently drive needles so that the needle drive plan results in a robust suture [22]. An additional important component of suturing is the tying off of the suture thread. This can be accomplished with specialized hardware such as the endo360 (EndoEvolution, LLC, Raynham, Massachusetts, U.S.A.) [23]. The specialized hardware then enables the surgeon to complete sutures and other tasks quickly and consistently. In some situations, however, specialized hardware may not be practical to use, and in those instances generalized grippers can be used in order to complete the suture autonomously [24]. Executing an autonomous suture melds the task plan with scene tracking in order to complete the suture reliably in a variety of different environmental conditions. Aspects of surgical suture planning have been demonstrated already: automated circular needle drives [17] and suture knot tying [24] have already been completed. In order to make the above motion plans robust and repeatable during a surgical procedure, algorithms capable of tracking the thread, tissue, and needle must be used. While it isn't traditionally performed during RAMIS, there is a significant amount of needle insertion modeling using biopsy needles.

Figure 2.1: The package (top) is sealed for single use. The plastic cage (middle) keeps the suture thread from tangling and allows for quick use. The actual suture kit (bottom) has a semi-circular needle crimped onto a violet suture thread. The semicircular taper point needle is one of a wide variety of different needle types and shapes used by doctors and surgeons during their work. This particular kit was manufactured by Ethicon (part number J341).

Biopsy needles are long, thin, and flexible needles that can be used to obtain a tissue sample for testing or to deliver therapies directly into a target location. Modeling the biopsy needle insertion forces allows the robot to estimate tissue depth [25]. Additionally, the tissue forces allow the robot to estimate tissue deformation and more accurately position the needle [26]. The work described in this dissertation does


not focus on biopsy needles, but much of the work on force modeling is applicable to the modeling of suture needle insertion forces.

While the majority of this dissertation discusses laparoscopic minimally invasive surgery, there are other types of procedures that can be considered minimally invasive. Cardiac catheterization is a minimally invasive procedure where a catheter is inserted into the heart through blood vessels. As a result, only one site is needed to insert the catheter into a major artery or vein. Once the catheter is inside the heart, one of numerous different cardiac treatments can be completed. One catheter procedure that can benefit from robotic automation is cardiac ablation, a treatment for atrial fibrillation. During cardiac ablation, the catheter burns away heart muscle in order to prevent reentry pacing signals from triggering an irregular heartbeat. The quality of the ablation path is a strong indicator of the efficacy of the procedure [27]. Automating the ablation subtask would improve the consistency of the ablation and enhance the patient outlook by simultaneously maximizing the efficacy while minimizing the risk of complications. The successful completion of any of the above subtasks requires that the robot is capable of understanding the subtask's objectives as well as how to interact with the surgical environment.

2.2 Robot-Environment Interaction

Subtask execution can be expanded into three main pillars of robotics: Perception, Planning, and Control. It is important to understand how all three pillars work together in solving the overall problem. The motivation is to develop techniques that allow existing robotic assistants to autonomously complete subtasks [10, 28, 12].

Perception lays the groundwork for autonomous task completion because it helps to identify the information channels available to the robot from the surgical environment. Planning requires modeling how the environment and the robot will interact. Control is defined by the policy that the robot uses to generate each of its actions from the sensed world information.

2.2.1 Perception

Perception is analogous to sensing and enables the robotic system to understand both itself and the environment it interacts with. Sensing is required for reliable closed-loop subtask execution. There are several sensing modes available to the surgical robot; forces, computer vision, and surgeon feedback are a few examples.

Force Sensing

Some industrial robots are equipped with force sensors. When combined with a real-time controller, compliant controllers can be designed that are gentle and safe to interact with. This compliance can be used to finish parts such as turbine blades [29]. The advantage of using compliant robots is that they will adjust their path automatically so as not to tear or damage the tissue. The force sensor measurement can also be used to estimate tissue properties [30]. Sensed forces can also be used to insert haptic feedback into the remote workstation of the surgical robots [31]. The da Vinci does not currently have haptic feedback because haptic systems can be unstable, which is a significant safety hazard. Since there is no robotic automation or haptic feedback, RAMIS systems do not typically include distal force sensors on their end effectors.

Successful surgical task automation will likely require force sensors integrated inside the tools to aid in sensing the success or failure of surgical subtask execution. One example of a commercial 6-axis force sensor is shown in Fig. 2.2. This force sensor, while relatively small, is too big for use in MIS applications.


Figure 2.2: The ATI Nano17 (ATI Industrial Automation, Apex, NC USA) is one of the smallest commercial 6-axis force sensors in the world. One of its applications is for dental research. While this force sensor is only 17 mm in diameter, surgical applications often require even smaller tools to fit inside the patient entry ports. To that end, researchers have looked into designing and testing force sensors integrated into the surgical tools that are approximately 10 mm in diameter [32].

Computer Vision

While many surgical robot systems do not include active force sensing, they do include a stereo laparoscopic camera. This provides a full stereo vision system that can be used to detect and triangulate objects in 3 dimensions.

Computer vision based algorithms and techniques are capable of improving the robot’s perception of the surgical environment. Since stereo cameras are already included in commercially available systems, algorithms utilizing stereo vision can be readily integrated with current robot technology.

Many researchers have utilized computer vision to demonstrate aspects of surgical subtask automation, including suture needle tracking, visual servoing, thread tracking, and automated needle driving [33, 34, 35, 36, 17]. Objects are detected relative to the camera frame. If the object is rigid, the frame is given by the transformation G ∈ SE(3) (the Special Euclidean Group). Since the surgical environment will never be the same across multiple patients, any computer vision based algorithm must be robust and capable of functioning in a wide range of scene lighting and coloration.

An important distinction in computer vision is the difference between segmentation and tracking. Segmentation highlights a feature of interest based on the image properties or object model, while tracking requires identifying the location of the object image. The object model is then tracked as it moves in the image sequence.
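To make the segmentation half of this split concrete, the sketch below runs a single edge-based pass over one camera frame. It is a minimal illustration only, not the segmentation method developed in this dissertation; the OpenCV calls are standard, but the blur kernel and Canny thresholds are illustrative assumptions.

```python
import cv2

def segment_thin_features(bgr_frame, low_thresh=50, high_thresh=150):
    """Label candidate thin structures (e.g., suture thread) in one frame.

    Segmentation only highlights pixels of interest; associating those pixels
    with an object model across an image sequence is the tracking problem.
    """
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    # Light smoothing suppresses specular speckle from the endoscope light source.
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    return cv2.Canny(gray, low_thresh, high_thresh)
```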

There are many different methods available for image segmentation. Between simple edge detectors such as the Sobel or Canny edge detector, color segmentation, and more complex feature detectors such as the 'vesselness' detector laid out by Frangi et al., there is no shortage of segmentation approaches [37, 38, 39, 40]. Special care must be used in the surgical environment because of limitations in illumination and environmental coloring. There is no ambient lighting, so the laparoscopic camera includes its own light source, which creates hard shadows and reflections that hinder segmentation [41, 42]. In addition to poor lighting, objects under detection might change color if they get blood on them. One mechanism to improve segmentation is to use the tracked object model as a cue in order to aid in the segmentation and tracking [43].

Object tracking using geometric models allows the vision system to naturally track an object and realistically predict the results of manipulation. An object being tracked by a camera in 3D space can be associated with some geometric model [44]. The image of the model can be computed using the transform from the rigid object frame to the camera frame. Once the object model is in camera coordinates, the model is projected into the camera using the pinhole camera model. The visible object points then create an image of the object in the camera frame. Additionally, the Jacobian of the object points in the camera frame can be computed with respect to the transform G_lo. Using the projected point information and derivative, rigid object trackers can be used to help localize the objects in the camera image. Kalman filters are an example of a robust stochastic tracking method that can be applied to tracking rigid objects in the camera frame [45].

Since some objects in the surgical environment are deformable (e.g., tissue), other mapping techniques need to be utilized in order to align object models to the current object image. One alignment technique commonly used for matching medical images to laparoscopic images is image registration. There are many ways to complete image registration, which is commonly used to align and fuse different medical images together (e.g., align MRI images with CT images, or a patient's head scan with a reference scan for anatomical identification) [46]. Tracking a non-rigid object model can become intractable due to the large number of degrees of freedom needed to model both an object's shape and its location in space. Non Uniform Rational B-Spline (NURBS) models are one relatively low-dimensional technique used to model deformable objects [47].

Due to the immense computational complexity that many of the above algorithms present, specialized approaches must be explored such that the image processing can be computed in real time for online use inside the patient. Since many image processing and segmentation operations are highly parallelizable, utilization of simplified parallel processors such as graphics processors enables much faster operation of many vision based algorithms [48].

B-Spline (NURBS) models are one relatively low dimension technique used to model deformable objects [47] Due to the immense computational complexity that many of the above algorithms present, specialized approaches must be explored such that the image processing can be computed in real time for online use inside the patient. Since many image pro- cessing and segmentation operations are highly parallelizable, utilization of simplified parallel processors such as graphic processors enable much faster operation of many vision based algorithms [48].

Other Sensing Modes

While force sensing and computer vision comprise the bulk of the surgical robots' sensing capabilities, there are many other modalities that can be leveraged by a semi-autonomous surgical system. Some examples include direct interaction with the surgeon and their assistants [49]. Depending on the needs of certain applications, inclusion of patient vitals (e.g., heart rate, blood pressure, temperature) could allow the robot to actively inform the surgeon of impending complications. As an example, Slater et al. showed that a cerebral oximeter can be used during a procedure to help predict cognitive impairment in cardiac patients postoperatively [50]. Advances in artificial intelligence and machine learning could allow surgical robots to sense and predict upcoming complications, inform the surgeon, and recommend a course of action based on simply sensing the patient vitals during a procedure. If a surgical robot is able to thoroughly sense the surgical environment, then it can generate an effective and efficient motion plan that maximizes measurable criteria for success.

2.2.2 Planning

Once the surgical environment is modeled and detected, the suturing plan can be generated. This plan is based on the best a priori estimate of how the environment will respond to robotic motion. This includes tissue force and deformation modeling. A common area of tissue force modeling is needle-tissue interaction forces.

There has been significant work on modeling the needle-tissue interaction forces during either straight or flexible needle insertion [25]. Many of these different models are for brachytherapy (which involves the precise placement of radioactive beads that will irradiate the surrounding tissue and kill any nearby cancerous cells) or for collecting tumor biopsies. For example, Chentanez et al. have modeled the tissue deformation of the prostate gland during the insertion of a straight hollow needle [51]. The modeling is performed using a three dimensional Finite Element Model (FEM) where the element mesh updates dynamically as necessary. This can be a very accurate method for modeling both material deformation and the forces generated during the deformation. One disadvantage of using a complex three dimensional FEM is that it can be difficult to solve the FEM in real-time, such as would be needed for an automated needle path plan. Alterovitz et al. have worked on similar modeling of the prostate gland [26]. Their models use a two dimensional FEM instead of a three dimensional one. This is one way of improving computational efficiency at the cost of model accuracy. Additionally, many published papers use FEM for modeling needle-tissue interaction forces. Since material properties important to the FEM calculations can vary significantly between tissue types, Maghsoudi et al. published a work that analyzes the sensitivity of the FEM algorithm to deviations in parameters such as Young's Modulus and the Poisson Ratio [52]. A significant amount of the force that a needle experiences is concentrated at the tip.


This means that it is important to model the tearing event that the tissue undergoes [53]. Okamura et al. used a lumped force model to simulate the axial forces a straight rigid needle would experience when it is inserted into a liver [54]. In the lumped force model, each f represents a contribution to the net force from a different source:

f_needle(x) = f_friction(x) + f_cutting(x) + f_stiffness(x)    (2.1)
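As a minimal sketch of how such a lumped model can be evaluated (the term forms below loosely follow the pre-puncture/post-puncture structure described by Okamura et al. [54]; the specific functional forms and all parameter values are illustrative assumptions, not fitted results):

```python
def needle_force(x, x_puncture=0.01, k=200.0, f_cut=1.0, mu=50.0):
    """Lumped axial needle force of Eq. (2.1) as a sum of per-source terms.

    x is the insertion depth in meters; all parameters are illustrative.
    """
    if x < x_puncture:
        # Pre-puncture: the tissue surface deforms elastically; no cutting yet.
        f_stiffness, f_cutting, f_friction = k * x ** 2, 0.0, 0.0
    else:
        # Post-puncture: a roughly constant cutting force at the tip, plus
        # friction that grows with the length of shaft embedded in tissue.
        f_stiffness = 0.0
        f_cutting = f_cut
        f_friction = mu * (x - x_puncture)
    return f_friction + f_cutting + f_stiffness
```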

Compared to FEM based analysis, the lumped needle force is computationally efficient. A detailed analysis of needle-tissue interaction forces sensed during surgical suturing is unavailable in the literature. Many of the previous works model the forces that are experienced by needles used for biopsies and therapeutic applications. These needles are long, straight, hollow, and potentially flexible. The needles used for performing a suture are short, curved, rigid, and solid. Since suture needles are inflexible, they will not comply with the tissue during a suture. Most lumped models available in the literature do not model the off-axis forces resulting from tissue displacement. These forces are critical for suture applications [55]. As tissue is manipulated, its mechanical properties can be modeled and extracted.

Boonvisut et al. completed work on modeling the mechanical properties of tissue. The mechanical properties are estimated using both the tissue deformation and the external forces [30, 56]. This allows the robot to update its motion plan to suit the properties of the tissue while minimizing the potential for tissue damage. Another application of soft-tissue force modeling is for surgical simulators [57]. These simulators are often used for training surgeons as well as for testing surgical manipulation planning [58]. Ultimately the goal is to ensure that the robot is able to accurately estimate the tissue response when the tissue is manipulated. This is required for accurate surgical performance and simulation of tissue retraction.
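As a toy illustration of this kind of parameter extraction (a least-squares fit of a single lumped stiffness; the linear force model and variable names are assumptions made for the example, not the estimation scheme of [30, 56]):

```python
import numpy as np

def fit_linear_stiffness(displacements, forces):
    """Fit a lumped linear stiffness k in f ~ k * x by least squares."""
    x = np.asarray(displacements, dtype=float)
    f = np.asarray(forces, dtype=float)
    k = float(x @ f) / float(x @ x)  # closed-form least-squares solution
    residual = f - k * x             # large misfit suggests a richer model
    return k, residual
```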


2.2.3 Control

The robot plan is controlled using a control policy, π. This control policy must obtain the desired results robustly. All path plans are based on the generalization that the robot executes a motion policy π based on the current world state, s, and the action space, A. The formal function definition of π follows:

π : S × S → A.    (2.2)

The policy maps from the current state s ∈ S and the target state s′ ∈ S to the action a ∈ A (i.e., the action a will transition the world state from s to s′). This is shown in Fig. 2.3. While industrial automation has allowed robots to operate with little to no built-in sensing mechanisms, surgical subtask automation requires detailed knowledge of the surgical environment that the robot can act on. This increases the dimensionality of the world state considerably. The uncertainty during surgical procedures also means that the action generated by π must behave robustly even under stochastic actions and positions. The surgical robot policy must integrate the robotic information with the environmental information in order to generate the action a that results in the target outcome.
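A minimal type-level sketch of this mapping is given below. The 1-D proportional policy is a toy stand-in for the abstract π of Eq. (2.2) (an illustrative assumption, not a surgical controller):

```python
from typing import Callable, TypeVar

State = TypeVar("State")    # world state: robot configuration + environment
Action = TypeVar("Action")  # e.g., a set of robot joint motions

# pi : S x S -> A, mapping (current state s, target state s') to an action a.
Policy = Callable[[State, State], Action]

def proportional_policy(s: float, s_target: float, gain: float = 0.5) -> float:
    """Toy 1-D policy: command a step a fraction of the way toward the target."""
    return gain * (s_target - s)
```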

In order to improve the performance of robotic surgical assistants, many authors have introduced methods and techniques to plan and control surgical tasks autonomously. There are different methods that can be used to develop the motion plan π and automate different robotic tasks. It is sometimes possible to model the task being accomplished and then program the robot accordingly, but this method can result in intractable problems or path plans which are sensitive to environmental deviation. Robotic learning, another approach also known as Learning from Demonstration (LfD), is one automation technique that has recently been introduced.

Figure 2.3: The policy of a robot must transition successfully from state s ∈ S to s′ ∈ S using the action a ∈ A. The state space of the world includes both the robotic and environment state information. The action space includes the robot joint motions.

Robot learning algorithms record robotic motion as an operator uses the robot to perform a task. The recorded data might include the robot joint positions, forces, camera frames, and operator commands. Repeated recordings of operators performing subtasks enable the robotic system to identify key features of the subtask action plan. The key features can be refined to generate the appropriate action policy, π, leading to task execution. When properly trained, the robot can even execute a wide variety of real-world tasks, including surgical subtasks, in superhuman time [59, 60, 61]. Training techniques include discrete Markov stochastic modeling [62], learning from demonstration [63, 64], and direct task modeling [35]. These methods and techniques allow robots to be trained to complete surgical subtasks autonomously. There are also earlier studies on planning algorithms for percutaneous needle insertion, such as for biopsy, brachytherapy, etc. (e.g., [26, 65, 66, 67]). During a tissue biopsy, long straight needles are used. Sometimes the needles are flexible. As the needle is inserted, the tissue will deform just as the needle might deform. In these cases, path planning


is primarily concerned with hitting a biopsy target that may move as the needle does. Many of the above works are focused on automating specific surgical subtasks. With careful objective functions and data recording, many additional subtasks can be automated this way.

The framework for autonomous surgical subtasks already exists, but many of the above methods and techniques fail to complete a full subtask; they only perform piecewise completion. The final challenge is to integrate these disparate components into full subtask automation.

Chapter 3

Real-Time Visual Tracking of Dynamic Surgical Suture Threads¹

In order to realize many of the potential benefits associated with robotically assisted minimally invasive surgery, the robot must be more than a remote controlled device. Currently, using a surgical robot can be challenging, fatiguing, and time consuming. Teaching the robot to actively assist surgical tasks, such as suturing, has the potential to vastly improve both patient outlook and the surgeon's efficiency. One obstacle to completing surgical sutures autonomously is the difficulty in tracking surgical suture threads. This chapter presents novel stereo image processing algorithms for the segmentation, initialization, and tracking of a surgical suture thread. A Non Uniform Rational B-Spline (NURBS) curve is used to model a thin, deformable thread of dynamic length. The NURBS model is initialized and grown from a single selected point located on the thread. The NURBS curve is optimized by minimizing the image matching energy between the projected stereo NURBS image and the segmented thread image. The algorithms are evaluated using a calibrated test pattern. Additionally, the accuracy of the algorithms presented is validated as they track a suture thread undergoing translation, deformation, and length changes. All of the tracking is in real-time.

¹This chapter has been submitted to the IEEE Transactions on Automation Science and Engineering (T-ASE) and is under review [68]. A preliminary version of this work was presented at ICRA 2015.

3.1 Introduction

Robotic surgical systems used in Minimally Invasive Surgery (MIS) present an opportunity to pair human surgical expertise with the precision and repeatability of robots. Autonomous robotic execution of low-level surgical manipulation tasks would allow the robot to perform tedious and time consuming tasks quickly and accurately while freeing the surgeon to concentrate on the high level tasks and goals of the procedure. Surgical suturing is one task that can benefit from the synergy of surgeons and surgical robots. Once the surgeon specifies key aspects of the suture, the rest of the task can be completed by the robot. The surgeon plans the next step of the procedure while supervising the quick and precise autonomous completion of the suture.

In order to complete autonomous tasks during MIS, it is necessary to localize and track task-critical elements (e.g., suture needle, thread, and tissue for autonomous suturing) [16]. This chapter focuses on the problem of localizing and tracking the suture thread. In the proposed method, the suture thread model is initialized as a 3-dimensional Non Uniform Rational B-Spline (NURBS) curve. The NURBS suture thread model is projected into a stereo image pair and the projected model is optimized using an energy based stereo image algorithm. The stereo optimizations are then recombined to update the 3-dimensional NURBS curve. This approach allows the suture thread to be tracked in real-time as it is deformed, and the method is robust to changes in thread length, self intersection, and knot formation.
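To make the curve representation concrete, the following is a minimal sketch of evaluating one point on a NURBS curve with the standard Cox-de Boor recursion. The clamped knot vector, control points, and unit weights in the example are illustrative; this is not the optimized implementation described in the following sections.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
            * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, knots, ctrl_pts, weights, degree):
    """Evaluate a 3D NURBS curve point as a rational combination of control points."""
    u = min(u, knots[-1] - 1e-9)  # clamp so the curve end point is reachable
    num, den = np.zeros(3), 0.0
    for i, (P, w) in enumerate(zip(ctrl_pts, weights)):
        b = bspline_basis(i, degree, u, knots) * w
        num += b * P
        den += b
    return num / den

# Illustrative clamped cubic curve with four control points and unit weights.
knots = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
ctrl = np.array([[0, 0, 0], [1, 2, 0], [2, -1, 1], [3, 0, 0]], dtype=float)
print(nurbs_point(0.5, knots, ctrl, np.ones(4), degree=3))
```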

This work introduces a new segmentation method that allows the local thread direction to be estimated in the image space. It also introduces a NURBS based tracking algorithm that is capable of tracking curvilinear structures (threads) as they are manipulated in real time. The tracking is robust to changes in the length of the tracked thread as well as to self intersections and the tying of knots.

The outline of the chapter is as follows: Section 3.2 summarizes previous related research. Section 3.3 discusses how the suture thread is segmented. Section 3.4 describes how the shape of the suture thread is modeled. Section 3.5 explains how the NURBS model is seeded from a single point. Section 3.6 discusses how the suture thread model is updated in real-time. Section 3.7 demonstrates the capabilities of the suture thread initialization and tracking algorithm. Finally, Section 3.8 presents the discussions and conclusions.

3.2 Literature Review

In order to improve the performance of robotic surgical assistants, many authors have introduced methods and techniques to plan and control surgical tasks autonomously. These methods include discrete Markov stochastic modeling [62], learning from demonstration [63, 64], direct task modeling [35], and evaluation methods to determine the success or failure of autonomous procedures [16]. These methods and techniques can be applied directly to the task of surgical suturing. Surgical suturing is a common task in MIS that may be completed many times over the course of a surgical procedure. Successful autonomous surgical suturing encompasses many facets of robot control and perception. While specialized hardware solutions are capable of completing a suture autonomously [23], due to costs and tool swap time, specialized hardware may not always be practical to use. In many instances it is more practical to use generalized grippers in order to complete the suture autonomously. Executing an autonomous suture melds the task plan with scene tracking in order


to complete the suture reliably in a variety of different environmental conditions.

Aspects of surgical suture planning have been demonstrated already. Iyer et al. demonstrated that it is possible, using tissue markers, to complete an automated circular needle drive [17]. Chow et al. showed how a suture knot can be tied once the thread ends are grasped [24]. In order to make the above motion plans robust and repeatable during a surgical procedure, algorithms capable of tracking the thread, tissue, and needle must be used.

Since the surgical environment will never be the same across multiple patients, sensors must be in place to detect the unique environment. Osa et al. introduced mechanisms that use visual servoing to detect and operate within the surgical environment [35]. The most important targets to detect include the suture needle, suture thread, and the tissue requiring the suturing. Iyer et al. used tissue markers to track the tissue when completing an automated circular needle drive [17]. This drive is completed while tracking the needle. Previous authors have also validated suture needle tracking performance [33, 34]. The needle is detected as a rigid body with a position relative to the camera given by the transformation G ∈ SE(3). In Iyer's implementation, the needle was not attached to any thread [17]. This contrasts with actual suture needle kits, where the thread and the needle are packaged as a single unit, as shown in Fig. 3.1. Once the needle is driven, the suture thread must be pulled through and tied off. Since suture thread, as shown in Fig. 3.1, can be 70 cm long, a single suture kit is often used for multiple sutures. Tying the suture threads efficiently will rely heavily on the performance of the thread detection algorithm.

Detecting and manipulating the suture thread is an important component of completing a suture autonomously. This has been recognized by other researchers [35]. The problem of tracking suture threads can be decomposed into two subproblems: initialization and tracking.

Initializing a 1-dimensional curve from an image is required when there may be


Figure 3.1: A sample suture needle kit includes a needle crimped directly to a colored suture thread. This particular kit was manufactured by Ethicon (Cincinnati, Ohio) (part number J341).

many potential suture threads in an image and only one needs to be tracked. Once the initial point of a suture thread is selected, there is an abundance of research on detecting thin features in images. Kass et al. introduced snakes, which are widely considered to be an important tool for image feature initialization and tracking [69]. Kaul et al. demonstrated how the level set method can be used to highlight a crack in cement [70]. Steger et al. completed work on detecting roads in satellite images [71]. Unfortunately, the problems addressed by [70, 71, 72, 73] do not utilize

stereo vision. Since the thread image thickness is generally not known, stereo vision is required for the 3-dimensional tracking of the suture threads. Many of the curve initialization algorithms are applied to static images to detect different thin features. They are ill-suited, due to speed, for tracking the suture thread in real time. Once the curve is initialized, it must be tracked as it deforms and moves. Previous

works have demonstrated the feasibility of detecting a suture thread as well as similar objects. Javdani et al. [74] demonstrated that it is possible to model suture threads as Dynamic One-dimensional Objects (DOO). Here, each suture thread is modeled as a collection of connected bones. The incident and twisting angles between the

bones result in an internal deformation energy of the thread. By minimizing the total internal energy coupled with the energy of the image matching, the suture thread is identified and tracked. In this example, the suture thread is segmented from the image using Canny edge detection [74, 37]. This approach requires knowledge of the suture

thread material model. An alternative approach by Schulman et al. [75] identified

various deformable objects, such as a rope, using a manipulatable point cloud. Similar to [74], actively deforming a physical model increases the internal energy so that the energy of the image matching can be minimized. The final goal is to minimize the total internal energy summed with the image match energy. Currently, MIS robots do not employ the RGB plus depth imaging devices that are available to researchers; as a result, it is more practical to rely exclusively on the stereo imaging systems that are deployed in existing robotic surgical systems. Padoy et al. used multiple approaches to detect curvilinear structures [76, 77]. In [76], Padoy used a textured thread to complete the tracking. The texturing utilized fixed length colored segments of the suture thread. Since the thread may be partially occluded during tracking, the fixed length segment assumption no longer holds. The other method, [77], used NURBS curves to describe the shape of the detected curve. The control points of the curve were optimally positioned by using a Markov Random Field (MRF) model that was iterated using a fastPD optimization algorithm [78]. Heibel et al. also used

NURBS curves to optimize the Maximum A Posteriori (MAP) estimate of a NURBS model using an MRF [79]. Alternatively, active contours (snakes) have been used in conjunction with closed NURBS curves to optimize an image fit [80]. All of these online tracking algorithms are missing small but important elements required to track the suture thread while a knot is being tied. Previous algorithms for suture thread detection assume a known physical model, use a point cloud, require a constant thread length, or use fixed length textures. While these are important considerations, these limitations make it difficult to track the thread end points successfully. In an actual surgical environment, the suture thread may only have one end grasped. The thread may not even be grasped at all. Due to the nature of the image energy functions in previous works [74, 77], mismatches are not penalized if the image of the suture thread model is a subset of the actual suture thread image. This allows the suture thread model to 'shrink'


with no apparent penalty. To combat this, a penalty was introduced where the cost

would increase as the length of the model changed [77, 79]. This can cause problems if a portion of the suture thread is occluded and is slowly exposed, causing the thread image to lengthen. This chapter presents an algorithm that can be used for both initialization and tracking of a suture thread using an initial seed point. The tracking algorithm is capable of tracking the suture thread in real time as it moves, deforms, changes length, and tightens knots.

3.3 Suture Detection

Automatic suture thread detection requires top level knowledge of the surgical procedure. There may be multiple suture threads in the surgical workspace. Some sutures might even have already been tied off. Detecting the suture thread of interest requires a directed pruning of irrelevant sutures. Supervisory selection will maximize the performance of the image processing while keeping the surgeon in complete control of the procedure. Identification of the target suture thread can be quickly and easily performed by the surgeon, provided that there exists an adequate user interface that enables the surgeon to communicate his/her intentions. Surgeons rely on increasingly sophisticated interfaces when working with surgical robots [81, 82]. Consequently, there are many potential methods, including augmented reality based ones [83], that a surgeon can utilize to input a seed point of a suture thread. The seed point, when coupled with the segmented image, allows the initial thread to be identified.


3.3.1 Suture Thread Segmentation

The discrete 2-dimensional image domain is defined to be $\Omega$. The image is a map $I_l, I_r : \Omega \rightarrow \mathbb{R}^+$, where the subscript indicates the left ($l$) or right ($r$) image, respectively. The purpose of segmentation is to create maps $V_l, V_r : \Omega \rightarrow \mathbb{R}^2$ that highlight the suture thread from the rest of the image. The outputs of segmentation, $V_l$, $V_r$, are vector valued and describe the magnitude of the segmentation as well as the direction. There are many different methods that can be used to segment thin objects or edges from an image. Classical methods include the edge detection gradient operators (e.g. Sobel, Roberts, or Prewitt) as well as the Canny edge detector [37]. Many methods are also based on Gaussian convolution kernels such as the difference of Gaussians and the Laplacian of Gaussians [84]. Many of these methods incorporate image scale space, where the same image is filtered using 2-dimensional Gaussian

filter kernels with a range of specific covariance matrices. Since a suture thread is more of a ridge (or valley) in the image space, edge detectors might detect both the rising and the falling side of the thread in the image. To avoid this biphasic detection, a thin feature detection algorithm is employed. Thin feature detection has applications in diverse branches of research, and a large body of work exists on the topic [71, 79, 38, 85]. In this implementation, a thin feature enhancement algorithm developed by Frangi et al. for blood vessel detection is utilized [38]. While this filter is also scalable, the thread has a uniform thickness and consequently only a single scale size was used for computational speed. The algorithm is modified to include directionality information using a method similar to one explored by Steger when analyzing images of roads [71]. Thin feature enhancement begins by convolving a gray scale image with a two dimensional Gaussian distribution second derivative. The variance of the Gaussian function corresponds to the scale space size of the image filter. The result is a scale space Hessian matrix of the original gray images, $H_l, H_r : \Omega \rightarrow \mathbb{R}^{2 \times 2}$. Each image


pixel maps to a $2 \times 2$ real symmetric matrix. This matrix has real eigenvalues ($|\lambda_1| < |\lambda_2|$) and orthogonal eigenvectors $v_1 \perp v_2 \in \mathbb{R}^2$. The magnitude and ratio of these eigenvalues are used to generate the output image magnitude. A large magnitude eigenvalue indicates that the local image region aligns well with the second derivative of a Gaussian function in the corresponding eigenvector direction. If one eigenvalue is large and the other is small, the local image region looks like a ridge (or a section of suture thread). This information is used to generate the maps $V'_l, V'_r : \Omega \rightarrow \mathbb{R}^2$. The output vector for each pixel location $V'_l(i, j)$ can be computed directly from the eigenvalues of the corresponding Hessian matrix², as

$$\|V'_l\| = e^{-\frac{\mathcal{R}_B^2}{2\beta^2}} \left( 1 - e^{-\frac{S^2}{2c^2}} \right), \qquad V'_l = v_1 \|V'_l\|. \tag{3.1}$$

Here $\mathcal{R}_B = \lambda_1 / \lambda_2$, $S = \sqrt{\lambda_1^2 + \lambda_2^2}$, and the parameters $\beta$ and $c$ are user defined. The result of (3.1) merges work by Steger and Frangi et al. [71, 38]. The eigenvector corresponding to the smaller eigenvalue is considered to be the local thread direction ($v_1$). The result is that each pixel from the gray scale image is now mapped to a vector $V'_l(i, j) \in \mathbb{R}^2$. In order to improve the segmented image tracking, the map $V'_l(i, j)$ is smoothed using a normalized low pass Gaussian kernel, $G : \Omega_k \rightarrow \mathbb{R}$, in order to reduce the segmentation noise.

The domain $\Omega_k$ of the Gaussian kernel is centered about $(0, 0)$. Since eigenvectors can be negated (i.e. $H_l v_1 = \lambda_1 v_1 \Rightarrow H_l(-v_1) = \lambda_1(-v_1)$) and $V'_l$ is a collection of scaled eigenvectors, special care must be taken when blurring the vector map $V'_l$. The blurring is accomplished using Algorithm 1. The final results of the segmentation and filtering are the images $V_l, V_r : \Omega \rightarrow \mathbb{R}^2$. An example comparing an unsegmented image to a segmented image is shown in Fig. 3.2.

²The calculations for $V'_l$ and $V'_r$ are analogous. Consequently, the equations for $V'_r$ will be skipped for brevity.
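The per-pixel computation of (3.1) is straightforward to prototype. The following is a minimal sketch, assuming a grayscale floating point image; the function name and the parameter defaults (`sigma`, `beta`, `c`) are illustrative choices, not the dissertation's tuned values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def segment_thread(img, sigma=2.0, beta=0.5, c=15.0):
    # Scale-space Hessian from Gaussian second-derivative filters.
    Hxx = gaussian_filter(img, sigma, order=(0, 2))
    Hyy = gaussian_filter(img, sigma, order=(2, 0))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))

    # Closed-form eigenvalues of the 2x2 symmetric Hessian at every pixel,
    # ordered so that |lam1| <= |lam2| as in the text.
    root = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2)
    mu1 = 0.5 * (Hxx + Hyy + root)
    mu2 = 0.5 * (Hxx + Hyy - root)
    swap = np.abs(mu1) > np.abs(mu2)
    lam1 = np.where(swap, mu2, mu1)
    lam2 = np.where(swap, mu1, mu2)

    # Frangi ridge measure (3.1): blobness ratio R_B and structure strength S.
    Rb = lam1 / (lam2 + 1e-12)
    S = np.sqrt(lam1 ** 2 + lam2 ** 2)
    mag = np.exp(-Rb ** 2 / (2.0 * beta ** 2)) * (1.0 - np.exp(-S ** 2 / (2.0 * c ** 2)))

    # Eigenvector of lam1 gives the local thread direction v1; pixels with a
    # nearly diagonal Hessian fall back to the x direction (crude, but a sketch).
    vx = np.where(np.abs(Hxy) > 1e-12, lam1 - Hyy, 1.0)
    vy = np.where(np.abs(Hxy) > 1e-12, Hxy, 0.0)
    norm = np.hypot(vx, vy) + 1e-12
    return np.stack([mag * vx / norm, mag * vy / norm], axis=-1)  # V'(i, j) in R^2
```

A vectorized formulation like this maps naturally onto the GPU, which is consistent with the CUDA implementation described later in Section 3.7.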


Algorithm 1 This algorithm blurs local pixels together in order to maximize the output magnitude and eigenvector alignment
 1: procedure Gaussian angle blur($V'_l$, $G$, $V_l$)
 2:   for all $(i, j) \in \Omega$ do
 3:     $v \leftarrow G(0, 0) V'_l(i, j)$
 4:     for $(l, s) \in \Omega_k \setminus (0, 0)$ do
 5:       if $(i - l, j - s) \notin \Omega$ then
 6:         $v' \leftarrow V'_l(m, n)$ where $(m, n) \in \Omega$ and $\|(m, n) - (i - l, j - s)\|$ is minimized
 7:       else
 8:         $v' \leftarrow V'_l(i - l, j - s)$
 9:       end if
10:       $d \leftarrow v \cdot v'$
11:       if $d > 0$ then
12:         $v \leftarrow v + G(l, s) v'$
13:       else
14:         $v \leftarrow v - G(l, s) v'$
15:       end if
16:     end for
17:     $V_l(i, j) = v$
18:   end for
19: end procedure


(a) original image (b) segmented image

Figure 3.2: A sample segmented image (b) together with the corresponding original image (a). The segmented image uses false color to indicate the direction of the thread as estimated by the algorithm. Image edges are also segmented in addition to the suture thread. This is because, in the gray scale image, the border appears to be a contrasting location relative to its surroundings. The thread self intersection can be identified from the direction information, as can be seen from the different colors of the intersecting thread segments. Suture thread initialization and tracking must be robust against falsely segmented regions.

In Fig. 3.2, the suture thread (purple) is highlighted from the background. The segmented image is falsely colored to indicate direction.

3.4 Suture Thread NURBS Model

In the proposed tracking approach, the thread is modeled as a 3-dimensional NURBS

curve. The suture thread is defined by a set of $n + 1$ control points $C = [c_0, ..., c_n]^T$ where each $c \in \mathbb{R}^3 \times \mathbb{R}^+$. The $\mathbb{R}^+$ component is the weight of the control point. The order of the curve, $o$, indicates the polynomial degree that is used to smooth together the control points. A NURBS curve definition also includes a set of $m + 1$ knots where $m = n + o + 1$. In this case, the set of knots, $U$, are distributed in the range $[0, 1]$. The end knots have a multiplicity of $o + 1$. This ensures that the end points of the curve fall on the points $c_0$ and $c_n$, respectively, which greatly simplifies manipulation of the end points. NURBS curves have been used in previous vision research because they generate smooth curves using a relatively low dimensional space [86, 79, 77]. This naturally allows the NURBS curve to have a reduced internal energy compared to pointwise curve definitions such as in Kass et al. [69]. The NURBS curve is


parametrically defined as in [47]

$$p(u) = \frac{\sum_{i=0}^{n} N_{i,o}(u) w_i c_i}{\sum_{i=0}^{n} N_{i,o}(u) w_i}. \tag{3.2}$$

The function $p : [0, 1] \rightarrow \mathbb{R}^3$ generates the points on the curve, where $u \in [0, 1]$ is the curve parameter. The curve function $p(u)$ is defined by the control point set $C \in \mathbb{R}^{3 \times (n+1)} \times \mathbb{R}^{+(n+1)}$ as well as the knot vector $U$, which is a set of $m + 1$ knots in the range $[0, 1]$. The term $N_{i,o}(u)$ is a B-spline basis function of order $o$. In this study, the weight $w_i$ is defined to be 1 for all indices. This reduces the basis space of the NURBS curve to $\mathbb{R}^{3 \times (n+1)}$ and allows the NURBS curve to be more easily projected from 3-dimensional space into the two dimensional images. Additionally, since unity weights reduce the space of optimized parameters, the overall optimization time is reduced. The basis functions are defined over the knot vector of the NURBS curve as defined above. After the curve, $p(u)$, is computed, it is projected onto the stereo images. The resulting two curves are $p_l(u)$ and $p_r(u)$.
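For concreteness, a minimal sketch of evaluating such a curve follows. With unity weights, (3.2) reduces to a clamped B-spline, and the basis functions $N_{i,o}(u)$ can be computed with the Cox-de Boor recursion; the function names are illustrative, and this is not the dissertation's implementation:

```python
import numpy as np

def clamped_knots(n, o):
    # m + 1 knots with m = n + o + 1; end multiplicity o + 1, interior uniform.
    interior = np.linspace(0.0, 1.0, n - o + 2)[1:-1]
    return np.concatenate([np.zeros(o + 1), interior, np.ones(o + 1)])

def basis(i, o, u, U):
    # Cox-de Boor recursion for N_{i,o}(u), with the 0/0 -> 0 convention.
    if o == 0:
        # Half-open spans; close the final span so u = 1 evaluates correctly.
        return 1.0 if (U[i] <= u < U[i + 1]) or (u == 1.0 and U[i + 1] == 1.0) else 0.0
    left = 0.0 if U[i + o] == U[i] else \
        (u - U[i]) / (U[i + o] - U[i]) * basis(i, o - 1, u, U)
    right = 0.0 if U[i + o + 1] == U[i + 1] else \
        (U[i + o + 1] - u) / (U[i + o + 1] - U[i + 1]) * basis(i + 1, o - 1, u, U)
    return left + right

def eval_curve(ctrl, o, u):
    # p(u) = sum_i N_{i,o}(u) c_i; the denominator of (3.2) is 1 for unity weights.
    n = len(ctrl) - 1
    U = clamped_knots(n, o)
    return sum(basis(i, o, u, U) * ctrl[i] for i in range(n + 1))

# e.g. a cubic (o = 3) curve through five 3-D control points:
# p = eval_curve(np.random.rand(5, 3), 3, 0.42)
```

Because the end knots are clamped, `eval_curve(ctrl, o, 0.0)` returns $c_0$ and `eval_curve(ctrl, o, 1.0)` returns $c_n$, which is the end point property exploited by the tracker.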

3.5 Initializing the Suture Thread

The curve initialization function generates a thread model, including loops and dead ends, when given an arbitrary point along the thread. Previous works on this subject center around the initialization of thin features in a static image (e.g. detecting roads in satellite images, blood vessels in CT or MRI scans, and even cracks in cement pictures). The implementation of this algorithm is related to previous work by Kaul et al. [70]. While Kaul et al. relied on using the level set method to grow along the graph [87], this work utilizes the inter point cost function to grow the point list based on Dijkstra's tree growth algorithm [88]. Once an initial point is generated³, the point grows outwards based on a predefined


cost function as given by (3.3). The initially selected point has a cost of 0. The cost

of pixels adjacent to the current active pixel is calculated using the segmented image values $V_l, V_r : \Omega \rightarrow \mathbb{R}^2$ as⁴

$$L'(i + i_x, j + j_y) = L(i, j) + \frac{\|(i_x, j_y)\|}{\|V_l(i + i_x, j + j_y)\|^{\rho}}. \tag{3.3}$$

Here, $L(i, j)$ is the cost of the image pixel at the location $(i, j)$. The location offset indices are $i_x, j_y \in \{0, 1, -1\}$. This finds the cost of the 8-connected neighborhood around the currently active pixel. The diagonal pixel costs are scaled by $\sqrt{2}$. The power parameter $\rho$ is used to improve the delineation between the thread and the noisy background. The growth algorithm is outlined as Algorithm 2. The variable $L_{max}$ is the maximum cost that is allowed to be generated from the segmented image.

Algorithm 2 This algorithm grows out from the initial selected point in an image.

 1: procedure pixel growth($V_l$)
 2:   Preinitialize all pixel costs to $-1$.
 3:   Add the pixel at the seed point to the active pixel list and set its cost to 0.
 4:   while the list of active pixels is not $\emptyset$ do
 5:     Find the lowest cost $L$ of active pixel coordinates $(i, j) \in \Omega$
 6:     for $(i_x, j_y) \in \{-1, 0, 1\}^2 \setminus (0, 0)$ do
 7:       Find the new cost $L'(i + i_x, j + j_y)$ using (3.3).
 8:       if $L'(i + i_x, j + j_y) < L(i + i_x, j + j_y)$ or $L(i + i_x, j + j_y) = -1$ then
 9:         $L(i + i_x, j + j_y) \leftarrow L'(i + i_x, j + j_y)$.
10:         if $L'(i + i_x, j + j_y) < L_{max}$ then
11:           The pixel $(i + i_x, j + j_y)$ is now stored in the list of active pixels.
12:         end if
13:       end if
14:     end for
15:     Remove pixel $(i, j)$ from the list of active pixels.
16:   end while
17: end procedure
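A runnable sketch of Algorithm 2 follows, implemented as a standard Dijkstra expansion over the 8-connected pixel grid using a priority queue. The segmentation magnitude image $\|V_l\|$, the seed point, and the values of $\rho$ and $L_{max}$ are assumed inputs with illustrative defaults:

```python
import heapq
import numpy as np

def grow_thread(Vmag, seed, rho=2.0, L_max=50.0):
    cost = np.full(Vmag.shape, -1.0)        # -1 marks "not yet visited"
    cost[seed] = 0.0
    heap = [(0.0, seed)]                    # active pixels, lowest cost first
    while heap:
        L, (i, j) = heapq.heappop(heap)
        if L > cost[i, j]:
            continue                        # stale heap entry, already improved
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if not (0 <= ni < Vmag.shape[0] and 0 <= nj < Vmag.shape[1]):
                    continue
                # Inter-pixel cost (3.3): step length over segmentation^rho.
                step = np.hypot(di, dj) / (Vmag[ni, nj] ** rho + 1e-12)
                Lp = L + step
                if cost[ni, nj] < 0 or Lp < cost[ni, nj]:
                    cost[ni, nj] = Lp
                    if Lp < L_max:
                        heapq.heappush(heap, (Lp, (ni, nj)))
    return cost  # pixels with 0 <= cost < L_max form the grown thread region
```

The heap replaces the explicit "find the lowest cost active pixel" search of Algorithm 2, which is the usual way to keep the expansion near linear in the number of visited pixels.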

Algorithm 2 is applied to both images in the stereo pair simultaneously. Stereo

matching between the left and right images is performed along with the execution of Algorithm 2.

³There are many methods that can be used to generate the initial point. While this work utilized a space-mouse, tracked grippers or a target location could also be used.
⁴For brevity, only the cost function for $V_l$ is shown; the cost is analogous for $V_r$.


Specifically, as the point set expands, points are allowed to match

between the stereo images once the expanding fronts are some distance away from the previously defined stereo point. Once matching candidate points from Algorithm

2 are found, $(x_l, x_r)$, the stereo-aligned points $x'_l$ and $x'_r$ are found such that

$$\min_{x'_l, x'_r} \left( \|x_l - x'_l\| + \|x_r - x'_r\| \right) \quad \text{and} \tag{3.4}$$
$$0 = \begin{bmatrix} x'_l & y'_l & x'_r & y'_r & 1 \end{bmatrix} \begin{bmatrix} f_1 \\ f_2 \\ f_3 \\ f_4 \\ 1 \end{bmatrix}, \quad \text{where } x_l = \begin{bmatrix} x_l \\ y_l \end{bmatrix} \text{ and } x'_l = \begin{bmatrix} x'_l \\ y'_l \end{bmatrix}.$$

Here $f_1$, $f_2$, $f_3$, $f_4$ define the epipolar line constraint of the stereo image pair. The

points $x_r$ and $x'_r$ are defined similarly to the points $x_l$ and $x'_l$. The constraints in (3.4) constrain the new points to the epipolar image lines [89]. As long as the epipolar projection error is small enough, the candidate points are considered to be a match.
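As one illustration of this step, the sketch below snaps a candidate point pair onto the linearized epipolar constraint of (3.4) by projecting the stacked image coordinates onto the constraint hyperplane. This minimizes the stacked squared distance rather than the exact sum of norms in (3.4), a simplification chosen for brevity; the coefficients $f_1, ..., f_4$ and the rejection tolerance are assumed known from calibration:

```python
import numpy as np

def align_to_epipolar(xl, xr, f, tol=1.5):
    z = np.concatenate([xl, xr])              # stacked (xl, yl, xr, yr)
    err = f @ z + 1.0                         # signed constraint violation
    z_aligned = z - err * f / (f @ f)         # closest point on the hyperplane
    residual = np.linalg.norm(z - z_aligned)  # epipolar projection error
    if residual > tol:
        return None                           # reject the candidate match
    return z_aligned[:2], z_aligned[2:]       # x'_l, x'_r
```

The residual test plays the role of the "small enough epipolar projection error" criterion in the text.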

Although not explicitly shown in Algorithm 2, when points are matched together, their growth cost $L$ is set to 0. This acts to 'reset' the cost functions and keeps the graph growth on the actual thread. The growth of this path is shown in Fig. 3.3a. When new stereo points are found, they must be matched to their appropriate thread segment. This matching is accomplished by using parent points. As points are matched, they are considered to have a parent point. The parent point determines the thread list that the points are attached to. The purpose of the parent point is to help identify where branches or thread intersections occur.


(a) region growth (b) segmented image

Figure 3.3: As the segmented suture thread region grows, it will branch as in (a). Once the regions are done growing, they are connected together based on their alignment. This is illustrated in (b).

The initial seed point is considered to have no parent and defines the first thread point list. If a newly matched point has multiple parents (i.e. a different parent in each image), then it is considered to be a branch point. This point is then considered to be the initial point of a new point list. Conversely, if a point has multiple children, then it is also considered to be a branch point, and the children are used to initialize new point lists. After all of the new points are found, the different thread branches are combined into one thread. Two thread segments are connected if the endpoints of the thread segments are close to each other and have similar tangent angles. The merging of matching segments is shown in Fig. 3.3b. This gives a final list of the suture thread initial points. The list of continuous thread points is then used to generate a curve. The NURBS curve is calculated using a least squares approximation of the generated 3-dimensional

points. The result is a list of NURBS control points $C = [c_0, ..., c_n]^T$. Once the thread has been initialized, the NURBS model is updated using an iterative tracking algorithm.


3.6 NURBS Curve Iteration

After completing the suture thread initialization, the curve is defined as a set of

control points, $C = [c_0, ..., c_n]^T$. The proposed algorithm tracks the deforming suture thread as it moves in the surgical environment by iteratively updating the control points of the suture thread model. This includes removing and inserting control points as needed. The goal is to minimize the image energy while preventing the internal energy from being too high. Evenly spacing the NURBS control points helps to minimize the internal energy. The image energies of both the suture thread and its end points are computed using the segmentation results $V_l$ and $V_r$.

3.6.1 Image Energy

The energy of the NURBS curve image is based on both the image magnitude and image alignment. Not only should the curve image match the peak of the segmented image, it should share the same tangential direction as well. The image energy equation actually represents a pair of image energies, one for the left and one for the right image⁵:

$$E_l = -\int_{u=0}^{u=1} \Delta E_l(p_l(u), t_l(u)) \, du, \quad \text{where} \tag{3.5}$$
$$\Delta E_l(p_l, t_l) = \Delta E_l = \frac{(t_l \cdot V_l(p_l))^2}{\|V_l(p_l)\|}.$$

Here $t_l(u)$ is the unit tangent to the curve at the point in image space, while $p_l(u)$ is defined to be the projection of $p(u)$ as defined in (3.2). The $l$ subscript indicates that this is the left image of the stereo image pair ($E_r$ is formulated in a similar

manner). The goal of the NURBS optimization is to update the maps $p_l(u)$ such that the energy as defined by (3.5) is minimized.

⁵For the rest of the section, equations for the left image will be presented. The equations for the right image are analogous and will be skipped for brevity.


For every value of $u \in [0, 1]$, the update force acting on that point is defined to be

$$f'_l(u) = -g_p \left( \nabla_{p_l}(\Delta E_l) \big|_{p_l = p_l(u)} \right). \tag{3.6}$$

Here $\Delta E_l$ is defined in (3.5), and $g_p$ is a tunable gain that is applied to the energy gradient.

The gradient of the energy $\Delta E_l$ with respect to $p_l$ is as follows:
$$\nabla_{p_l} \Delta E_l = \frac{2 (t_l \cdot V_l(p_l)) \, \nabla_{p_l}(t_l \cdot V_l(p_l))}{\|V_l(p_l)\|} - \frac{\nabla_{p_l} \|V_l(p_l)\| \, (t_l \cdot V_l(p_l))^2}{\|V_l(p_l)\|^2}. \tag{3.7}$$

It is assumed that the point tangent and image direction (i.e. $t_l$ and $V_l(p_l)$) are locally constant with respect to $p_l$. Likewise, the derivative $\nabla_{p_l}(t_l \cdot V_l(p_l))$ is also assumed to be small. That is to say, the alignment between the segmentation direction and the curve tangent does not change quickly. If the local energy gradient is small while the energy is large, then the local image region is actively searched for a potential point. This is done to increase the image basis of the NURBS optimization algorithm. Optimizing the end points requires that they are given their own energy cost term, which is defined in the next section.
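A small numerical sketch of the interior energy term and its retained gradient is shown below. It assumes the segmentation $V$ is stored as an H × W × 2 array indexed in image coordinates, samples at the nearest pixel, estimates $\nabla\|V\|$ by central differences, and drops the alignment gradient term per the assumption above; the function names and the epsilon guards are illustrative:

```python
import numpy as np

def delta_E(V, p, t):
    # Integrand of (3.5): squared tangent/segmentation alignment over magnitude.
    vi = V[int(round(p[1])), int(round(p[0]))]        # nearest-pixel sample
    return (t @ vi) ** 2 / (np.linalg.norm(vi) + 1e-12)

def grad_delta_E(V, p, t):
    # Retained term of (3.7); the alignment gradient is assumed small and dropped.
    Vmag = np.linalg.norm(V, axis=-1)
    x, y = int(round(p[0])), int(round(p[1]))
    grad_mag = np.array([(Vmag[y, x + 1] - Vmag[y, x - 1]) / 2.0,
                         (Vmag[y + 1, x] - Vmag[y - 1, x]) / 2.0])
    vi = V[y, x]
    return -(t @ vi) ** 2 * grad_mag / (np.linalg.norm(vi) ** 2 + 1e-12)
```

The force of (3.6) would then be `-g_p * grad_delta_E(V, p, t)` for a chosen gain `g_p`.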

3.6.2 End Point Energy

A difficulty in tracking the thread arises due to the special case of the thread

endpoints. Other works have used fixed end points or penalized changing the length of the suture thread in order to assist in tracking the endpoints [72, 74, 77]. In this work, the end of the thread is tracked using the local segmented image information as it aligns to the thread model. The end points have their own calculated energy


term:

$$E_{l,end} = \left( \Delta E_l(p_l(0), t_l(0)) - \|V_{cl}\| \right)^2 + \left( \Delta E_l(p_l(1), t_l(1)) - \|V_{cl}\| \right)^2. \tag{3.8}$$

The parameter $V_{cl}$ is the cutoff magnitude of the image. The cutoff magnitude is the average of the foreground mean magnitude and the background mean magnitude of the segmented image. This attracts the curve ends to the thread end. The end point forces act on both ends as

$$f'_{l,end}(0) = -2 g_{p,end} \left( \nabla_{p_l}(\Delta E_l) \, (\Delta E_l - \|V_{cl}\|) \right) \big|_{p_l = p_l(0)},$$
$$f'_{l,end}(1) = -2 g_{p,end} \left( \nabla_{p_l}(\Delta E_l) \, (\Delta E_l - \|V_{cl}\|) \right) \big|_{p_l = p_l(1)}. \tag{3.9}$$

In (3.9), the vector $t_l$ is $t_l(0)$ and $t_l(1)$, respectively. As in (3.7), the gradient of the segmentation-NURBS alignment is assumed to be 0. Equations (3.7) and (3.9) define the forces that act on each point. In order to ensure that the forces align the NURBS

curve to the suture thread, they must be aligned to the curve directions.

3.6.3 Point Force Action

When a force, given by (3.7), acts on a point $p_l(u)$ where $u \in [0, 1]$, the force may push the point along the suture thread to a section that is locally more strongly segmented (i.e. a local bright spot). This may cause the points to bunch up on local bright spots in the segmented image. To combat this, the force as given by (3.7) is projected onto the normal of the curve:

$$f_l(u) = f'_l(u) - t_l(u) \left( f'_l(u) \cdot t_l(u) \right). \tag{3.10}$$


Similarly, the end point forces generated by (3.9) are projected onto the curve tangent as
$$f_{l,end}(u) = t_l(u) \left( f'_{l,end}(u) \cdot t_l(u) \right), \tag{3.11}$$

where $u \in \{0, 1\}$. The forces are formulated such that the internal points are updated only in directions normal to the curve, while the end points are additionally updated in directions that are tangential to the curve. This allows the entire curve to be updated both normally and tangentially. Notice that the forces acting on the end points are in fact the sum of both the normal and tangential force components.
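These two projections are simple enough to state directly in code; the sketch below assumes 2-dimensional image space vectors with a unit tangent `t_hat`, and the function names are illustrative:

```python
import numpy as np

def interior_force(f, t_hat):
    # (3.10): subtract the tangential component, leaving the normal component.
    return f - t_hat * (f @ t_hat)

def end_force(f_end, t_hat):
    # (3.11): keep only the component along the curve tangent.
    return t_hat * (f_end @ t_hat)
```

As noted above, the total update applied at an end point is the sum of both projected components.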

3.6.4 Pointwise Update

The NURBS curve is defined to be a linear combination of the control points. Since it is impractical to compute the curve for every $u \in [0, 1]$, only a finite number of points are actually computed. If the curve is generated for a list of parameter values $u_j \in u$ where $0 \le j \le k$, then the basis functions $N_{i,o}(u)$ can be precomputed and, as such, a matrix $A(u) \in \mathbb{R}^{(k+1) \times (n+1)}$ can be constructed:

$$A(u) = \begin{bmatrix} N_{0,o}(u_0) & N_{1,o}(u_0) & \cdots & N_{n,o}(u_0) \\ N_{0,o}(u_1) & N_{1,o}(u_1) & \cdots & N_{n,o}(u_1) \\ \vdots & \vdots & \ddots & \vdots \\ N_{0,o}(u_k) & N_{1,o}(u_k) & \cdots & N_{n,o}(u_k) \end{bmatrix}. \tag{3.12}$$

The denominator from (3.2) is dropped because the weights $w_i$ are defined such that the denominator always sums to 1. The vector $u = [u_0, ..., u_k]$ is defined such that the points are evenly spaced throughout the entire NURBS curve. The NURBS curve is now formulated as a linear combination of the control points:

$$P = A(u) C. \tag{3.13}$$


Here $P = [p(u_0), ..., p(u_k)]^T$ is the set of points on the NURBS curve at parameter values $[u_0, u_1, ..., u_k]$, $C$ is the set of control points as defined previously, and $A(u)$ is defined in (3.12). By retaining a precomputed copy of the matrix $A(u)$, the NURBS curve can be quickly recomputed if the control points are updated without changing the parameter vector $u$. After the point set $P$ is found, it is projected into the stereo images, resulting in the point sets $P_l$ and $P_r$, where

$$P_l = [p_l(u_0), ..., p_l(u_k)]^T \quad \text{and} \quad P_r = [p_r(u_0), ..., p_r(u_k)]^T. \tag{3.14}$$

The precomputed discrete points of the NURBS curve greatly simplify the force computation on the set of NURBS curve points. This results in an offset that is applied to the arrays of projected NURBS points ($P_l$ and $P_r$)⁶:

$$\Delta p_l(u_i) = f_l(u_i). \tag{3.15}$$

Here $i$ is the index of the parameter $u$. The stereo update vector is then $\Delta P_l = [\Delta p_l(u_0), ..., \Delta p_l(u_k)]$. Once the left and right image updates are found, they must be deprojected into 3-dimensional space.

3.6.5 3-Dimensional Deprojection

Now that the update vectors have been found for the sets of points in stereo image space ($\Delta P_l$ and $\Delta P_r$), they must be mapped from the stereo image space into the 3-dimensional space. This can be accomplished by approximating the projection of the orthonormal curve basis $t$, $n$, and $b$ (tangent, normal, and binormal). Assuming that the 3-dimensional optimization vector is of the form $\Delta p(u_i) = \alpha_i t(u_i) + \beta_i n(u_i) + \gamma_i b(u_i)$, the projected vector is $\Delta p_l(u_i) \approx \alpha_i t_l(u_i) + \beta_i n_l(u_i) + \gamma_i b_l(u_i)$.

⁶As in the previous sections, equations for the left image are presented. The right image equations are analogous and will be skipped for brevity.


Here $\alpha_i$, $\beta_i$, and $\gamma_i$ are all gains describing the overall point update. This can be reduced to the following overdetermined matrix equation:

$$\begin{bmatrix} t_l(u_i) & n_l(u_i) & b_l(u_i) \\ t_r(u_i) & n_r(u_i) & b_r(u_i) \end{bmatrix} \begin{bmatrix} \alpha_i \\ \beta_i \\ \gamma_i \end{bmatrix} = \begin{bmatrix} \Delta p_l(u_i) \\ \Delta p_r(u_i) \end{bmatrix}. \tag{3.16}$$

The least squares solution of (3.16) gives the values of $\alpha_i$, $\beta_i$, and $\gamma_i$. Since the offset of each curve point $\Delta p$ is found in 3-dimensional space, the update is constrained to the epipolar lines of the stereo camera system [89]. These values can then be used to generate the three dimensional offset vector $\Delta P = [\Delta p(u_0), ..., \Delta p(u_k)]^T$. This offset matrix $\Delta P$ in the curve point space is mapped into the control point space using the matrix $A(u)$ as defined in (3.12). This is similar to using the Jacobian transpose in robotics to project end effector forces into joint torques:

$$\Delta C = A^T(u) \, \Delta P. \tag{3.17}$$

The final control point update is completed using the update matrix $\Delta C$:
$$C_{t+1} = \Delta C \oslash D + C_t. \tag{3.18}$$

The end points are updated directly using the computed offset. This is done because the end points are defined to be the end control points (i.e. $\Delta c_0 = \Delta p_0$). The operator $\oslash$ indicates that the matrix $\Delta C$ is element-wise divided by the denominator matrix $D$. The matrix $D \in \mathbb{R}^{(n+1) \times 3}$ is a denominator matrix that is used as a normalization


matrix to normalize the force offsets on the matrix $\Delta C$:
$$D = A^T(u) \begin{bmatrix} 1 & 1 & 1 \\ \vdots & \vdots & \vdots \\ 1 & 1 & 1 \end{bmatrix}. \tag{3.19}$$

This serves to normalize the effects of the curve point forces.
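The deprojection and control point update steps of (3.16)-(3.19) condense into a short sketch. Below, `deproject_offset` solves the stacked least squares system (3.16) for one curve point, and `update_control_points` applies (3.17)-(3.19); the array shapes follow the text ($A$ is $(k+1) \times (n+1)$ and the offsets are $(k+1) \times 3$), and the small constant guarding the division is an added assumption for numerical safety:

```python
import numpy as np

def deproject_offset(t3, n3, b3, tl, nl, bl, tr, nr, br, dpl, dpr):
    # (3.16): stack the projected frame vectors of both views into a 4x3 system
    # and solve for (alpha, beta, gamma) in the least squares sense.
    M = np.vstack([np.column_stack([tl, nl, bl]),
                   np.column_stack([tr, nr, br])])
    rhs = np.concatenate([dpl, dpr])
    coeff, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    # Recombine in the 3-D curve frame: delta_p = alpha*t + beta*n + gamma*b.
    return coeff[0] * t3 + coeff[1] * n3 + coeff[2] * b3

def update_control_points(A, C, dP):
    dC = A.T @ dP                  # (3.17): pull point offsets back to control space
    D = A.T @ np.ones_like(dP)     # (3.19): per-control-point normalization matrix
    C_next = dC / (D + 1e-12) + C  # (3.18): element-wise divide (the oslash operator)
    # End control points track the end curve points directly (delta_c0 = delta_p0).
    C_next[0] = C[0] + dP[0]
    C_next[-1] = C[-1] + dP[-1]
    return C_next
```

Keeping $A$ precomputed means each iteration is two small matrix products, which is what makes the 20 Hz tracking loop reported in Section 3.7 plausible on modest hardware.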

Control Point Post Processing

Once the control points have been updated, they are added and removed based on inter control point distance. Points are pruned if they are too close together. Conversely, points are inserted where there is a large gap between neighboring points. The points are inserted such that the curve does not change shape. The goal is to keep the set of interpoint distances $\|c_k - c_{k-1}\|$ and $\|c_k - c_{k+1}\|$ within a certain range. This acts to limit the local curvature as well as the internal energy of the NURBS curve.

This mitigates the possibility that control points are bunched up while ensuring that there won’t be any gaps in the thread model. Even though straight segments of threads do not need to be modeled with as many points, the presence of the points enables the model to respond more quickly to local changes in the thread shape. Fig. 3.4 shows the composite result of fitting a NURBS curve to a suture thread.

3.7 Experimental Validation

The goal of the experimental validation is to evaluate the performance of the suture thread tracking algorithm. The following aspects are validated: initialization, motion tracking, and distortion tracking. In order to test the algorithm, a pair of calibrated stereo cameras (640 × 480 pixels, GRAS-03K2C-C, Point Grey Research


Figure 3.4: This is an image of the thread model overlaid onto the camera image. The semi-translucent green curve is the NURBS curve, while the red circles are control points.


Figure 3.5: The X-Y linear stage with a sample of thread. The orange and red patterned construction paper is meant to emulate some of the colors and patterns that might be present in an actual surgical environment. One end of the suture thread is affixed to the immobile piece of paper while the other is affixed to the linear stage. This allows the thread to deform as it is tracked.

Inc., Richmond, BC, Canada) were mounted above a motorized X-Y linear stage that allows the thread to be tracked as it moves. The experimental stage is shown in Fig. 3.5. The orange and red marbled paper was used in an attempt to mimic colors and patterns that might be found in a surgical setting. The computer completing the image processing contains an Intel Core 2 Quad processor running at 3 GHz with 8 GB of installed RAM. The segmentation algorithm was implemented using CUDA and ran on an Nvidia GTX 650 Ti with 2 GB of GDDR5 RAM. During the experiments, the segmentation algorithm as well as the tracking algorithm ran in parallel threads. The segmentation algorithm runs with a loop speed of 15 Hz. The suture tracking thread runs with a loop speed of 20 Hz. Since the suture thread tracking and the segmentation operate as parallel threads, the thread model lags behind the segmentation model in the video display thread.


Table 3.1: The measured ($\ell$) and detected ($\tilde{\ell}$) lengths of the suture threads. The standard deviation ($\sigma$) and the percent error (%) are also reported.

$\ell$ (mm)   $\tilde{\ell}$ (mm)   $\sigma$ (mm)   %
107           108.8                 2.41            1.68
175           181.4                 2.65            3.66
200           203.53                2.77            1.76

3.7.1 Quantitative Accuracy

In order to evaluate the accuracy of the thread tracking algorithm, the proposed algorithm was used to estimate the length of three pre-cut suture threads. The lengths

of the suture threads were 107 mm, 175 mm, and 200 mm. Each precut segment was positioned on the work space, initialized and tracked. This includes tracking the thread as it moves and deforms. One instance of each thread was measured in this way. The thread was twisted and kinked in the different images so that it did not lie

flat on the workspace. Additionally, one side of the suture thread was affixed to an immobile section of the background paper. This was done to ensure that the thread would deform significantly during tracking. This is illustrated in Fig. 3.5. The table moved its attached thread endpoint in a figure-8 pattern with a maximum speed of 11

mm/s. The length of the thread model during deformation was logged, and the mean and variance of the logged length are summarized in Table 3.1. The term $\ell$ represents the actual length of the suture thread, while $\tilde{\ell}$ is the average estimated length of the suture thread. The standard deviation of the measurements ($\sigma$) and the percent error (%) are also listed.

As the table indicates, the algorithm is capable of tracking the length of the suture thread to within a few percent and a few millimeters. The largest error was with the 175 mm thread length. This indicates that the error is neither a fixed value nor dependent on length. It is most likely that the shape of the thread itself introduces the most error and consequently is the source of most of the variance.


3.7.2 Calibrated Pattern Tracking

In order to validate the performance of the tracking against a known geometric shape, a predefined test pattern was used. The test pattern is shown in Fig. 3.6. The test pattern is defined as

$$a(\theta) = \begin{bmatrix} x_c(\theta) \\ y_c(\theta) \end{bmatrix} = \begin{bmatrix} 9.5 + 5 \cos\theta \sin 1.5\theta \\ 8.5 + 5 \sin\theta \sin 1.5\theta \end{bmatrix} \text{ mm}. \tag{3.20}$$

The domain of the curve parameter $\theta$ is $[-\pi/4, \pi]$. The curve length is 250.9 mm. The area of the pattern inside the 8 surrounding calibration circles is 106.5 mm × 106.5 mm. Since the geometry of the pattern is known, it is possible to directly compute the NURBS model fit with the curve geometry. The error was evaluated using 3 main criteria. The first criterion is the curve length in mm. This validates that the NURBS model of the suture curve is approximately the same length as the calibration curve.

The second criterion is the RMS curve error, defined as
$$e_{rms} = \sqrt{\frac{\int_{-\pi/4}^{\pi/2} \|p(u') - a(\theta)\|^2 \, ds}{\int_{-\pi/4}^{\pi/2} ds}}, \qquad ds = \frac{da(\theta)}{d\theta} d\theta, \tag{3.21}$$

where
$$u' = \underset{u}{\operatorname{argmin}} \left( \|p(u) - a(\theta)\| \right). \tag{3.22}$$

The RMS error measures the average error of the curve along the entire length of the calibration curve. The variable $u'$ is where the curve $p(u)$ is closest to the calibration curve $a(\theta)$. While the RMS error is useful, it is also important to know the maximum error between the calibration curve and the NURBS curve:

$$e_{max} = \max_{\theta} \left( \min_{u} \|p(u) - a(\theta)\| \right). \tag{3.23}$$

Figure 3.6: The thread calibration pattern. The 8 circular markers are for independent geometric calibration.

The maximum error helps to estimate the open jaw width that would be required to accurately grasp the suture thread.
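In discrete form, both metrics reduce to nearest-neighbor distances between dense samples of the two curves. The sketch below assumes uniform arc-length sampling of the calibration curve, so the $ds$ weighting in (3.21) reduces to a plain mean; it is a stand-in for the continuous definitions, not the dissertation's evaluation code:

```python
import numpy as np

def curve_errors(P, Acal):
    # P: (N, 3) samples of the NURBS model; Acal: (M, 3) samples of a(theta).
    d = np.linalg.norm(Acal[:, None, :] - P[None, :, :], axis=-1)
    nearest = d.min(axis=1)                  # (3.22): closest model point per theta
    e_rms = np.sqrt(np.mean(nearest ** 2))   # (3.21) under uniform ds weighting
    e_max = nearest.max()                    # (3.23)
    return e_rms, e_max
```

Dense, uniform sampling of $a(\theta)$ keeps the discrete nearest-point search a faithful approximation of the continuous minimization over $u$.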

Even though the calibration pattern is planar, it is reoriented such that the plane normal points in several different directions. The change in orientation is approximately ±15° in two orthogonal directions. The successful tracking of the different orientations is shown in Fig. 3.7. The green and blue circles indicate the

locations where the fit is the worst. The quantitative fit of the curve is evaluated and summarized in Table 3.2. The length is consistently tracked to within 5.0% of the actual pattern length. The gripper used for suture manipulation opens to a jaw width of approximately

10 mm. The maximum error of 5.6 mm combined with the RMS error of less than 2 mm means that the gripper should reliably be able to grasp the suture along the entire length of the curve. Based on the error in the length, the gripper should deliberately try to grasp the suture several mm away from the end of the suture. While it is true that the calibration pattern is planar, printing it out allowed for a specific geometry to be used and tested. Angling the curve plane with respect to the camera image


(a) baseline validation; (b) +15x validation; (c) −15x validation; (d) +15y validation; (e) −15y validation

Figure 3.7: The different calibration pattern angles. The calibration curve is a black line with a purple suture overlay. The green circle on the NURBS (purple) curve indicates where the error is the largest. The blue circle indicates where the error is the largest on the calibration pattern.

Table 3.2: The errors between the known curve and the fitted NURBS curve are summarized in the table below.

orientation   $\ell$ (mm)   % $e_\ell$   $e_{rms}$ (mm)   $e_{max}$ (mm)
0             251.7         0.32         1.4              4.7
+15x          262.9         4.75         1.2              2.9
−15x          254.6         1.45         1.6              5.6
+15y          248.6         −0.91        1.3              3.7
−15y          245.2         −2.30        1.3              3.0

plane allowed the depth of the camera tracking to be tested.

3.7.3 Qualitative Tracking Results

In addition to the quantitative tests, several qualitative tests were performed to evaluate the practicality of the NURBS initialization and tracking algorithms. The NURBS tracking algorithm is tested during thread motion, deformation, length changes, and knot tying. Many of these demonstrations are fully detailed in the video that accompanies this chapter.



Figure 3.8: Even though the intersecting threads are both strongly segmented, as shown in (a), the final initialized NURBS curve is only one of the two threads (b). The video included with this chapter demonstrates the evolution of the thread initialization.

The initialization algorithm is validated when there are multiple intersecting suture threads in the stereo image. Images of the intersecting threads and the initialized NURBS curve are shown in Fig. 3.8. Even though the threads intersect, initialization is capable of distinguishing between them and models the target thread with a NURBS curve. The included video demonstrates the entire initialization process.

The tracking algorithm is also validated by tracking a suture knot as it is drawn tight. Key images from the loop tightening are shown in Fig. 3.9. As the overhand knot loop is pulled tight, the control points get closer and closer together. Eventually, they are so close that the loop control points are deleted. The NURBS model then continues to track the suture as it moves and deforms. The video supplement animates the knot closing and thread movement during tracking. The video submitted with this chapter contains several additional components that are meant to illustrate the thread tracking algorithm. The video displays the tracking in both the segmented image space as well as in the raw images. During the video, the thread was moved using the linear stage. In order to validate the thread



Figure 3.9: As the suture knot is pulled tight in (a), the control points are drawn closer together. Eventually, the control points converge into a narrow area (b). When the area is small enough, the control points are pruned (c). When the tight knot is moved, it is then tracked as if it were the thread (d).


tracking while it changes length, one end of the thread was fixed to the moving stage,

while the other end of the thread was fed through a hole in an immobile background. The thread lengthens as it is pulled through the hole by the linear stage. The longer thread is then tracked as it is deformed. The thread is successfully tracked while the stage moves with a velocity of 9.7 mm/s. The final video component demonstrates successful thread grasping. The included video also demonstrates how a robotically controlled surgical gripper can reorient and pick up a suture thread that was localized and tracked using the proposed algorithm. The suture thread was threaded through a tissue phantom (part number SCS-10 by Simulab Corp., Seattle, Washington, USA). Once the user entered the suture thread seed point, the thread was traced and a grasp point near the end was identified. In addition to providing a grasp point, the NURBS curve model also provides a tangent to the suture thread. This allows the gripper to reorient as needed to grasp the suture. The robot uses visual feedback to translate and reorient until it is able to grasp the suture. The robot gripper coordinates were identified using colored circle markers attached to the gripper. The gripper in the video was designed in the lab [10] and is mounted on an ABB IRB140 industrial robotic assembly arm (ABB Ltd, Zurich, Switzerland). Once the arm grabs the suture thread, the thread is pulled to show that the gripper grasped the thread successfully.

3.8 Discussion and Conclusions

This chapter presents novel methods that can be used to track a complete surgical suture thread online in real-time using a calibrated stereo vision system. In the proposed method, the suture thread model is segmented and initialized from a single seed point on the thread. Once the suture thread is initialized, a NURBS model tracks the thread as it moves in real-time. The segmentation operates at 15 Hz while

the actual NURBS tracking operates at 20 Hz. The algorithm is robust against thread deformations, translations, and dynamic thread length. The algorithm is well suited to track a thread end as the thread is pulled through a tissue sample. The method was validated in bench top experiments using actual suture thread, while the test bench environment was colored so as to emulate surgical environments. The validation pattern confirmed that the tracking algorithm is accurate to within 5.6 mm, which should result in reliable suture grasping. The current limitations of the initialization algorithm include some sensitivity to undesired segmented pixels. It is also currently unable to detect when the thread intersects itself over a finite length (e.g. an overhand knot).

The main limitation of the thread tracking algorithm is that it does not add or remove control points based on local thread shape. This can result in reduced measurement accuracy of the actual thread since some bends might be poorly approximated. Another deficiency is that when the thread is moving, the NURBS model begins to lag behind the images. This is due to the parallel processing threads. One way around this might be to incorporate a velocity model (i.e. a Kalman filter) into the NURBS curve iteration. Further work will focus on actively tracking the suture thread as it is manipulated by surgical grippers. This will allow the thread to be tracked as it is tied into a suture knot. Additionally, the NURBS fitting algorithm can be improved to support tracking the thread using previous velocity information.

Chapter 4

Catadioptric Stereo Tracking for Three Dimensional Shape Measurement of MRI Guided Catheters⁷

The recent introduction of Magnetic Resonance Imager (MRI)-actuated steerable catheters lays the groundwork for increasing the efficacy of cardiac catheter procedures. The MRI, while capable of imaging the catheter for tracking and control, does not fulfill all of the needs required to identify and develop a complete catheter model. Specifically, the frequency response of the catheter must be identified to ensure stable control of the catheter system. This requires higher frequency imaging than the MRI can achieve. This work uses a catadioptric stereo camera system, consisting of a mirror and a single camera, in order to track an MRI-actuated catheter inside an MRI machine. The catadioptric system works in parallel with the MRI and is capable of recording the catheter at 60 fps for post processing. The accuracy of the catadioptric

⁷This chapter has been submitted to the IEEE International Conference on Robotics and Automation (ICRA) 2016 and is under review [90].

system is verified in imaging conditions that would be found inside the MRI. The stereo camera is then used to track a catheter as it is actuated inside the MRI.

4.1 Introduction

Cardiac catheters are an important surgical tool used for a wide variety of procedures such as angioplasty and balloon septostomy. Recently, Magnetic Resonance Imaging (MRI)-actuated catheters have been developed [91]. The purpose of this paper is to introduce an external camera system that is used to track a catheter as a continuum robot during actuation inside the MRI. This work distinguishes itself from previous research by using a single camera and a single mirror to create an orthogonal view catadioptric stereo system. In this application, the camera system images the catheter while the MRI provides the primary force of actuation. The MRI machine is capable of completing real time imaging and of distinguishing different tissue types inside the patient [92, 93]. The purpose of using an external camera tracking system is that the physical characterization of the catheter requires higher frequency imaging than the MRI can provide. An external camera system can track the catheter at high speeds while also validating the MRI tracking. Even though the camera imaging system would not be used during cardiac catheter procedures, it is necessary for catheter development, both for accurately modeling and for controlling the catheter. The camera catheter tracking provides a set of validation data which can be compared to the results of tracking the catheter with MRI images, as well as generating high speed catheter images for use in frequency response analysis. This frequency response model is required to ensure stable control of the catheter system. The external camera is an important component of the development cycle for an MRI actuated catheter. There are safety precautions associated with MRI machines that make it difficult


to use standard camera systems. In particular, the ambient magnetic field of the

MRI (around 3 Tesla) is powerful enough to dislodge flecks of metal and convert them into high speed projectiles. Consequently, surgeons, operators, and researchers must exercise caution when working inside MRI suites, as introduced metallic objects can be sucked inside the MRI machine if they are too close. The safety concerns of

the MRI machine, coupled with the ferromagnetic material inside most camera systems, necessitate that the camera stay more than 6 m away from the bore (center) of the MRI machine. At this distance, the forces due to the MRI ambient magnetic field are considered to be safe (< 300 Gauss). This far but necessary distance between the camera and the catheter requires using a long focal length lens. When using a long focal length at a far distance, it can be difficult to detect the movement of the catheter as it moves away from and towards the camera (i.e. the catheter can move ±12.5 cm, which is approximately 2% of the distance between the camera and the catheter). While one solution would have been to use non-ferrous MRI-compatible cameras inside the

MRI bore (such as those by MRC Systems GmbH, Heidelberg, Germany or Qualisys AB, Gothenburg, Sweden), these cameras do not offer the full HD (1920 × 1080) resolution at 60 fps that is required for successful frequency characterization of an MRI actuated catheter. Additionally, placing the cameras inside the MRI machine may lead to electronic interference which may reduce MRI image quality. To combat this imprecise depth detection while preserving a large resolution image, a mirror was introduced to provide an additional catheter view that is oriented at 90° to the first one. This creates what is known as a catadioptric stereo system. The catadioptric stereo system developed in this work places the mirror about 30 cm from the target while the camera is by necessity approximately 6 m from both the tracking target and the mirror. In this paper, the geometry and physical characteristics of the system are explored. The geometric accuracy and tracking capabilities of the system are validated using test patterns, and successful catheter tracking is


demonstrated.

The remainder of the paper is organized as follows. Section 4.2 discusses prior work on catheter control and tracking, followed by prior work on catadioptric stereo systems. Section 4.3 shows and discusses the implemented catadioptric stereo system while explaining its geometry and calibration methods. Section 4.4 details the experimental validation of the catadioptric system. Section 4.5 introduces the catheter model and demonstrates successful tracking. Finally, Section 4.6 outlines the main results and how they will be used in support of further catheter development and testing.

4.2 Background

This project spans two distinct areas of research: catheter modeling and tracking, and the geometry of catadioptric stereo imaging. The next two subsections explore both aspects in detail.

4.2.1 Catheter Control and Tracking

Catheter robotics are being researched as a viable supplement to the manual control that surgeons are currently required to utilize during catheter procedures

[91, 94, 95]. One particular procedure that would greatly benefit from catheter automation is cardiac ablation, which is used to treat atrial fibrillation [27]. During typical treatment for atrial fibrillation, a surgeon will manually direct a catheter up the femoral vein, inside the right atrium, and finally through the interatrial septum into the left atrium. Once there, the surgeon will use the catheter to ablate the cardiac tissue. The purpose of the ablation is to stop extraneous pacing signals from transmitting through the left atrium and causing irregular heart beats [27]. The path of ablation is carefully picked, and automating the catheter during the ablation would allow for that path to be accurately followed as well as ensuring that the ablated


paths are continuous and well defined. High quality path execution reduces the risk

of fibrillation recurrence as well as complications [27]. Many catheters are underactuated, which means that in order to accurately control the catheter, it is important that active sensing is used to localize the catheter and correctly identify its local range of motion [96].

Previous authors have looked at the problem of detecting catheters by treating them as continuum robots. These robots are often actuated using cannulas or guide wires [97, 98]. Catheters are long and thin, which can lead to complications when using stereoscopic vision. In particular, the long and thin structure of the catheter may fall on the epipolar lines of the stereo vision system [89]. This may lead to ambiguities when trying to deproject the catheter images into 3-dimensional Euclidean space. One solution is to use a set of orthogonal cameras. When the viewpoints of the stereo camera system are at 90°, the epipolar lines correspond to nearly orthogonal directions in camera space; this reduces the ambiguity that often occurs during stereo imaging and tracking. While one common problem with disparate views during stereo vision is the radically different appearance of the tracked object in each image, the symmetric shape of the catheter means that it is easily detectable in both images even when using orthogonal viewpoints [99].

The restrictive nature of the MRI required utilizing a specialized catadioptric stereo system. A planar mirror allowed two orthogonal camera views to be utilized while no camera was in fact placed near the MRI bore.

4.2.2 Catadioptric Stereo

Catadioptric stereo imaging systems have some distinct advantages when compared to a dual camera stereo system. One of the main advantages is that since the left and right cameras both share the same physical camera, the geometry is shared (i.e. the camera intrinsic and distortion parameters are the same in both the main image and


the stereo image). One other common problem in stereo camera systems is image

synchronization. In a two camera system, the cameras must be synchronized using an external clock. Since there is only one imaging device in a catadioptric stereo system, image synchronization is guaranteed between the different image frames. Previous works on catadioptric camera systems validate the geometry and test the results of

the calibration [100, 101, 102, 103]. Many of these examples used multiple mirrors to define the system geometry. Some catadioptric stereo systems even use non-planar mirrors [100]. A diagram of the planar catadioptric system is shown in Fig. 4.1. Notice that the virtual camera is in a left-handed coordinate frame. This is caused by the mirror flipping the direction of both the x and z vectors, leaving y in the same

direction. The transformation between the real camera frame and the virtual camera frame is defined as follows [101]:

$$G = \begin{bmatrix} I - 2nn^T & 2dn \\ 0 & 1 \end{bmatrix}. \tag{4.1}$$

Here $I$ is the $3 \times 3$ identity matrix, $n$ is the mirror normal vector, and $d$ is the distance between the camera and the mirror (the minimum distance between the plane and the origin point of the camera). The determinant, $|G| = -1$, captures the fact that the transform changes the camera frame from a right-handed one to a left-handed one. It is also of note that $G$ is its own inverse (i.e. $G^{-1} = G$). There are several aspects of this system design that distinguish it from previous work. While catadioptric stereo systems have been designed and analyzed before, this paper looks at using such a system in order to track and model a 3-dimensional object using two nearly orthogonal views.
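These properties are easy to verify numerically. The sketch below constructs $G$ from (4.1) under the nominal geometry used later in Section 4.3 (a mirror normal at 45° in the x-z plane and $d = 3\sqrt{2}$ m) and checks the stated facts $|G| = -1$ and $G^{-1} = G$; it is an illustrative check, not part of the calibration pipeline:

```python
import numpy as np

def mirror_transform(n, d):
    G = np.eye(4)
    G[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)  # reflection part, I - 2nn^T
    G[:3, 3] = 2.0 * d * n                        # translation part, 2dn
    return G

n = np.array([np.sqrt(2) / 2, 0.0, np.sqrt(2) / 2])
G = mirror_transform(n, 3.0 * np.sqrt(2))
assert np.isclose(np.linalg.det(G), -1.0)         # left-handed virtual frame
assert np.allclose(G @ G, np.eye(4))              # G is its own inverse
print(np.round(G, 6))                             # reproduces the nominal (4.2)
```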



Figure 4.1: A planar catadioptric system has one real camera (solid) and one 'virtual' camera (dashed). A point $p$ in real space projects onto the camera along the red ray. This point reflects off the mirror (black, with normal $n$) along the green ray. The distance between the mirror and the camera origin, $d$, is aligned to the vector $n$. The apparent location of $p$ in the mirror is the point $p'$, even though the actual point of reflection is $p_m$. The dashed green rays show how the point is imaged in the virtual camera. Notice that the virtual camera has a left-handed coordinate system. This diagram is not drawn to the scale of the system discussed here: in the actual system, the real camera is nearly 6 m away from the point $p$, while the mirror is only 0.3 m from the point $p$. It is not possible to render the system to scale in a meaningful way in the available space.


(a) Catheter Tank and Mirror (b) Camera and LED

Figure 4.2: The components of the catadioptric stereo imaging system include the water tank and mirror (a) as well as the camera and LED (b). The grid of circles in the tank are for calibrating the mirror. Only one LED is shown in (b).

4.3 Catadioptric Hardware Set Up

The camera system being utilized is also unique in that the mirror is much closer to the object being tracked than the camera is. Notice that the mirror is angled at 45° to the camera (i.e. $n = [\sqrt{2}/2, 0, \sqrt{2}/2]^T$). This means that $d = 3\sqrt{2}$ m. The result is that the transform is nominally:

$$G = \begin{bmatrix} 0 & 0 & -1 & 6 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 6 \\ 0 & 0 & 0 & 1 \end{bmatrix}. \tag{4.2}$$

While this is the nominal transformation created by the mirror, there is variability in both the camera distance and the mirror mounting angle. The actual stereo calibration must be robust against these variations. The lens used had a variable focal length (12.5-75 mm) and was set to approximately 50 mm. The scene being imaged was in fact two scenes (original plus mirror image), which are at different distances from the camera. The difference is approximately the distance from the scene to the mirror, as shown by the point $p'$ in


Fig. 4.1. Since telephoto lenses, such as the one used for imaging the catheter, tend

to have a narrower depth of field (i.e. a narrower range of distances where the scene appears to be in focus), the f-stop (defined as the ratio of the focal length to the aperture diameter) had to be increased in order to increase the depth of field of the lens. Increasing the f-stop shrinks the lens iris and restricts the amount of light hitting the

camera sensor. Doing this increases the depth of field so that both the catheter and its mirror image are in focus. The eventual goal of the camera tracker is to track the frequency response of the catheter. To that end, a 60 fps high definition USB3 camera with a resolution of 1080 × 1920 pixels was used. The camera was manufactured by Point Grey (Richmond, BC, Canada).

The combination of the high frame rate, telephoto lens, and large f-stop required significantly more light than is typically needed for imaging applications. Due to the constraints imposed by the MRI, lighting sources had to be kept 6 m away from the MRI bore as well. While some light sources are not considered to be ferromagnetic, the DC power supply current may induce RF interference as well as introduce its own magnetic fields inside the MRI. Minimizing electrical interference inside the MRI is nearly as important as minimizing the dangers of ferrous materials inside the MRI suite. The long lighting distance, coupled with the minimal light inside the MRI room and the high lighting requirements, necessitated high powered spot lights to provide the necessary light at the correct distance. A pair of 100 Watt LEDs, outputting approximately 9000 lumens each, were equipped with plastic Fresnel lenses in order to adequately light the scene. Customized lighting was used to avoid introducing a significant amount of ferromagnetic material into the MRI suite because, while it is true that the lights and camera are a safe distance away from the MRI unit, it is still a safety risk that must be respected. Pictures of the set up components are shown in Fig. 4.2. A diagram of the set up geometry inside the MRI suite is shown in Fig. 4.3.



Figure 4.3: The diagram above gives a sense of the scale of the catadioptric imaging system. The MRI bore (left cylinder) has a diameter of 70 cm and contains the mirror (dashed box) as well as the catheter (red vertical line in front of the mirror). The camera (far right) is approximately 6 m away from both the catheter and the mirror. The dashed vertical line between the camera and the MRI marks the zone where the ambient magnetic field rises above 300 Gauss. While the dashed line marks a region of reasonable safety, hospital regulations require that any potentially hazardous metallic objects be kept as far away from the MRI as possible. The LED lights (not shown) flank the camera.

4.3.1 Catadioptric Calibration Process

In order to guarantee that the mirror calibration is robust even in the MRI, a calibration pattern was placed in the scene with the catheter (as shown in Fig. 4.2 and Fig. 4.6). The pattern, a grid of black circles, is located such that it is out of the way of the catheter for nearly its full range of motion. When deployed inside the MRI, the camera intrinsic parameters are found using a chessboard calibration pattern. The intrinsics are found at the beginning of each experiment to account for variation in the actual camera distance, lens focus, and lens zoom. The intrinsics are computed using functions found in the OpenCV computer vision library [104]. Once the intrinsic camera parameters are found, the camera image is rectified. The mirror calibration pattern is then manually selected from the rectified image to seed the stereo calibration. Once the seed points in the front view are selected, the transform between the calibration pattern and the camera is computed in free space.
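As a concrete illustration of this calibration step, the following is a minimal Python sketch of the chessboard intrinsic calibration and rectification using OpenCV; the board dimensions, square size, and file names are illustrative assumptions rather than the values used in the actual experiments.

import cv2
import numpy as np

board = (9, 6)                 # inner corners per row and column (assumed)
square = 10.0                  # square edge length in mm (assumed)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for name in ["calib_00.png", "calib_01.png"]:     # hypothetical image files
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsic matrix K and distortion coefficients; the rectified image that
# seeds the mirror calibration is then produced with cv2.undistort().
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
rectified = cv2.undistort(cv2.imread("scene.png"), K, dist)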

This transform is Glo. The next step is to find the transform between the calibration pattern and the mirror image. This transform is:

Gro = G Glo (4.3)


(a) No Water (b) Half Water

(c) Full Water

Figure 4.4: The Lego block calibration pattern as imaged by the catadioptric stereo system. (a) has no water in the tank, (b) is halfway filled with water, and (c) has the tank filled with water. All three images have been cropped for space. Images (a) and (c) both show good alignment. Image (b) shows good alignment for the blocks that are either fully submerged or dry; the middle block is nearly at the waterline and consequently shows a significant amount of distortion, which interferes with the tracking accuracy. The lighting and camera distances matched those of the catheter when it is being imaged inside the MRI machine.

Here G is the transformation between the camera and its mirror image, as defined in (4.1). It can be parameterized in terms of spherical coordinates x = [θ, φ, r]^T, where

n(x) = [\cos\theta \cos\phi, \; \sin\theta \cos\phi, \; \sin\phi]^T,
d(x) = r. (4.4)

The advantage of defining d and n using (4.4) is that the normal vector n is guaranteed to be a unit vector. Solving for the spherical point x will also characterize the catadioptric stereo geometry. The derivatives of the normal vector n and the distance d can be computed in terms of the spherical coordinates. Further computations (omitted for brevity) are performed to compute the image derivatives of p′ in


terms of the spherical coordinates.

\frac{\partial p'}{\partial x} = \frac{\partial p'}{\partial G} \begin{bmatrix} \frac{\partial G}{\partial n} & \frac{\partial G}{\partial d} \end{bmatrix} \begin{bmatrix} \frac{\partial n}{\partial x} \\ \frac{\partial d}{\partial x} \end{bmatrix} (4.5)

The matrix ∂p′/∂G is the derivative of the image point p′ with respect to the mirror transform G. The derivative of G is coupled with the derivatives of d and n to create the derivative of the image point as a function of x. If multiple points are known, then the derivative of each point can be computed and combined.

\begin{bmatrix} \frac{\partial p'_0}{\partial x} \\ \frac{\partial p'_1}{\partial x} \\ \vdots \\ \frac{\partial p'_n}{\partial x} \end{bmatrix} = \begin{bmatrix} \frac{\partial p'_0}{\partial G} \\ \frac{\partial p'_1}{\partial G} \\ \vdots \\ \frac{\partial p'_n}{\partial G} \end{bmatrix} \begin{bmatrix} \frac{\partial G}{\partial n} & \frac{\partial G}{\partial d} \end{bmatrix} \begin{bmatrix} \frac{\partial n}{\partial x} \\ \frac{\partial d}{\partial x} \end{bmatrix} (4.6)

The point set Jacobian can be defined as

J = \begin{bmatrix} \frac{\partial p'_0}{\partial G} \\ \frac{\partial p'_1}{\partial G} \\ \vdots \\ \frac{\partial p'_n}{\partial G} \end{bmatrix} \begin{bmatrix} \frac{\partial G}{\partial n} & \frac{\partial G}{\partial d} \end{bmatrix} \begin{bmatrix} \frac{\partial n}{\partial x} \\ \frac{\partial d}{\partial x} \end{bmatrix}. (4.7)

If enough points are selected, the Jacobian becomes overdetermined and a pseudoinverse can be used to update the spherical point x.

x' = \lambda J^{+} (P'_{measured} - P'_{estimated}) + x (4.8)

Here x′ is the new mirror information, P′_measured is the set of points detected in the image, and P′_estimated is the set of points estimated from G. The variable λ is a scaling parameter used to ensure stability. This allows the calibration to constantly


Table 4.1: Tracking accuracy of the test pattern.

             n_l      ē_rms (mm)   σ²_rms (mm²)   max e_rms (mm)
no water     17999    0.20         3.18E-4        0.27
half water   11343    0.81         0.37           4.44
full water   10665    0.36         3.78E-4        0.46

iterate against new images by minimizing the error between the points P′_estimated (computed from the previous calibration) and the detected points P′_measured. After x converges, the normal vector n and the mirror distance d can be computed.
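To make the update in (4.8) concrete, the following is a minimal numerical sketch of one damped pseudoinverse step of the mirror calibration, assuming a pinhole camera with an illustrative intrinsic matrix K and known 3-D pattern points expressed in the camera frame; the finite-difference Jacobian stands in for the analytic derivatives omitted above.

import numpy as np

K = np.array([[2000.0, 0, 960], [0, 2000.0, 540], [0, 0, 1]])  # assumed intrinsics

def mirror_G(x):
    """Reflection transform from spherical mirror parameters x = [theta, phi, r]."""
    th, ph, r = x
    n = np.array([np.cos(th) * np.cos(ph), np.sin(th) * np.cos(ph), np.sin(ph)])
    R = np.eye(3) - 2.0 * np.outer(n, n)       # reflection about the mirror plane
    return R, 2.0 * r * n                      # rotation part and translation 2dn

def project_mirrored(x, P):
    """Project the mirror images p' of the 3-D points P into the camera."""
    R, t = mirror_G(x)
    Pm = (R @ P.T).T + t
    uv = (K @ Pm.T).T
    return (uv[:, :2] / uv[:, 2:3]).ravel()    # stacked pixel coordinates

def update(x, P, p_measured, lam=0.5, eps=1e-6):
    """One damped pseudoinverse step of (4.8)."""
    p_est = project_mirrored(x, P)
    J = np.empty((p_est.size, 3))
    for i in range(3):                         # finite-difference Jacobian
        dx = np.zeros(3); dx[i] = eps
        J[:, i] = (project_mirrored(x + dx, P) - p_est) / eps
    return x + lam * np.linalg.pinv(J) @ (p_measured - p_est)

# Toy usage: recover a perturbed mirror estimate from simulated detections.
P = np.array([[0.0, 0.0, 0.3], [0.05, 0.0, 0.3], [0.0, 0.05, 0.3]])  # assumed pattern (m)
x_true = np.array([0.0, np.pi / 4, 0.2])
x = update(x_true + np.array([0.02, -0.03, 0.01]), P, project_mirrored(x_true, P))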

4.4 Experimental Validation

The geometric theory outlined above must be validated before the stereo system can be used for catheter tracking. To ensure that the catheter tracking results are meaningful, the accuracy of the catadioptric stereo system was verified using a test pattern. The MRI uses water molecules to complete its imaging. Since the catheter contains no water intrinsically, it had to be submerged in a water bath in order to be visible to the MRI. The water, while necessary for MRI imaging, creates a water/plastic/air interface which can refract light; the catadioptric validation was therefore completed with and without water to show that the presence of water does not significantly affect the calibration results. The accuracy of the calibration was validated using black and white Lego bricks mounted on an XYZ translational stage. The size of a single 1x1 brick is 8 × 8 × 9.6 mm. There were a total of 3 black Lego bricks spaced throughout a face of white bricks. Three bricks were mounted together to create a test pattern on a plane that spans a rectangle of size 48.0 × 67.2 mm. The calibration is verified as the test pattern is moved throughout a 1700 mm³ volume. This allowed for the validation of both the local precision and global accuracy of the catadioptric stereo system. Three


different test runs were recorded: in the first run there was no water in the tank; in the second run the tank was halfway filled with water; and in the final run the tank was completely filled with water. The different water levels with the bricks are shown in Fig. 4.4. While the images were not captured inside an MRI machine, the lighting conditions and system geometry mimicked the setup required inside the MRI suite.

Table 4.1 summarizes the results. Here n_l is the total number of images processed during the motion, ē_rms is the average RMS error between the test pattern and the fitted points (the fit was found using an SE(3) transform), σ²_rms is the variance of the RMS error, and max e_rms is the maximum error over the course of the calibration. From the results, it can be seen that the RMS error is less than 1 mm when the

blocks are either dry or fully submerged. The only time the error peaks above 1 mm is when the black block is at the waterline. Due to internal reflection, the block appears distorted, which causes the tracker to eventually track the waterline instead of the black block. Since the focus of this experiment was to measure the potential error caused by the water's refraction, and not to track objects near the waterline, these results were considered a success; even when the object is fully submerged, there is still a linear transform between it and the camera. It is, however, important to note the risks associated with tracking objects near the waterline. Whenever possible, tracked objects should be either dry or completely submerged. The stereo system is now validated both with and without water immersion.

4.5 Catheter Tracking

Now that the catadioptric stereo system has been validated, it can be used to track the catheter. The catheter, shown in Fig. 4.5, has a total of 3 coils: one axial and two side coils. All three coils are located at nearly the same location on the catheter. The coils are embedded in the models used by collaborators [105, 96, 106]. What follows


is a brief description of the catheter model.

Figure 4.5: The catheter is approximately 110 mm long, with orange paint applied to the base, the coil set, and the tip. The orange paint acts as a guide for tracking the catheter. The orange and blue wires near the tip, while not used for computer vision tracking, allowed the users to determine if the coil reoriented after energizing the side coils. The orange paint creates a hard film upon drying which would interfere with the catheter mechanics if the entire length were painted. By only painting the coils and the two ends, the potential effect of the paint on the catheter is minimized.

4.5.1 Catheter Model

The catheter is a long narrow tube. As the deflection angle is sometimes large, the tube cannot be analyzed as a single beam using beam theory. Therefore, a finite-differences approach is applied to analyze the deflection of the catheter by dividing it into short segments. For each finite segment, a quasi-static torque-deflection equilibrium equation is calculated using beam theory and the Bernoulli-Euler law. Using the deflection displacements and torsion angles, the kinematic model of the catheter system is derived by considering each segment as a robot link, with the two ends of the segment as the link's joints. The homogeneous transformation relationship between the two ends of each segment is calculated. A more detailed analysis of the catheter is available in [105, 96, 106].
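The kinematic chain described above can be sketched in a few lines; the bending and torsion angles below are placeholders for the quasi-static beam-theory solution of [105, 96, 106], so the snippet only illustrates how the per-segment homogeneous transforms compose.

import numpy as np

def segment_transform(length, bend, torsion):
    """Homogeneous transform across one segment: torsion about the local z axis,
    then a planar bend, then translation along the deflected tangent."""
    ct, st = np.cos(torsion), np.sin(torsion)
    cb, sb = np.cos(bend), np.sin(bend)
    Rz = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry
    T[:3, 3] = T[:3, :3] @ np.array([0.0, 0.0, length])
    return T

# Chain the segments base-to-tip, treating each as a robot link (20 segments
# of a 110 mm catheter, with a placeholder 2 degree bend per segment).
T = np.eye(4)
for bend in np.full(20, np.deg2rad(2.0)):
    T = T @ segment_transform(110.0 / 20, bend, 0.0)
print(T[:3, 3])                                # tip position in the base frame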

4.5.2 Catheter Imaging and Tracking

The catheter being tracked is labeled with orange paint on the coils as well as the base and the tip. The orange paint created a unique color that was easy to identify in the catheter images captured during MRI experiments. As such, the orange points were used as a baseline for tracking the catheter. The detection of curvilinear objects like the catheter has been studied previously by numerous authors [36, 74, 77]. The orange paint successfully tracks both endpoints along with the coil. The tracked points are fitted to a quadratic spline in order to measure the length of the catheter.

Figure 4.6: The catheter as it is imaged by the camera in the MRI suite. Notice the position of the stereo calibration markers. They are located as far out of the way as reasonably possible. This ensures that the calibration can be iterated even as the catheter moves. The orange paint guides are still visible in both the frontal (left) and lateral (right) view of the catheter. This picture is of the catheter immediately after energizing. Consequently, the catheter is in the process of re-aligning its axial coil to the MRI ambient magnetic field.
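As an illustration of the marker-based measurement, the following sketch segments the orange paint with an HSV threshold and fits a quadratic in image coordinates; the threshold bounds and file name are assumptions, and the actual measurement triangulates the frontal and lateral views before fitting.

import cv2
import numpy as np

img = cv2.imread("catheter_view.png")          # hypothetical frame
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))   # assumed orange band
ys, xs = np.nonzero(mask)

# Fit x = q(y) as a 2nd-order polynomial, then integrate the arc length
# ds = sqrt(1 + q'(y)^2) dy between the base and tip markers.
q = np.polyfit(ys, xs, 2)
y = np.linspace(ys.min(), ys.max(), 500)
dq = np.polyval(np.polyder(q), y)
length_px = np.trapz(np.sqrt(1.0 + dq**2), y)
print("catheter image length (pixels):", length_px)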

The fitted catheter image is shown in Fig. 4.6. The reason that the catheter is only painted in three locations is that painting creates a coating which changes the material properties of the catheter. Painting the catheter at the ends as well as on the coil allows the catheter to be labeled with a minimal effect on the catheter mechanics. The video accompanying this paper shows the catheter being tracked as it moves due to an energized axial coil. Table 4.2 summarizes the measured length of the catheter during tracking. Here n_c is the number of images in which the catheter is measured,


Table 4.2: Tracked Catheter Length

n_c    ℓ̄ (mm)   σ²_ℓ (mm²)   min ℓ (mm)   max ℓ (mm)
858    107       2            105          111

ℓ̄ is the average length of the catheter, σ²_ℓ is the variance of the catheter length, and min ℓ and max ℓ are the minimal and maximal estimated lengths of the catheter, respectively. The average length of the imaged catheter was 107 mm, while the actual measured length of the catheter was 110 mm. This is within 2.7% of the catheter length. The average measurement is likely short because the quadratic curve is only a 2nd-order polynomial fit and does not adequately approximate the curvature of the catheter geometry. Utilizing a higher-order model or even a NURBS model, as well as incorporating geometric models, might estimate the catheter length more accurately. The still image shown in Fig. 4.6 shows the catheter as it moves and is tracked during energization.

4.6 Conclusions and Future Work

Catadioptric stereo tracking proved to be a viable solution to the unique problems associated with conducting research inside an MRI machine. The Lego marker tracking shows that the catadioptric stereo system is accurate to within 1 mm. The catheter motion video shows that the tracking system is capable of following the catheter as it moves inside the MRI suite. This research is part of an ongoing project. The next step for the vision tracking is to align the camera coordinate frame to a base frame defined by the catheter holder. This would allow the relative position of the catheter to be identified using an intuitive reference frame. Another future plan is to improve the performance of catheter tracking by implementing a catheter model which is fitted to the observed image and incorporated with the catheter kinematic model in order to complete predictive catheter tracking. This includes using a higher-order curve to track the catheter. Additional segmentation of the catheter body, acting in conjunction with the marker segmentation and a stochastic filter, would allow the entire catheter to be accurately tracked. One final future goal is to use the computer vision tracking to identify the frequency response characteristics of the catheter during coil excitation. This will be accomplished by synchronizing the catheter excitation with the camera. Completing the above steps will allow for characterization and control of an MRI-actuated catheter system.

Chapter 5

Modeling of Needle-Tissue Interaction Forces During Surgical Suturing8

This chapter presents a model of the needle-tissue interaction forces that a rigid suture needle experiences during surgical suturing. The needle-tissue interaction forces are modeled as the sum of lumped parameters. The model has three main components: friction, tissue compression, and cutting forces. The tissue compression force uses the area that the needle sweeps out during a suture to estimate both the force magnitude and the force direction. The area that the needle sweeps out is a direct result of driving the needle in a way that does not follow the natural curve of the needle. The friction force is approximated as a static friction force along the shaft of the needle. The cutting force acts only on the needle tip. The resulting force and torque model is experimentally validated using a tissue phantom. The results indicate that the proposed lumped parameter model is capable of accurately modeling the forces experienced during a suture.

8This chapter was originally published as a conference paper at the 2012 International Conference on Robotics and Automation (ICRA). © 2012 IEEE. Reprinted with permission from [55].

5.1 Introduction

Even with the assistance of robotic surgical systems, suturing is a challenging and

time-consuming task during Minimally Invasive Surgery (MIS). Therefore, automating the suturing task is desirable. The robot pre-plans the suture motion, which, when combined with force feedback, will allow the robot to minimize any tissue trauma that might occur as a result of suturing. In order for the robot to successfully complete an automated suture, it must understand the types of forces and torques that a needle experiences during a typical suture. When a surgeon drives a needle through tissue during suturing, he inserts the needle with the tip normal to the tissue surface. The needle path also follows the curve of the needle [107]. Following these guidelines reduces the tissue trauma and aids healing. During a surgical suture, the surgeon can re-grasp the needle as necessary. He can even re-grasp the needle through the wound that is being sutured. Laparoscopic sutures are more difficult due to the reduced dexterity of the surgeon (there are only 4 degrees of freedom available due to the instrument portal). Despite the reduction in dexterity, surgeons still adhere to the same principles of needle driving as they would in an open suture. This can be fatiguing for surgeons due to the combination of the repetition required by suturing and the difficulties associated with the reduced degrees of freedom. One method of increasing the dexterity of a laparoscopic suture is to use a robotic assistant such as the daVinci® system (Intuitive Surgical, Sunnyvale, California). Even though the daVinci® robot improves the surgeon's dexterity, the surgeon must still complete the entire suture manually. A robot that can intelligently drive a suture needle could reduce surgeon fatigue while increasing dexterity. In addition to reducing the surgeon's fatigue, automated suturing also has the potential to improve the speed of the suture. This could significantly decrease operation times and consequently improve the patient's postoperative outlook. The goal of this chapter is to analyze the interaction forces experienced when a


rigid curved suture needle is driven through a tissue sample. This includes modeling the forces and torques generated during a suture using a computationally efficient lumped parameter model. Previous studies on the different techniques used to model needle-tissue forces are discussed in section 5.2. This is followed in section 5.3 by a discussion of the needle motion geometry and how it might impact the forces that the needle could sense as it cuts the tissue. Next, the lumped models that are used to describe the tissue forces are developed in section 5.4. Experimental validation of the needle force models, with a detailed analysis and evaluation, is presented in section 5.5. The chapter concludes in section 5.6 with final comments and an outline of planned future work.

5.2 Needle Force Modeling

There has been significant work on modeling the needle-tissue interaction forces during either straight or flexible needle insertion [25]. Many of these models are intended for brachytherapy, which involves the precise placement of radioactive beads that will irradiate the surrounding tissue and kill any nearby cancerous cells. For example, Chentanez et al. have modeled the tissue deformation of the prostate gland during the insertion of a straight hollow needle [51]. The modeling is performed using a three-dimensional Finite Element Model (FEM) where the element mesh updates dynamically as necessary. This can be a very accurate method for modeling both material deformation and the forces generated during the deformation. One disadvantage of using a complex three-dimensional FEM is that it can be difficult to solve in real time, as would be needed for an automated needle path plan. Alterovitz et al. have worked on similar modeling of the prostate gland [26]. Their models use a two-dimensional FEM instead of a three-dimensional one. This is one way of improving computational efficiency at the cost of model accuracy.


There have been many papers published that use FEM for modeling needle-tissue interaction forces. Since material properties important to the FEM calculations, such as Young's modulus and the Poisson ratio, can vary significantly between tissue types, Maghsoudi et al. published a study that analyzes the sensitivity of the FEM algorithm to parameter deviation [52]. A significant amount of the force that a needle experiences is concentrated at the tip. This means that it is important to model the tearing event that the tissue undergoes [53]. Okamura et al. use a lumped force model to simulate the axial forces a straight rigid needle experiences when it is inserted into a liver [54]. In the lumped force model each

f represents a contribution to the net force from a different source.

f_needle(x) = f_friction(x) + f_cutting(x) + f_stiffness(x) (5.1)

Compared to FEM-based analysis, the lumped needle force model is computationally efficient. A detailed analysis of the needle-tissue interaction forces sensed during surgical suturing is unavailable in the literature. Many of the previous works model the forces experienced by needles used for biopsies and therapeutic applications. These needles are long, straight, hollow, and potentially flexible. The needles used for performing a suture are short, curved, rigid, and solid. Since suture needles are inflexible, they will not comply with the tissue during a suture. Most lumped models available in the literature do not model the off-axis forces resulting from tissue displacement. This force is critical for suture applications, as will become evident in the following analysis (e.g. Fig. 5.9). Also, FEM models are computationally intensive and therefore not suitable for use as a component of an inline needle control scheme.


5.3 Suture Needle Motion Model

The suture needle is approximated as a circular arc [107]. The canonical motion of the needle is shown in Fig. 5.1. This figure is drawn with respect to the geometric center of the needle (C). The canonical motion can be expressed with two components. The first component is a rotation about the center of the needle (ω). The second component is the velocity (v) of the geometric center of the needle. The radius of the needle is defined as r. The coordinate frame defined by x_f and y_f is the coordinate frame of the force and torque measurement. This frame is attached to the base of the needle, but is aligned to the world frame, x_w and y_w. The tissue corresponds to the shaded region.

5.3.1 Ideal Needle Motion

Since it has been established that the best sutures are those that follow the natural curvature of the needle [107], the ideal motion of the needle is to move in a circular arc about the center. This motion is shown in Fig. 5.2(a). This motion plan reduces the velocity v to zero and the needle simply rotates with a constant speed (ω) about

the point O. In this case, O is aligned with the needle center (C). This needle motion is naturally planar.

5.3.2 Non Ideal Needle Motion

When a robot tries to perform an ideal needle motion, uncertainty in the needle mount would result in non-ideal needle motions. As the needle is assumed to be rigid, the

non-ideal motion is related to the motion of the geometric center of the needle. Fig. 5.2(b) shows a snapshot of the motion. The center of the needle (C) rotates about

the center of motion (O) with a radius rc. The starting angle of the geometric center

is the angle φ0. The angle κ corresponds to the relationship between the angle of the


Figure 5.1: Canonical Needle Motion. The needle rotates with an angular velocity of ω. At the same time the geometric center (C) has a velocity of v. The tissue is indicated by the green shaded region.


(a) Ideal (b) Non-Ideal

Figure 5.2: (a) Ideal Motion Model. The needle rotates about the center (O) in which the velocity v is 0. (b) Non-Ideal Motion Model. The center, C, rotates about the center of motion (O) with a radius rc. The starting angle (φ0) and the error angle (κ) are both geometric properties of the system. This particular image exaggerates the magnitude of rc and κ.

base of the needle and the angle of the geometric center of the needle. As the needle

is driven through the tissue, the movement of the geometric center about O will induce stress in the tissue due to the non-tangential motion of the needle body. Even though the needle mount could include non-planar errors that would affect the force profile, for the purposes of this chapter non-planar errors are ignored to simplify the analysis. As a result, the motion of the needle is assumed to be planar. However, the proposed model and analysis are not inherently restricted to planar motions. The area that the needle sweeps will be incorporated into the lumped force model.


5.3.3 Area Sweep

Following the natural path of the needle as in Fig. 5.2(a) minimizes the tissue stresses. If the suture does not follow the path of the needle, then the needle will sweep out

an area as it moves through the tissue. Since the area sweep should be minimized, calculating the area swept by the needle could be a simple method of measuring the quality of the needle path. The area swept by the needle can be modeled as shown in Fig. 5.3. The area of the parallelogram formed by the vectors dℓ and vdt is the amount of tissue distortion that the needle is creating over a small time step (dt). The swept area can be computed as

da = \|v \, dt\| \, \|d\ell\| \sin\gamma, (5.2)

where γ is the angle between the two vectors v and dℓ as shown in Fig. 5.3. This area sweep has a direction that is outward normal to the needle curve. Alternatively, the area can be calculated using vector notation as

da = v(\theta) \, dt \times d\ell(\theta), (5.3)

where θ is the angle of the needle segment. Since the motion is planar, only one directional component will be nonzero (z). This means that the magnitude of the area corresponds to the third component of the vector generated by the cross product. This allows a direction to be assigned to the area that the needle sweeps out. That is

because the direction of the needle tissue compression must be known. The normal vector representation of the swept area is given by

da_n = \frac{d\ell}{\|d\ell\|} \times da, (5.4)


where the subscript n indicates that the quantity is normal to the needle tangent in the x-y plane. Using the motion model outlined in Fig. 5.2(b), it is possible to calculate the area swept out by the needle and its direction. The needle segment vector is based on the angle of the needle:

d\ell = r \begin{bmatrix} -\sin\theta \\ \cos\theta \\ 0 \end{bmatrix} d\theta. (5.5)

The angle θ can be computed as follows:

\theta = \phi + \kappa - \psi, (5.6)

where φ, κ, and ψ, as shown in Fig. 5.3, are the angle of rotation of the needle center about the base frame, the angle offset of the needle base, and the position along the

needle arc. The velocity of the needle segment can then be calculated as

v(\phi, \theta, \omega) = r_c \begin{bmatrix} 0 \\ 0 \\ \omega \end{bmatrix} \times \begin{bmatrix} \cos\phi \\ \sin\phi \\ 0 \end{bmatrix} + r \begin{bmatrix} 0 \\ 0 \\ \omega \end{bmatrix} \times \begin{bmatrix} \cos\theta \\ \sin\theta \\ 0 \end{bmatrix}. (5.7)


Combining (5.3), (5.4), (5.5), and (5.7) and simplifying, yields

da_n = r\omega r_c \begin{bmatrix} \cos\theta \\ \sin\theta \\ 0 \end{bmatrix} \cos\phi \sin\theta - r\omega r_c \begin{bmatrix} \cos\theta \\ \sin\theta \\ 0 \end{bmatrix} \sin\phi \cos\theta. (5.8)

Notice that the area swept is only due to the motion of the geometric center. This is because the rotation of the needle about its center is always along its tangent. If φ = θ, then the area swept out by the segment is 0. This means that there may exist a point on the needle that is not sweeping out any area. The integral of (5.8) over the needle arc and over time will give the total area swept out.

a_n = \int_{t_0}^{t_1} \int_{\theta_0(t)}^{\theta_1(t)} da_n(\phi(t), \theta, \omega) \, d\theta \, dt (5.9)

The variables t_0 and t_1 are the experimental start and stop times, respectively. θ_0 and θ_1 are the angles for which the needle is inside the tissue sample. The swept area a_n can be used both to measure the quality of a needle path and to estimate the force on the needle due to tissue deformations. The area swept by the needle is demonstrated in Fig. 5.4.
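A direct numerical evaluation of (5.9) is straightforward. The sketch below integrates the simplified form of (5.8), da_n = r ω r_c sin(θ − φ)[cos θ, sin θ, 0]^T dθ dt, with placeholder values for the needle geometry, the mount error, and the embedded arc limits, which in reality follow from the tissue geometry.

import numpy as np

r, rc = 11.5, 2.0          # needle radius and center-offset radius, mm (assumed)
kappa, omega = 0.1, 1.0    # mount error angle (rad) and drive rate (rad/s), assumed

def dan(phi, theta):
    """Outward-normal area rate of one needle segment, from (5.8)."""
    s = r * omega * rc * np.sin(theta - phi)
    return s * np.array([np.cos(theta), np.sin(theta), 0.0])

# Integrate over the embedded arc and over time with simple Riemann sums.
a_n = np.zeros(3)
dt, dtheta = 1e-2, 1e-2
for t in np.arange(0.0, np.pi / omega, dt):
    phi = omega * t                                    # center angle about O
    theta0, theta1 = phi + kappa - np.pi, phi + kappa  # embedded span (assumed)
    for theta in np.arange(theta0, theta1, dtheta):
        a_n += dan(phi, theta) * dtheta * dt
print("net swept-area vector (mm^2):", a_n)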


Figure 5.3: The area swept out by a small segment of needle dℓ during an incremental motion vdt.

5.4 Suture Needle Forces

The force and torque acting on the needle will be modeled as the sum of three lumped forces.

f_needle(φ(t)) = f_friction(φ(t)) + f_cutting(φ(t)) + f_normal(φ(t)) (5.10)

5.4.1 Friction Forces

The friction force acting on the needle is a constant that acts in opposition to the needle motion over the entire length of the needle. As long as the velocities of the needle segments are approximated as constants, it is acceptable to use this model.

f_{friction}(\phi(t)) = \int_{\theta_0(\phi)}^{\theta_1(\phi)} -\nu \, d\ell(\theta) \, d\theta (5.11)

The variable φ(t) is the position of the needle center as a function of time (t). θ0

and θ_1 are the start and end angles of the needle in the tissue. ν is the friction force parameter9. This sums the friction forces so that the force can be calculated locally around the needle portion embedded in the tissue. The overall friction force becomes a sum of forces which are each acting in different directions. The friction force will then change significantly in magnitude and direction as the needle moves through the tissue.


Figure 5.4: Non-Ideal Area Sweep. The needle sweeps out an area in a clockwise direction during a suture. As the geometric center moves, the needle area sweep direction changes.

5.4.2 Area Forces

When the needle moves in a non-ideal fashion as in Fig. 5.2(b), it will press against the tissue as it moves. The area swept by the needle, as calculated in (5.9), can be used as a basis for the normal force. If the tissue is treated as a Hookean material,

9 This variable was originally named µs and referred to as the coefficient of friction. This is a misnomer since there is no normal force and the variable name has been changed to ν to avoid confusion.

then a spring constant K (measured in force per unit area) can be used to convert the swept area into a force magnitude. Since the tissue sample is typically not a cube, its spring constant may vary in different directions. Therefore K will be assumed to be a diagonal matrix instead of a scalar. Since the area computation includes both a magnitude and a direction, the tissue simply applies a restoring force to the needle. By modifying equation (5.8) to include K, the normal force due to the swept area can be computed:

f_{normal}(\phi(t)) = -K \int_{\phi(0)}^{\phi(t)} \int_{\theta_0(\phi)}^{\theta_1(\phi)} da_n(\phi, \theta) \, d\theta \, d\phi. (5.12)

5.4.3 Cutting and Stiffness Forces

The stiffness force models the forces applied by the needle to the tissue before the needle begins to penetrate the tissue. This force is modeled as proportional to the square of the entrance angle. When the stiffness force is larger than the cutting force, the needle is assumed to be cutting the tissue and the cutting force is used instead. Both forces act in opposition to the needle tip.

f_{cutting}(\phi(t)) = -\min(\alpha, (\theta_{tip} - \theta_s)^2 \beta) \, d\ell(\theta_{tip}) (5.13)

The variable α is the maximum magnitude of the cutting force. The difference θ_tip − θ_s is the difference between the tissue intersection angle and the actual tip angle. β is a scaling coefficient. dℓ(θ_tip) is the tangent vector of the tip.

5.4.4 Torque Calculations

Since the needle is curved, torques can be an important indicator of the amount of tissue trauma that is occurring. The models are adapted to include modeling of the


torques. The torques are computed using the following equations.

\tau_{fric}(\phi(t)) = \int_{\theta_0(\phi)}^{\theta_1(\phi)} -\nu \, r_f(\theta) \times d\ell(\theta) \, d\theta, (5.14)

\tau_{norm}(\phi(t)) = -K \int_{\phi_0}^{\phi_1} \int_{\theta_0(\phi)}^{\theta_1(\phi)} r_f(\theta) \times da_n(\phi, \theta) \, d\theta \, d\phi, (5.15)

\tau_{cut}(\phi(t)) = r_f(\theta_{tip}) \times f_{cutting}(\phi(t)), (5.16)

where rf is the vector from the force/torque sensor to the needle segment.
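As a sketch of how the friction terms (5.11) and (5.14) could be evaluated in practice, the snippet below sums the segment contributions along the embedded arc with simple Riemann sums; the value of ν, the arc limits, and the sensor offset are placeholder assumptions.

import numpy as np

r, nu = 11.5, 0.02            # needle radius (mm) and friction parameter (assumed)
r_sensor = np.array([0.0, 2.0 * r, 0.0])   # sensor location, needle frame (assumed)

def dl(theta):
    """Needle segment tangent vector from (5.5)."""
    return r * np.array([-np.sin(theta), np.cos(theta), 0.0])

def friction(theta0, theta1, n=200):
    """Riemann-sum evaluation of (5.11) and (5.14) over the embedded arc."""
    f, tau = np.zeros(3), np.zeros(3)
    dth = (theta1 - theta0) / n
    for theta in np.linspace(theta0, theta1, n):
        seg = -nu * dl(theta) * dth                  # opposes forward motion
        # rf: vector from the force/torque sensor to the needle segment
        rf = np.array([r * np.cos(theta), r * np.sin(theta), 0.0]) - r_sensor
        f += seg
        tau += np.cross(rf, seg)
    return f, tau

f, tau = friction(-np.pi / 2, 0.0)   # quarter of the needle embedded (assumed)
print("friction force (N):", f, "torque (N-mm):", tau)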

5.5 Results

As part of this study, the proposed needle-tissue interaction force models were validated with experimental data collected using a circular robotic motor stage and a tissue phantom.

5.5.1 Experimental Methods

In the experiment, a custom-made one degree-of-freedom (DOF) rotational motion stage equipped with a six-DOF force/torque sensor (nano17 by ATI Industrial Automation, Apex, North Carolina) was used to drive a surgical suture needle into a tissue phantom. The interaction forces were recorded. The experimental setup is shown in Fig. 5.5. During the experiment, the motor turns the eccentrically mounted force sensor and needle such that the needle drives through the tissue phantom. During the insertion, the motor turns with a constant velocity using a servo loop running at a 2 kHz sampling rate. The tissue phantom used is a commercial suture training aid for surgeons (SCS-10 by Simulab Corp., Seattle, Washington). The tissue phantom has dimensions 105 mm by 105 mm by 20 mm.


It has two sides. One side simulates a layer of skin tissue. The other side simulates subcutaneous fat. The subcutaneous fat side was used during the experiment. The needle used in the experiment is a CT-1 suture needle (Ethicon Corp., Raleigh, NC). The needle is a half-circle taper point needle that is 36 mm long. The needle is mounted such that the center of motion is offset from the geometric center of the needle, so it is possible to measure the effects of non-ideal needle motion. For convenience of the reader and ease of interpretation, all of the force/torque measurements and model results are presented relative to the coordinate frame defined by x_f and y_f as shown in Fig. 5.1.

5.5.2 Force Data Post Processing

In order to be used for data analysis, the raw force/torque data is post-processed. This removes three effects. The first effect is the force bias that the sensor naturally has (f_bias). The second correction is to rotate the force sensor frame so that it remains parallel to the global frame. The angle of rotation is ρ. The rotation of the needle center φ is included because the force sensor rotates with the needle. The final correction is to remove the effect of gravity on the needle (mg). These corrections can be modeled as

f_{exp} = \begin{bmatrix} 0 \\ mg \\ 0 \end{bmatrix} + \begin{bmatrix} \cos(\rho + \phi) & -\sin(\rho + \phi) & 0 \\ \sin(\rho + \phi) & \cos(\rho + \phi) & 0 \\ 0 & 0 & 1 \end{bmatrix} (f_{raw} - f_{bias}), (5.17)

where the raw force data is f_raw and the processed force data is f_exp. To solve for all of the variables, the raw sensor data from when the needle is not penetrating the tissue


is used. As the only force in free space acting on the needle is gravity, we would have

f_{exp} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}. (5.18)

This allows the bias, the angle offset, and the gravity force to be estimated using numerical minimization. The processed force and torque profiles are then due only to needle-tissue forces. The torque profiles are measured about the needle mount.
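One way to carry out the estimation implied by (5.17) and (5.18) is a small least-squares fit over the free-space samples; the sketch below uses SciPy with synthetic stand-in data, and the variable names mirror the text, while the solver choice is an assumption.

import numpy as np
from scipy.optimize import least_squares

def residual(params, f_raw, phi):
    """Stacked f_exp over all free-space samples; should be ~0 by (5.18)."""
    bx, by, bz, rho, mg = params
    res = []
    for f, ph in zip(f_raw, phi):
        c, s = np.cos(rho + ph), np.sin(rho + ph)
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        res.append(np.array([0.0, mg, 0.0]) + R @ (f - np.array([bx, by, bz])))
    return np.concatenate(res)

# phi: motor positions (rad); f_raw: matching raw sensor readings (N, stand-in).
phi = np.linspace(0.0, 2 * np.pi, 50)
f_raw = np.random.default_rng(0).normal(size=(50, 3))
fit = least_squares(residual, x0=np.zeros(5), args=(f_raw, phi))
f_bias, rho, mg = fit.x[:3], fit.x[3], fit.x[4]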

5.5.3 Measured Force Data

The results of 4 different experiments are plotted together in Fig. 5.6. The forces have been processed using (5.17). The plots show exclusively the needle-tissue forces. The variability from one tissue run to the next is negligible. For brevity, the corresponding torque plots are not included. The forces felt by the needle in the z direction are small compared to the x and y forces; therefore, the planar motion approximation holds. The magnitude of the measured forces is approximately 2.82

N. This is similar in magnitude to the suture forces of a trained surgeon as measured by Dubrowski et al. [20].

5.5.4 Parameter Fitting

There are a total of 12 parameters that need to be determined to fit the model to the experimental data. These include the following variables: Mr is the radius of motion

of the needle holder and force sensor. rc, κ, r and φ0 are the geometric parameters all defined in Fig. 5.2(b). The arc length of the needle and the height of the tissue relative to the motor axis are the final geometric variables. The remaining variables


are material variables, namely the friction force parameter ν as defined in (5.11), the spring constant matrix K, and the α and β parameters as defined in (5.13). The K matrix introduces two variables because only the x and y forces are influenced by the normal needle motion. The variables are estimated using two methods.

Figure 5.5: Experimental Suture Apparatus. A motor holds a disc which mounts the needle base eccentrically. This allows the motor to turn the needle through an approximation of the ideal needle motion.

Some of the geometry can be directly measured from the system. This includes the tissue height relative to the motor axis and the radius of the motion of the needle mount. The remaining parameters were estimated simultaneously using a numerical minimization that matched the force model estimate with the experimental data. The numerical optimization method was implemented using MATLAB®. The linear force results are shown in Fig. 5.7 and the torques in Fig. 5.8. The force breakdowns show the final contribution from each force and torque component in Fig. 5.9 and Fig. 5.10. Notice that the normal and friction forces add constructively in the y direction, but destructively for both the x direction and the torques.
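The simultaneous fit can be sketched as follows, translated to SciPy for illustration (the original implementation used MATLAB); model_forces below is a two-parameter stand-in, whereas the real evaluator computes the friction, normal, and cutting components of (5.10) over all 12 variables.

import numpy as np
from scipy.optimize import minimize

def model_forces(params, motor_pos):
    """Stand-in for the lumped model (5.10); a real implementation would
    evaluate the friction, normal, and cutting components at each pose."""
    a, b = params
    return np.column_stack([a * np.sin(motor_pos), b * (1 - np.cos(motor_pos))])

motor_pos = np.linspace(0.0, 4.0, 100)
f_measured = model_forces([1.8, 2.2], motor_pos)      # synthetic target data

def fit_cost(params):
    """Sum of squared force residuals between model and experiment."""
    return np.sum((model_forces(params, motor_pos) - f_measured) ** 2)

result = minimize(fit_cost, x0=[1.0, 1.0], method="Nelder-Mead")
print("fitted parameters:", result.x)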


Figure 5.6: Experimental Force Data. The plots display the results of four different needle drives. In order of descent, the plots show the x, y and z forces. The force measurements from multiple passes were similar. The measured z forces are an order of magnitude smaller than the x or y forces.


Figure 5.7: The Linear Force Model. The experimental linear forces are plotted with both the model forces and the measured forces. The top plot is the x direction and the bottom plot is the y direction. Both sets of modeled forces closely track their measured counterparts.


Figure 5.8: The Torque Model. The torque model is plotted with the measured torque. There is strong correspondence between the two curves.


Figure 5.9: The Linear Force Model Components. The force components ffriction, fnormal, and fcutting are plotted with the total modeled force. The top plot is the x direction and the bottom plot is the y direction. All of the force components are significant relative to the total force.


Figure 5.10: The Torque Model Components. The torque components τfriction, τnormal, and τcutting are plotted with the total modeled torque. The torque due to the friction is positive while the torque from the normal force is negative.


5.6 Conclusions and Future Work

The results of the model fit closely match both the linear forces and the torques that the needle experiences during a suture. The lumped forces are a good approximation for canonical needle motion and small non-ideal tests. It is important to note that the needle was moving slowly in the experiments so that the non-viscous friction approximation would hold. In our future work, we also plan to collect experimental data from ex vivo tissue samples in order to study the validity of the model for actual tissue. We are also planning to extend the model by relaxing the planar motion approximation. This modified model will be experimentally validated. A comparison between the lumped model and a tissue FEM model will also be pursued.

Chapter 6

Needle Path Planning for Autonomous Robotic Surgical Suturing10

This chapter develops a path plan for suture needles used with solid tissue volumes in endoscopic surgery. The path trajectory is based on the best practices used by surgeons and attempts to minimize the interaction forces between the tissue and the needle. Using the surgical guidelines as a basis, two different techniques for driving a suture needle are developed. The two techniques are compared in hardware experiments by robotically driving the suture needle using both motion plans.

6.1 Introduction

Even with the help of robotic surgical systems, suturing is a challenging and time-consuming task during Minimally Invasive Surgery (MIS). Automating the suturing task may reduce both the time and difficulty of completing a suture. Pre-planning

10This chapter was originally published as a conference paper at the 2013 International Conference on Robotics and Automation (ICRA). © 2013 IEEE. Reprinted with permission from [22].


the autonomous motion, when combined with force feedback, would allow the robot to minimize any tissue trauma that might occur during suturing. Well-established manual suture techniques lay the foundation for robotic suturing. In order to complete an independent surgical suture, several components must be preplanned. First, the needle must enter and exit the tissue at the proper locations and orientations. Secondly, the needle path must not put any unnecessary stress on the tissue. Finally, the needle must be able to react to any unforeseen obstacles that might impede its path. In an earlier study, Nageotte et al. [19] present a path planning method for

a laparoscopic suture needle through tissue membranes using a limited degree-of-freedom laparoscopic instrument. This is different from suturing a solid block of tissue together. When the needle is penetrating the membrane surface, stress only occurs at the site of needle penetration. When a solid volume of tissue is sutured, the entire embedded needle body may be deforming the tissue. Advances in MIS (e.g. the daVinci® system built by Intuitive Surgical, based in Sunnyvale, California) also grant additional dexterity to the needle driver. The additional degrees of freedom enable the suture plan to be optimized for patient care quality.

There are also earlier studies on planning algorithms for percutaneous needle insertion, such as for biopsy, brachytherapy, etc. (e.g. [26], [65], [66]). In percutaneous interventions, long and flexible needles are used to reach targets embedded deep inside the tissue. As such, the methods and algorithms developed for these applications cannot be applied to the planning of surgical suturing.

The goal of this chapter is to create, using the best practices of manual suturing, a path for suturing with a semi-circular needle. Section 6.2 lays the groundwork for the needle plan using both the surgeons’ best practices and their mathematical analogs. This is followed in section 6.3 with a step by step needle trajectory. The needle trajectory has two potential approaches. The two versions are empirically tested in

section 6.4. The results of the two needle drives are analyzed in section 6.5. The chapter concludes with a discussion of planned future work.

6.2 Best Practices of Suturing

There are many general rules that surgeons use to complete a suture. A typical list of such rules is given below [108], [107]. Manual needle sutures normally result in a configuration similar to that shown in Fig. 6.1.

1. The needle first “bites” the tissue orthogonally. By inserting the needle such that the tip is orthogonal to the tissue surface, tissue surface stress is minimized.

2. The wrench between the tissue and the needle during the suture must be mini- mized. Minimizing the needle tissue interaction force reduces the internal tissue stress, and consequently reduces additional tissue trauma due to the suture.

3. The re-grippable length of the needle during the suture must be adequate for the needle re-grasp to be completed successfully. Since the needle holder cannot be inserted through the tissue, there must be an intermediate point during the suture at which the gripper can regrasp the needle by the tip to complete the suture.

4. The final depth of the needle in the tissue is an important component of a successful suture. The actual target depth is determined by many factors, including both the wound being closed and the size of the needle.

5. The needle tip should only touch the tissue at the insertion site. Similarly, the needle gripper should not place unnecessary stress on the tissue.

The above list is not exhaustive, but details important components of a quality suture.

Converting the listed suture principles into a list of analytic equations allows for the planning algorithm to be automated and optimized against the suture guidelines.


Figure 6.1: This is an example image of a suture (adapted from [107]). Once the needle completes the suture, the suture thread (purple) will close the wound (triangle cut out of the tissue). The distance of the entry point and the exit point from the wound are approximately equal to the depth of the needle (γ). The tip of the needle is the arrow. The base of the needle is marked by a circle. To avoid clutter, the thread will be absent from other needle figures.

6.2.1 Quantification of the Suturing Guidelines

The principles above can be adapted to equations directly. The bite angle of the needle is measured using the initial needle insertion vector. This unit vector (k) is shown in Fig. 6.2. The local tissue normal vector is y_0. The inner product between the needle tip vector k and y_0 approaches −1 as the needle tip penetrates the tissue orthogonally. Since best practices of suturing call for an orthogonal tissue bite, defining a metric b = −k^T y_0 is one method of assessing the quality of the needle insertion. After evaluating the initial insertion angle of the needle, the next task is to critique the interaction forces felt by the needle during the suture. Minimizing the forces and torques that act between the needle and the tissue can be posed in multiple ways. The first method is to minimize the maximum forces and torques between the tissue and the needle during the suture. Another technique is to minimize the average

forces and torques. Modeling the tissue-needle forces during the planning stage will improve the overall quality of the needle plan. While many different tissue-needle force models exist [25], a fast force and torque model of the suture needle-tissue interaction was developed in [55]. This force-torque model includes three components.

The first one is a friction force that acts tangentially to the needle. The second one is a tissue deformation force that acts normal to the needle. The final force is due to


Figure 6.2: This is the pose of the needle before it begins to bite the tissue. Notice that the tip of the needle is nearly, but not quite, orthogonal to the tissue. The scalar α is the initial distance between the needle tip and the needle entrance point. m is the location of the tissue break undergoing repairs. Ideally m is midway between g and f. The location defined by f is the point where the needle is supposed to exit the tissue.

the cutting of the tissue. The cutting force acts exclusively on the needle tip. In order to regrasp the needle during the suture, there must be a point during the suture such that the exposed needle tip can be gripped. Simultaneously, the gripper holding the needle base must deform the tissue as little as possible. One way of

determining this is to calculate the amount of needle exposed during the regrasping stage. The final depth of the needle in the tissue could be calculated by using the distance between the entrance g and the exit f. The depth of the needle is derived using the

following set of equations. Figure 6.3 illustrates the geometry used to derive them. The distance between the points f and g is p. The quantity p, coupled with the needle radius r, allows the height h of the gcf triangle to be calculated as:

h = \sqrt{r^2 - (p/2)^2}. (6.1)

The maximum depth of the needle, d = r − h, can then be calculated. The ideal depth of the needle varies based on the application [107].
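A worked instance of (6.1), assuming the CT-1-style half-circle needle of Chapter 5 (36 mm arc length, giving r = 36/π mm) and an illustrative entry-exit spacing of p = 16 mm:

import numpy as np

r = 36.0 / np.pi                  # needle radius, mm (half circle, 36 mm arc)
p = 16.0                          # distance between entry g and exit f, mm (assumed)
h = np.sqrt(r**2 - (p / 2) ** 2)  # height of the needle center above the tissue
d = r - h                         # maximum needle depth in the tissue
print(f"r = {r:.2f} mm, h = {h:.2f} mm, depth d = {d:.2f} mm")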

The final constraint is that during the initial insertion, the needle base cannot


Figure 6.3: The depth of the needle in the tissue (d) can be calculated using the distance (p) between the needle entrance (g) and exit (f) points along with the needle radius (r). The values p and r in turn generate the height of the needle center (c) above the tissue. This value is given by h. The difference between r and h gives the needle depth in the tissue.

touch the tissue. Likewise, during needle extraction, the needle tip should not reinsert into the tissue. Due to the potential complications associated with either of the above cases, any path plan that causes the needle base to touch the tissue or the needle tip to reinsert into the tissue should be rejected.

6.3 Needle Path Planning Algorithm

For the purposes of planning the path, assumptions about both the needle and tissue geometry are made. The tissue volume is locally approximated as a rectangular prism, while the needle is approximated as a semicircle. The needles being used are sold as 1/2-circle needles. The needle path itself can be broken down into 5 distinct components: needle approach, initial needle insertion, needle reorientation, needle regrasp, and finally, needle follow-through. The needle approach is where the needle moves into a position that is near the tissue and properly oriented for the needle bite. During the needle bite, the needle penetrates the tissue until it has reached a target depth. Once the initial insertion is complete, the needle is pushed forward through the tissue while reorienting. The reorientation is required so that the needle will exit the tissue at the correct point. After finishing the reorientation, the needle completes the suture by moving the tip in the same circle defined by the arc of its


Figure 6.4: Once the needle has penetrated the tissue, it is possible to reorient the needle so that it will naturally drive to the target exit point. The point f is the location of the exit point. The dashed view of the needle is a sample orientation of the needle after completing the alignment. The point c is the point where the needle center starts. After the needle finishes the alignment, the center of the needle is at c′. The dashed curve from c to c′ is the curve that the center of the needle moves through. Notice that this curve is a circular arc that is centered on the point g′. The scalar ω_1 is the angular rate of rotation of the point c about g′. The angular velocity ω_2 is the speed of rotation of the body of the needle about the center point c.

body. Either during the needle reorientation or the needle follow-through, the needle tip will egress from the tissue. When the needle tip is exposed, the gripper can grasp

the tip and complete the suture. Once the regrasp is complete, the needle should only move in an ideal circular path. This minimizes tissue stress for the remainder of the suture. The motion plan is centered about the needle for convenience.

6.3.1 Needle Approach

The needle approach is shown in Fig. 6.2. The information needed to generate

the initial approach pose includes the points g and f, the vector k, and the initial insertion distance α. These quantities are all used to define the initial needle position. The intersecting vectors k and f − g uniquely define a plane. The semi-circular needle lies in this plane. The vector k points along the needle tip. This, combined with the plane of the needle, completely defines the needle orientation. This orientation can be described as a matrix R_ON ∈ SO(3). The scalar α defines the initial starting distance between the needle tip and the point g. The position and orientation of the needle


can be represented using g_ON ∈ R^{4×4}. The matrix g_ON represents a homogeneous transformation from the needle frame (N) to the tissue frame (O) in SE(3). This homogeneous transform is uniquely defined by R_ON and p_ON ∈ R^{3×1}. The vector p_ON is defined using the initial bite distance α:

p_{ON} = g - \alpha k - 2r y_N (6.2)

6.3.2 Needle Bite

Once the needle is in position (Fig. 6.2), the suture is started. This is done by moving the needle along the tip vector (k) until it has penetrated the point g to a predetermined depth. This depth (β) comprises one of the inputs to the overall suture plan. The depth determines both the amount of deformation that the tissue may undergo as well as the amount of needle grip in the tissue. If the needle is not inserted deep enough, the tip may only skip along the surface as it reorients. If the needle is too deep, then the needle reorientation may cause too much displacement of the tissue. During the bite, the needle moves with the body velocity,

V^b_{ON} = [1, 0, 0, 0, 0, 0]^T. (6.3)

The body velocity can be scaled: increasing the speed will decrease the suture time, while decreasing the speed will result in a gentler needle insertion.

6.3.3 Needle Reorientation

Once the needle has penetrated the tissue, the needle is reoriented so that it will egress at the correct point on the opposite side of the wound. The reoriented center point of the needle is calculated using the available geometric information. The actual


needle intersection point can be computed by starting with the actual needle height,

h' = y_0^T (c - g), (6.4)

where c is the needle center and h′ is the distance from the tissue surface to the needle center. The point m is the location of the wound undergoing suture. The point on the tissue surface closest to the current center of the needle, m′, is

m' = c - h' y_0. (6.5)

The projection of the needle basis vector y_n onto the tissue surface (orthogonal to the normal y_0) is defined as y′_n. The point where the needle actually intersects the tissue, g′ (g ≠ g′ due to the curve of the needle), can be computed as

g' = m' + y'_n \sqrt{r^2 - h'^2}. (6.6)

Once the actual needle entrance point g′ is found, the target exit point f is used to compute the target needle center location c′. The midpoint between g′ and f is given as p_m. The distance p′ = ‖f − g′‖ allows the new target center of the needle, c′, to be computed as

c' = y_0 \sqrt{r^2 - (p'/2)^2} + p_m. (6.7)

Once the target center position of the needle is defined, there are multiple ways to reorient the needle. One such method is to use a “static point”. The static point is where the needle does not move in a way that deforms the tissue. This point is the fulcrum that the needle center moves around. Selecting a “static point” that

minimizes the overall tissue trauma (e.g. minimizing the forces and torques) would be the optimal point of rotation. Another method of reorienting the needle is to drive the tip such that it can only go forward along the tangent direction or rotate about


the tip point. The goal of this non-holonomic constraint is to avoid having the needle tip tear laterally through the tissue during the suture. A feature common to both types of needle reorientation is that the needle tip is always moving forward through the tissue.
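The reorientation targets of (6.4)-(6.7) reduce to a few vector operations; the sketch below evaluates them for illustrative poses, with y0 as the tissue normal and yn_proj standing in for y′_n, the projection of the needle basis vector onto the tissue surface.

import numpy as np

r = 36.0 / np.pi                         # needle radius (half-circle CT-1), mm
y0 = np.array([0.0, 1.0, 0.0])           # tissue surface normal
g = np.array([0.0, 0.0, 0.0])            # nominal entry point
c = np.array([2.0, 6.0, 0.0])            # current needle center (assumed)
f = np.array([10.0, 0.0, 0.0])           # target exit point (assumed)
yn = np.array([0.3, 0.95, 0.0]); yn /= np.linalg.norm(yn)  # needle y basis (assumed)

h_act = y0 @ (c - g)                     # (6.4): center height over the surface
m_surf = c - h_act * y0                  # (6.5): closest surface point to c
yn_proj = yn - (yn @ y0) * y0            # project yn into the tissue plane
yn_proj /= np.linalg.norm(yn_proj)
g_act = m_surf + yn_proj * np.sqrt(r**2 - h_act**2)   # (6.6): true entry g'

p_mid = 0.5 * (g_act + f)                # midpoint between g' and f
p_len = np.linalg.norm(f - g_act)
c_tgt = y0 * np.sqrt(r**2 - (p_len / 2) ** 2) + p_mid  # (6.7): target center c'
print("g' =", g_act, " c' =", c_tgt)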

Needle Reorientation About The Entrance Point

In this case, the rotation is computed about the site where the needle penetrates the

tissue. This is done to avoid stress at the point of needle penetration. Fig. 6.4 shows the needle as it realigns from the bite position to the follow-through pose. During the reorientation, the needle center moves in a circle about the point g' with an angular speed of ω_1. Simultaneously, the needle rotates about its center c with an angular rate of ω_2. The needle frame velocity is the sum of the two components. The component due to the rotation about the needle entrance, g', is calculated using the transform

g_{ON}. The vector p points from the entrance point g' to the center of the needle (c), as measured in needle coordinates. This vector is then used to create the

body velocity component required to spin the needle about its current insertion point.

\begin{bmatrix} p \\ 0 \end{bmatrix} = g_{ON}^{-1} \begin{bmatrix} c - g' \\ 0 \end{bmatrix} \qquad (6.8)

V^b_{ON} = \omega_2 \begin{bmatrix} -2r & 0 & 0 & 0 & 0 & -1 \end{bmatrix}^T + \omega_1 \begin{bmatrix} \left( p \times \begin{bmatrix} 0 & 0 & -1 \end{bmatrix}^T \right)^T & 0 & 0 & -1 \end{bmatrix}^T \qquad (6.9)
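A sketch of this twist composition, following Eqs. 6.8 and 6.9 as reconstructed above, is given below; the function name and argument conventions are illustrative assumptions.

```python
import numpy as np

def entrance_point_twist(g_ON, c, g_prime, w1, w2, r):
    """Body twist that spins the needle about its entrance point g'.

    Composes rotation about the needle center (rate w2) with rotation
    about the insertion point g' (rate w1), both about the -z needle axis.
    """
    R = g_ON[:3, :3]
    p = R.T @ (c - g_prime)        # Eq. 6.8: g'-to-center vector, needle frame
    axis = np.array([0.0, 0.0, -1.0])
    spin = w2 * np.array([-2.0 * r, 0.0, 0.0, 0.0, 0.0, -1.0])
    about_g = w1 * np.concatenate((np.cross(p, axis), axis))   # Eq. 6.9 term
    return spin + about_g
```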



Figure 6.5: The non-holonomic motion plan reorients the needle such that the tip velocity is exclusively tangential. The vector v_c is the motion of the needle center, c, towards the desired needle center, c'. Since v_c is parallel to x_n, it is not pointed directly at the target, c'. The dashed line representing the needle is one possible position of the needle after reaching the exit, f. Notice that the new needle position no longer passes through the point g'; the invariant motion of g' must be sacrificed to maintain the non-holonomic motion constraint.

Non-Holonomic Needle Rotation

When driving the needle such that the tip velocity is tangent to the needle at the tip, the velocity of the needle center must also be tangent to the tip vector. The difference between the center of the needle, c, and the new target center, c' (as given by Eq. 6.7), is the position error of the center. The velocity is then computed to be proportional to the

alignment between e_c and x_n:

e_c = c' - c \qquad (6.10)
V_c = x_n (x_n^T e_c) \qquad (6.11)
    = \begin{bmatrix} -r\omega_1 & 0 & 0 \end{bmatrix}^T \qquad (6.12)

The overall equation of motion is then generated as follows:

V^b_{ON} = \omega_2 \begin{bmatrix} -2r & 0 & 0 & 0 & 0 & -1 \end{bmatrix}^T + \omega_1 \begin{bmatrix} -r & 0 & 0 & 0 & 0 & -1 \end{bmatrix}^T. \qquad (6.13)
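The tangency-constrained center velocity of Eqs. 6.10-6.12 amounts to projecting the center error onto the tip tangent; a minimal sketch (names illustrative):

```python
import numpy as np

def nonholonomic_center_velocity(c, c_prime, x_n):
    """Project the center error onto the tip tangent x_n (Eqs. 6.10-6.12).

    The center may only move along x_n, so the commanded velocity
    vanishes once the error e_c becomes orthogonal to x_n.
    """
    e_c = c_prime - c              # Eq. 6.10
    return x_n * (x_n @ e_c)       # Eq. 6.11
```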


(a) Time lapse of the holonomic needle motion. (b) Time lapse of the non-holonomic needle motion.

Figure 6.6: The images above are composite images of the needle position as it moves. The needles are colored green, yellow, and red: green corresponds to the portion of the needle outside the tissue, yellow marks where the needle enters the tissue, and red marks the needle inside the tissue. The two different reorientations have different overall effects on the tissue. In Fig. 6.6a, the needle sweeps out a small area during the reorientation, but no area is swept at the point where the needle intersects the tissue. In Fig. 6.6b, tissue stress at the needle tip is minimized, but the needle sweeps out a larger area. The needle also appears to sit deeper inside the tissue for the non-holonomic needle motion.

When moving the needle non-holonomically, the rest of the needle has to sweep a large arc to keep the tip moving tangentially. This results in larger needle-tissue deformations. It also constrains the velocity of the needle center, c, to be tangent to the circle about the needle tip, as shown in Fig. 6.5. A stop-motion time-lapse comparison between the two proposed motion styles is shown in Figs. 6.6a and 6.6b. The holonomic motion plan does appear to cause some tearing due to the needle tip moving sideways. The non-holonomic motion plan sweeps out a much larger area, but does not appear to cause tissue tearing at the needle tip. Directly modeling these effects on the tissue is outside the scope of this chapter.


Figure 6.7: Shown above is a sample group of images for driving a suture needle holonomically. The tip of the gripper that actually holds the needle is touching the tissue right before the holder regrasps the needle.

6.3.4 Needle Regrasping

Once the needle completes the reorientation, it begins the needle follow-through. Once the needle tip penetrates the tissue, even if this occurs before reorientation is complete, the needle is regrasped and extracted to prevent excessive distortion of both the entrance and egress points of the needle. During the regrasp, the gripper can be oriented to optimize its dexterity for future steps. After regrasping, no further reorientation is attempted because the needle tip is already protruding from the tissue; any attempt to reorient at this point would only stress the tissue further. In order for the regrasp to be reliable, it will be important to incorporate visual servoing to assist the robot in regrasping the needle [109].

6.3.5 Needle Follow Through

Once the needle has been aligned or regrasped, the needle is moved in a way that minimizes tissue deformation. This is accomplished by imposing the following body velocity on the needle:

V^b_{ON} = \omega_2 \begin{bmatrix} -r & 0 & 0 & 0 & 0 & -1 \end{bmatrix}^T. \qquad (6.14)

By moving the needle along its own arc, further tissue distortion is minimized.


Figure 6.8: Shown above is a sample group of images for driving a suture needle with the non-holonomic constraint. The tip of the gripper that actually holds the needle is touching the tissue right before the holder regrasps the needle.

Table 6.1: Variable Inputs Required for Needle Plan

Input   Description
g       The needle bite target
k       The needle bite direction
f       The needle exit point
α       The initial approach distance
β       The initial needle bite depth
ω_1     The angular speed of reorienting the needle
ω_2     The angular speed of driving the needle

6.3.6 Needle Path Input List

The primary inputs required to generate the needle path plan are listed in Table 6.1. The first four inputs are used for the needle approach and setup. The final three parameters are used for the needle drive itself. The parameters ω_1 and ω_2 are used in both the holonomic and non-holonomic needle drives. A sketch collecting these inputs into a single structure follows.
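The grouping below is illustrative only; the names and types are assumptions for the sketch, not the interface used in this work.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SuturePlanInputs:
    """The inputs of Table 6.1; names and types are illustrative."""
    g: np.ndarray   # needle bite target on the tissue surface
    k: np.ndarray   # needle bite direction (unit vector)
    f: np.ndarray   # needle exit point
    alpha: float    # initial approach distance
    beta: float     # initial needle bite depth
    w1: float       # angular speed of reorienting the needle
    w2: float       # angular speed of driving the needle
```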

6.4 Empirical Path Evaluation

In this section, experimental validation of the proposed method for generating the needle path is presented. Since there are two potential methods of generating the needle path, it is important to understand the practical differences between them during actual needle sutures.


[Plots: force magnitude (N) versus time (s) over 0-400 s, annotated with the stages pre needle drive, needle bite and reorientation, needle extraction, and completed drive; sharp force increases mark where the gripper compresses the tissue and where the gripper regrasps the needle for extraction. (a) Force magnitude for the holonomic motion. (b) Force magnitude for the non-holonomic motion.]

Figure 6.9: Measured force magnitudes for both the holonomic (6.9a) and non-holonomic (6.9b) needle reorientation. The sensed readings are not adjusted to remove any forces due to the weight of the needle gripper.


To compare the two variants of the needle drive, a suture needle was driven through a test sample of a tissue phantom. The robot driving the needle is a novel laparoscopic gripper [10] with an embedded force sensor attached to an ABB IRB140 industrial robot arm. The wrist provides the ability to regrasp the needle while the ABB robot performs all of the gross motion. The needle is a CT-1

needle manufactured by Ethicon Inc. The needle is a 1/2-circle taper point with a measured radius of about 12.7 mm. The phantom tissue is manufactured by Simulab Corp. as a surgical training aid. The particular model used in the experiment is a SCS-10 subcuticular tissue simulator. The advantage of using the tissue phantom is that it is homogeneous and should produce repeatable results, as previously shown

in [55]. The purpose of using a homogeneous tissue phantom is to ensure that driving the needle through the tissue in new, unpunctured locations produces results that depend on the type of needle drive and not on the location of the needle drive. An embedded ATI Nano17 force-torque sensor is used to measure the wrench sensed by the gripper. The readings were sampled using a desktop computer running xPC Target (MathWorks Inc.) at a 2 kHz sampling frequency. During post-processing, the data was filtered using a 5th-order Butterworth filter with a 100 Hz cutoff. This helped to remove noise associated with the sensor.
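As an illustration of this post-processing step, the sketch below applies a 5th-order Butterworth filter at 100 Hz with scipy. Since only one corner frequency is given in the text, a low-pass interpretation is assumed here, and force_raw is a hypothetical array of sampled readings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000.0                           # 2 kHz sampling rate (xPC Target)
b, a = butter(5, 100.0 / (fs / 2.0))  # 5th-order Butterworth, 100 Hz cutoff
                                      # (low-pass interpretation assumed)

# force_raw: hypothetical (N, 3) array of sampled force readings.
# filtfilt applies the filter forward and backward, so force events
# are smoothed without being shifted in time.
# force_filt = filtfilt(b, a, force_raw, axis=0)
```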

The needle drive was performed open loop, with no sensors, visual or otherwise, to help guide the grasper to the needle. Since the drive was open loop, minor tweaks (i.e., moving the robot regrasping point) had to be made to the needle drive to ensure that the regrasp step was successful.

6.4.1 Needle Drive Results

The holonomic needle drive is shown as a stop-motion sequence in Fig. 6.7. The gripper successfully drives and extracts the needle. Once this process is done, the gripper is ready to begin tying the suture knot. During the needle insertion shown in Fig. 6.7, it appears that the body of the gripper would interfere with the tissue during the suture.


Table 6.2: Measured Forces And Torques

Drive   mean ‖f‖   max ‖f‖   mean ‖τ‖     max ‖τ‖
A       0.45 N     1.37 N    13.90 N-mm   61.44 N-mm
B       0.41 N     1.16 N    16.11 N-mm   47.02 N-mm

This interference is due to the presence of the ATI F/T sensor and will not be an issue with an actual robotic surgical system, such as the da Vinci or the Raven system [28]. Due to the size of the tissue and the placement of the gripper, only the gripper tip touched the tissue during the experiment. The non-holonomic needle drive is shown as a stop-motion sequence in Fig. 6.8. The results are visually similar to the holonomic needle drive; however, the needle appears to sit much deeper inside the tissue after the non-holonomic needle drive than after the holonomic drive. This is most apparent in Figs. 6.7:6 and 6.8:6. Since the optimal tissue depth of the needle is approximately the needle radius, the non-holonomic needle drive reaches a more appropriate tissue depth.

6.5 Results and Discussion

Plots of the force magnitudes sensed by the needle for the holonomic and non-holonomic drives are shown in Figs. 6.9a and 6.9b, respectively. Due to space limitations in this chapter, the resulting torque magnitudes are not shown, but they have a similar shape. No attempt was made to remove the weight of the gripper and needle from the recorded plots. Each plot is broken up into distinct sections, each corresponding to a stage of the needle drive: driving the needle, regrasping the needle, and finally extracting the needle. While both sets of results have a similar overall shape, the peak forces are smaller for the non-holonomic needle drive at the point right before the regrasp process starts. As shown in both Fig. 6.7 and Fig. 6.8, right before the regrasp, the gripper tip touches the tissue. While it is true that the contact will increase needle-tissue forces, surgeons often use forceps


to manipulate the tissue during a suture. For that reason, the gripper contact is considered part of the needle drive; any forces due to such contact are therefore important components of the forces being measured. After the regrasp is completed, the non-holonomic needle drive experiences a larger overall force. This is likely due

to the open-loop regrasp. Since the regrasp is open loop, there are small errors in the needle regrasp because of uncertainties in the needle position estimate. As a result, extra forces due to positioning errors may be present. The forces and torques of the two methods are compared using both the maximum and the mean values in Table 6.2. The non-holonomic drive (drive B) outperforms the holonomic drive (drive A) in three of the four categories. Based on the available surgical guides [107], the non-holonomic needle drive appears to be better: the overall force is reduced and the needle sits deeper in the tissue sample. To verify that the non-holonomic drive is better, the experiment should be repeated, and the effects caused by gripper-tissue contact should be investigated.

6.6 Conclusions and Future Work

The open-loop needle drive experiments gave many insights into the forces and torques generated during a suture. This completes a preliminary but important step towards automated suture needle driving. In order to improve upon the current capabilities of

MIS, the suture must be performed autonomously at a speed that is faster than what surgeons are able to achieve [14]. This can be accomplished by completing a preplanned motion with visual servoing so that, during the needle regrasp, the gripper can quickly and precisely grab the needle to complete the suture. During procedures, sensor feedback (both visual and tactile) will be used to interrupt or tweak the suturing operation if a complication arises. The suturing technique analyzed in this study is the simple interrupted suture.


Analysis of variations of the simple interrupted suture (e.g., tissue eversion) or other suturing techniques (e.g., mattress suture, continuous suture, etc. [107]) is outside the scope of this study and will be the subject of future work.

Chapter 7

Conclusions

The focus of this dissertation is on the technology required to enable semiautonomous

surgical subtask execution, with a particular emphasis on surgical suturing. To that end, problems related to suture thread tracking, suture needle-tissue interaction force modeling, and suture needle path planning have been addressed.

7.1 Suture Thread Tracking

Using NURBS curves to track surgical suture threads shows that active camera

sensing is viable for automated surgical suturing. The suture thread can be tracked under multiple conditions commonly found during surgical suturing: occlusions, knot-tying, and length changes. All of the tracking was completed in real time (15 Hz) using a graphics processor. The tracking utilized a stereo camera system. The thread was

tracked to within an accuracy of 4.7 mm, which is narrower than 1/2 the jaw width of a surgical manipulator. The tracking algorithm was validated against planar patterns that underwent ±15° tilting to verify that the tracking was robust with respect to the epipolar lines of the stereo image pair. Even though the thread was successfully tracked in a

variety of circumstances, there are several opportunities for improvement of the tracking. Most notably, this tracking algorithm does not utilize any predictive modeling

to try and anticipate the thread motion.

7.2 Catadioptric Stereo Tracking of an MRI Guided Catheter

The catadioptric stereo system introduced is not only MRI compatible but also accurate to within 1 mm when using validation patterns. The camera system is capable of recording at 60 fps. Care must be taken when immersing the tracking targets in water: while refraction does not introduce significant error, total internal reflection at the water line can cause tracking errors. The deflection of the catheter inside the MRI was tracked, and the catheter length was validated using the labeled catheter images. The average tracked catheter length was within 3% of the actual catheter length.

7.3 Suture Needle-Tissue Interaction Force Modeling

This study demonstrated that lumped force models can be accurate. This accuracy was achieved with the introduction of a normal force component. The normal force describes the forces due to the compression of the tissue in directions perpendicular to the needle shaft. The magnitude of this force was found to be proportional to the area that the needle sweeps as it is driven through the tissue. Minimizing the swept area should minimize needle-tissue interaction forces and subsequent tissue trauma. This model was validated on a tissue phantom, which is more homogeneous than many actual tissue structures. Comparing the results from the tissue phantom to actual tissue samples would verify that the lumped model can be used in vivo.


7.4 Suture Needle Path Plan

The final study in this dissertation examined the suture needle driving path. The

suture needle path plan was generated by quantifying the best practices of surgical suturing. Two competing path plans were identified: holonomic and non-holonomic. The two paths were compared by driving the needle through a tissue phantom using a surgical wrist attached to an industrial robot arm. A force sensor integrated into the

surgical wrist allowed the forces of each path to be compared. The comparison of the two motion plans was hindered by the fact that the force sensor made contact with the tissue during the needle drive. Further testing and comparison of the two path plans would allow surgical robots to automatically determine when one path plan

might be superior to the other.

7.5 Future Research Problems

Many of the pieces for semiautonomous robotic suturing are now in place. The current challenge lies in integrating the above steps. The algorithms described earlier in this work can be integrated to drive the suture needle and track the suture thread.

Combining these algorithms will bring semiautonomous surgical robots closer to reality. While one challenge is integrating the different topics addressed, each of the proposed solutions can also be improved upon or investigated further. The visual suture thread tracking can be further improved, and many of these enhancements are directly applicable to tracking the MRI guided catheter.

One of the biggest challenges in computer vision is algorithm robustness: inconsistencies in lighting, reflection, and the surgical scene combine to make visual tracking extremely challenging. Regulatory limitations, along with the significant public response to any medical device failure, make significant failure intolerable. To that end, steps must be taken to ensure that the tracking is not only robust


but that its failure mode does not cause complications. Stochastic estimators (e.g., the Kalman filter) provide one method of stabilizing the suture tracking, since such an estimator can sense when the thread segmentation fails and respond accordingly. The current tracking algorithm does not utilize any form of stochastic modeling. Incorporating a stochastic estimator is one technique that can improve the robustness and utility of the thread tracking algorithm.
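One way such an estimator could respond to segmentation failure is innovation gating: measurements whose Mahalanobis distance from the prediction is too large are rejected. The sketch below is one possible realization of this idea, not an implementation from this work.

```python
import numpy as np

def gated_update(x_pred, P_pred, z, H, R, gate=9.0):
    """Kalman update that rejects implausible thread measurements.

    A measurement z whose squared Mahalanobis distance exceeds the
    gate is treated as a segmentation failure and discarded, so the
    tracker coasts on its prediction instead of diverging.
    """
    y = z - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    d2 = y @ np.linalg.solve(S, y)       # squared Mahalanobis distance
    if d2 > gate:                        # likely segmentation failure
        return x_pred, P_pred
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x_pred + K @ y
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P
```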

Testing the tracking algorithm on pre-recorded surgical videos is another approach that would allow the algorithm to be tested and validated on image sets that reproduce the challenges of computer vision processing in vivo. One further area of improvement is tracking of the suture needle itself. Tracking the suture needle will allow for more accurate robotic motions as well as aid the robot in analyzing the needle-tissue interaction forces that occur during suture needle driving.

The suture needle force models were very accurate on tissue phantoms, but further validation of the suture needle-tissue interaction force models, including testing the models on tissue samples, would show the usefulness of such models for completing sutures during a surgical procedure. The validated models can then be used to control the needle drive: identifying when the needle is not being driven correctly and adapting the drive accordingly. Additionally, the force models can be used to analyze and compare competing needle path plans. Integrating the generated needle paths with the needle-tissue interaction forces will allow the path plans to be compared and evaluated by estimating and then directly measuring the interaction forces during the needle drive. To that end, a new needle driving stage has been constructed. This stage introduces an XY linear stage in addition to the rotational needle driving motor, which allows the force models to be tested under different needle paths and the interaction forces of different needle drive paths to be compared.

One observation made regarding suture needles is that there are many different shapes and profile designs.


The needles used throughout this work are 1/2-circle taper point needles (CT-1, Ethicon LLC, Cincinnati, OH, USA). This is one of many different needle types. Understanding the purpose and usage of each needle type will allow the force models and path plans to be adapted to the procedure or the current needle.

Surgical robots are gradually becoming an integral component of the healthcare industry. The next 20 years will likely see the introduction and adoption of remote surgery, semi- or even fully autonomous surgical robots, automated diagnosis systems, and even personal care robots. This work brings that future one step closer.

Bibliography

[1] J. E. Wickham, “The new surgery,” BMJ, vol. 295, no. 6613, pp. 1581–1582, 1987.

[2] T. G. Weiser, S. E. Regenbogen, K. D. Thompson, A. B. Haynes, S. R. Lipsitz, W. R. Berry, and A. A. Gawande, “An estimation of the global volume of surgery: a modelling strategy based on available data,” The Lancet, vol. 372, no. 9633, pp. 139–144, 2008.

[3] T. M. Fullum, J. A. Ladapo, B. J. Borah, and C. L. Gunnarsson, “Comparison of the clinical and economic outcomes between open and minimally invasive appendectomy and colectomy: evidence from a large commercial payer database,” Surgical Endoscopy, vol. 24, no. 4, pp. 845–853, 2010.

[4] Clinical Outcomes of Surgical Therapy Study Group, “A comparison of laparoscopically assisted and open colectomy for colon cancer,” The New England Journal of Medicine, vol. 350, no. 20, p. 2050, 2004.

[5] B. M. Ure, J. F. Kuebler, N. Schukfeh, C. Engelmann, J. Dingemann, and C. Petersen,

“Survival with the native liver after laparoscopic versus conventional Kasai portoenterostomy in infants with biliary atresia: a prospective trial,” Annals of Surgery, vol. 253, no. 4, pp. 826–830, 2011.


[6] H. Masoomi, B. Buchberg, B. Nguyen, V. Tung, M. J. Stamos, and S. Mills, “Outcomes of laparoscopic versus open colectomy in elective surgery for diver- ticulitis,” World journal of surgery, vol. 35, no. 9, pp. 2143–2148, 2011.

[7] N. Katkhouda, R. J. Mason, S. Towfigh, A. Gevorgyan, and R. Essani, “Laparoscopic

versus open appendectomy: a prospective randomized double-blind study,” Annals of Surgery, vol. 242, no. 3, p. 439, 2005.

[8] P. Kazanzides, G. Fichtinger, G. D. Hager, A. M. Okamura, L. L. Whitcomb, and R. H. Taylor, “Surgical and Interventional Robotics - Core Concepts, Technology, and Design [Tutorial],” IEEE Robotics & Automation Magazine, vol. 15,

pp. 122–130, June 2008.

[9] R. H. Taylor, “A Perspective on Medical Robotics,” Proceedings of the IEEE, vol. 94, pp. 1652–1664, Sept. 2006.

[10] T. Liu, “Design and Prototyping of a Three Degrees of Freedom Robotic Wrist

Mechanism for a Robotic Surgery System,” Master’s thesis, Case Western Reserve University, Cleveland, OH, 2010.

[11] R. J. Damiano, “Robotics in cardiac surgery: the Emperor’s new clothes,” The Journal of Thoracic and Cardiovascular Surgery, vol. 134, pp. 559–61, Sept. 2007.

[12] G. S. Guthart and J. K. Salisbury Jr., “The Intuitive™ telesurgery system: overview and application,” in Robotics and Automation, 2000. Proceedings. ICRA ’00. IEEE International Conference on, vol. 1, pp. 618–621, 2000.

[13] K. Kirkpatrick, “Surgical Robots Deliver Care More Precisely,” Communications of the ACM, vol. 57, pp. 14–16, Aug. 2014.


[14] F. Tendick, R. W. Jennings, G. K. Tharp, and L. W. Stark, “Sensing and Manipulation Problems in Endoscopic Surgery: Experiment, Analysis, and Observation,” Presence, vol. 2, no. 1, pp. 66–81, 1993.

[15] J. D. Pitcher, J. T. Wilson, S. D. Schwartz, and J.-P. Hubschman, “Robotic

Eye Surgery: Past, Present, and Future,” J Comput Sci Syst Biol, vol. S3, pp. 1–4, 2012.

[16] M. Bonfe, F. Boriero, R. Dodi, P. Fiorini, A. Morandi, R. Muradore, L. Pasquale, A. Sanna, and C. Secchi, “Towards automated surgical robotics: A requirements engineering approach,” in Biomedical Robotics and

Biomechatronics (BioRob), 2012 4th IEEE RAS EMBS International Conference on, pp. 56–61, June 2012.

[17] S. Iyer, T. Looi, and J. Drake, “A Single Arm Single Camera System For Automated Suturing,” in Robotics and Automation (ICRA), 2013 IEEE

International Conference on, pp. 239–244, May 2013.

[18] F. Nageotte, P. Zanne, M. de Mathelin, and C. Doignon, “A Circular Needle Path Planning Method for Suturing in Laparoscopic Surgery,” in Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE International

Conference on, pp. 514–519, Apr. 2005.

[19] F. Nageotte, P. Zanne, C. Doignon, and M. de Mathelin, “Stitching Planning in Laparoscopic Surgery: Towards Robot-assisted Suturing,” I. J. Robotic Res., vol. 28, no. 10, pp. 1303–1321, 2009.

[20] A. Dubrowski, R. Sidhu, J. Park, and H. Carnahan, “Quantification of motion characteristics and forces applied to tissues during suturing,” The American Journal of Surgery, vol. 190, no. 1, pp. 131–136, 2005.


[21] T. Frick, D. Marucci, J. Cartmill, C. Martin, and W. Walsh, “Resistance forces acting on suture needles,” Journal of Biomechanics, vol. 34, pp. 1335–1340, Oct. 2001.

[22] R. C. Jackson and M. C. Cavusoglu, “Needle path planning for autonomous

robotic surgical suturing,” in Robotics and Automation (ICRA), 2013 IEEE International Conference on, pp. 1669–1675, May 2013.

[23] S. Leonard, K. L. Wu, Y. Kim, A. Krieger, and P. C. W. Kim, “Smart Tissue Anastomosis Robot (STAR): A Vision-Guided Robotics System for Laparoscopic Suturing,” Biomedical Engineering, IEEE Transactions on, vol. 61,

pp. 1305–1317, Apr. 2014.

[24] D.-L. Chow, R. C. Jackson, M. C. Cavusoglu, and W. Newman, “A novel vision guided knot-tying method for autonomous robotic surgery,” in 2014 IEEE International Conference on Automation Science and Engineering (CASE),

pp. 504–508, IEEE, Aug. 2014.

[25] N. Abolhassani, R. Patel, and M. Moallem, “Needle insertion into soft tissue: A survey,” Medical Engineering & Physics, vol. 29, no. 4, pp. 413–431, 2007.

[26] R. Alterovitz, K. Y. Goldberg, J. Pouliot, and I.-C. Hsu, “Sensorless Motion

Planning for Medical Needle Insertion in Deformable Tissues,” Information Technology in Biomedicine, IEEE Transactions on, vol. 13, pp. 217–225, Mar. 2009.

[27] H. Calkins, K. H. Kuck, R. Cappato, J. Brugada, A. John Camm, S. A. Chen,

H. J. G. Crijns, R. J. Damiano, D. W. Davies, J. DiMarco, J. Edgerton, K. Ellenbogen, M. D. Ezekowitz, D. E. Haines, M. Haissaguerre, G. Hindricks, Y. Iesaka, W. Jackman, J. Jalife, P. Jais, J. Kalman, D. Keane, Y. H. Kim, P. Kirchhof, G. Klein, H. Kottkamp, K. Kumagai, B. D. Lindsay, M. Mansour,


F. E. Marchlinski, P. M. McCarthy, J. L. Mont, F. Morady, K. Nademanee, H. Nakagawa, A. Natale, S. Nattel, D. L. Packer, C. Pappone, E. Prystowsky, A. Raviele, V. Reddy, J. N. Ruskin, R. J. Shemin, H. M. Tsao, and D. Wilber, “2012 HRS/EHRA/ECAS expert consensus statement on catheter and surgical

ablation of atrial fibrillation: Recommendations for patient selection, proce- dural techniques, patient management and follow-up, definitions, endpoints, and research trial design,” Journal of Interventional Cardiac Electrophysiology, vol. 33, no. 2, pp. 171–257, 2012.

[28] M. J. H. Lum, D. C. W. Friedman, G. Sankaranarayanan, H. King, K. Fodero,

R. Leuschke, B. Hannaford, J. Rosen, and M. N. Sinanan, “The RAVEN: Design and Validation of a Telesurgery System,” Int. J. Rob. Res., vol. 28, pp. 1183–1197, Sept. 2009.

[29] D. J. Buckmaster, W. S. Newman, and S. D. Somes, “Compliant motion control

for robust robotic surface finishing,” 2008 7th World Congress on Intelligent Control and Automation, pp. 559–564, 2008.

[30] P. Boonvisut, R. Jackson, and M. C. Cavusoglu, “Estimation of Soft Tissue Mechanical Parameters from Robotic Manipulation Data.,” IEEE International

Conference on Robotics and Automation (ICRA), vol. 2012, pp. 4667–4674, Dec. 2012.

[31] E. P. Westebring-van der Putten, R. H. M. Goossens, J. J. Jakimowicz, and

J. Dankelman, “Haptics in minimally invasive surgery–a review.,” Minimally invasive therapy & allied technologies : MITAT : official journal of the Society for Minimally Invasive Therapy, vol. 17, pp. 3–16, Jan. 2008.


[32] K. Li, J. Zhan, B. Pan, Y. Fu, and S. Wang, “A miniature 3-axis distal force sensor for tissue palpation during minimally invasive surgery,” in 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1234– 1239, IEEE, Dec. 2013.

[33] C. Staub, T. Osa, A. Knoll, and R. Bauernschmitt, “Automation of tissue piercing using circular needles and vision guidance for computer aided laparoscopic surgery,” in Robotics and Automation (ICRA), 2010 IEEE International Conference on, pp. 4585–4590, IEEE, May 2010.

[34] Y. Kurose, Y. M. Baek, Y. Kamei, S. Tanaka, K. Harada, S. Sora, A. Morita,

N. Sugita, and M. Mitsuishi, “Preliminary study of needle tracking in a micro- surgical robotic system for automated operations,” in International Conference on Control, Automation and Systems, pp. 627–630, Oct. 2013.

[35] T. Osa, C. Staub, and A. Knoll, “Framework of automatic robot surgery

system using visual servoing,” in Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, pp. 1837–1842, 2010.

[36] R. C. Jackson, R. Yuan, D.-L. Chow, W. Newman, and M. C. Cavusoglu, “Automatic initialization and dynamic tracking of surgical suture threads,”

in 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 4710–4716, IEEE, May 2015.

[37] J. Canny, “A Computational Approach to Edge Detection,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. PAMI-8, pp. 679–698,

Nov. 1986.

[38] A. Frangi, W. Niessen, K. Vincken, and M. Viergever, “Multiscale vessel enhancement filtering,” in Medical Image Computing and Computer-Assisted Intervention - MICCAI’98 (W. Wells, A. Colchester, and S. Delp, eds.), vol. 1496


of Lecture Notes in Computer Science, pp. 130–137, Springer Berlin Heidelberg, 1998.

[39] L. Gang, O. Chutatape, and S.-M. Krishnan, “Detection and measurement of retinal vessels in fundus images using amplitude modified second-order Gaussian

filter,” Biomedical Engineering, IEEE Transactions on, vol. 49, pp. 168–172, Feb. 2002.

[40] Z. Lin, J. Wang, and K.-K. Ma, “Using eigencolor normalization for illumination-invariant color object recognition,” Pattern Recognition, vol. 35,

pp. 2629–2642, Nov. 2002.

[41] K. Arbter and G. Hirzinger, “Real-time visual servoing for laparoscopic surgery. Controlling robot motion with color image segmentation,” IEEE Engineering in Medicine and Biology Magazine, vol. 16, no. 1, pp. 40–45, 1997.

[42] C. Doignon, P. Graebling, and M. de Mathelin, “Real-time segmentation of surgical instruments inside the abdominal cavity using a joint hue saturation color feature,” Real-Time Imaging, vol. 11, pp. 429–442, Oct. 2005.

[43] J. A. Brown and D. W. Capson, “A Framework for 3D Model-Based Visual Tracking Using a GPU-Accelerated Particle Filter,” IEEE Transactions on Visualization and Computer Graphics, vol. 18, pp. 68–80, Feb. 2011.

[44] M. Z. Zia, M. Stark, B. Schiele, and K. Schindler, “Revisiting 3D geometric

models for accurate object shape and pose,” in 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 569–576, IEEE, Nov. 2011.

[45] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics (Intelligent Robotics

and Autonomous Agents). Intelligent robotics and autonomous agents, The MIT Press, Sept. 2005.


[46] L. G. Brown, “A survey of image registration techniques,” ACM Computing Surveys, vol. 24, pp. 325–376, Dec. 1992.

[47] L. Piegl and W. Tiller, The NURBS book (2nd ed.). New York, NY, USA: Springer-Verlag New York, Inc., 1997.

[48] J. Nickolls, I. Buck, M. Garland, and K. Skadron, “Scalable Parallel Programming with CUDA,” Queue, vol. 6, pp. 40–53, Mar. 2008.

[49] H. Reichenspurner, R. J. Damiano, M. Mack, D. H. Boehm, H. Gulbins,

C. Detter, B. Meiser, R. Ellgass, and B. Reichart, “Use of the voice-controlled and computer-assisted surgical system ZEUS for endoscopic coronary artery bypass grafting,” The Journal of Thoracic and Cardiovascular Surgery, vol. 118, pp. 11–16, July 1999.

[50] J. P. Slater, T. Guarino, J. Stack, K. Vinod, R. T. Bustami, J. M. Brown, A. L. Rodriguez, C. J. Magovern, T. Zaubler, K. Freundlich, and G. V. S. Parr, “Cerebral oxygen desaturation predicts cognitive decline and longer hospital stay after cardiac surgery.,” The Annals of thoracic surgery, vol. 87, pp. 36–44;

discussion 44–5, Jan. 2009.

[51] N. Chentanez, R. Alterovitz, D. Ritchie, L. Cho, K. K. Hauser, K. Goldberg, J. R. Shewchuk, and J. F. O’Brien, “Interactive Simulation of Surgical Needle Insertion and Steering,” in Proceedings of ACM SIGGRAPH 2009, pp. 88:1–10,

Aug. 2009.

[52] A. Maghsoudi and M. Jahed, “Multi-parameter sensitivity analysis for guided needle insertion through soft tissue,” in Biomedical Engineering and Sciences (IECBES), 2010 IEEE EMBS Conference on, pp. 97–100, 2010.

[53] S. Misra, K. B. Reed, B. W. Schafer, K. T. Ramesh, and A. M. Okamura, “Observations and models for needle-tissue interactions,” in Robotics and


Automation, 2009. ICRA ’09. IEEE International Conference on, pp. 2687–2692, May 2009.

[54] A. M. Okamura, C. Simone, and M. D. O’Leary, “Force modeling for needle insertion into soft tissue,” Biomedical Engineering, IEEE Transactions on, vol. 51,

no. 10, pp. 1707–1716, 2004.

[55] R. C. Jackson and M. C. Cavusoglu, “Modeling of needle-tissue interaction forces during surgical suturing,” in Robotics and Automation (ICRA), 2012 IEEE International Conference on, pp. 4675–4680, May 2012.

[56] P. Boonvisut and M. C. Cavusoglu, “Estimation of Soft Tissue Mechanical Parameters from Robotic Manipulation Data.,” Mechatronics, IEEE/ASME Transactions on, vol. 18, pp. 1602–1611, Oct. 2013.

[57] A. R. Cohen, S. Lohani, S. Manjila, S. Natsupakpong, N. Brown, and M. C. Cavusoglu, “Virtual reality simulation: basic concepts and use in endoscopic neurosurgery training.,” Child’s nervous system : ChNS : official journal of the International Society for Pediatric Neurosurgery, vol. 29, pp. 1235–44, Aug.

2013.

[58] M. Bro-Nielsen, “Finite element modeling in surgery simulation,” Proceedings of the IEEE, vol. 86, pp. 490–503, Mar. 1998.

[59] J. van den Berg, S. Miller, D. Duckworth, H. Hu, A. Wan, X.-Y. Fu, K. Goldberg,

and P. Abbeel, “Superhuman performance of surgical tasks by robots using iterative learning from human-guided demonstrations,” in Robotics and Automation (ICRA), 2010 IEEE International Conference on, pp. 2074–2081, May 2010.

[60] A. Coates, P. Abbeel, and A. Y. Ng, “Learning for control from multiple demonstrations,” in Proceedings of the 25th International Conference on Machine


Learning (ICML-08) (W. W. Cohen, A. Mccallum, and S. T. Roweis, eds.), pp. 144–151, 2008.

[61] M. Kaiser and R. Dillmann, “Building elementary robot skills from human demonstration,” in Robotics and Automation, 1996. Proceedings., 1996 IEEE

International Conference on, vol. 3, pp. 2700–2705 vol.3, Apr. 1996.

[62] J. Rosen, J. D. Brown, L. Chang, M. N. Sinanan, and B. Hannaford, “Generalized approach for modeling minimally invasive surgery as a stochastic process using a discrete Markov model,” Biomedical Engineering, IEEE Transactions on, vol. 53, pp. 399–413, Mar. 2006.

[63] H. Mayer, D. Burschka, A. Knoll, E. U. Braun, R. Lange, and R. Bauernschmitt, “Human-machine skill transfer extended by a scaffolding framework,” in Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, pp. 2866–2871, May 2008.

[64] T. Osa, K. Harada, N. Sugita, and M. Mitsuishi, “Trajectory planning under different initial conditions for surgical task automation by learning from demonstration,” in Robotics and Automation (ICRA), 2014 IEEE International Conference on, pp. 6507–6513, May 2014.

[65] S. P. DiMaio and S. E. Salcudean, “Needle steering and motion planning in soft tissues,” Biomedical Engineering, IEEE Transactions on, vol. 52, pp. 965–974, June 2005.

[66] R. Alterovitz, K. Goldberg, and A. Okamura, “Planning for Steerable Bevel-

tip Needle Insertion Through 2D Soft Tissue with Obstacles,” in Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE International Conference on, pp. 1640–1645, Apr. 2005.


[67] L. Vancamberg, A. Sahbani, S. Muller, and G. Morel, “Needle path planning for digital breast tomosynthesis biopsy,” in Robotics and Automation (ICRA), 2010 IEEE International Conference on, pp. 2062–2067, May 2010.

[68] R. C. Jackson, R. Yuan, D.-L. Chow, W. Newman, and M. C. Cavusoglu, “Real-

time visual tracking of dynamic surgical suture threads,” Automation Science and Engineering, IEEE Transactions on, submitted for review.

[69] M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active contour models,” International Journal of Computer Vision, vol. 1, no. 4, pp. 321–331, 1988.

[70] V. Kaul, A. Yezzi, and Y. J. Tsai, “Detecting curves with unknown endpoints

and arbitrary topology using minimal paths,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, pp. 1952–1965, Oct. 2012.

[71] G. Steger, “An unbiased detector of curvilinear structures,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, pp. 113–125, Feb. 1998.

[72] R. D. Wedowski, A. R. Farooq, L. N. Smith, and M. L. Smith, “High speed, multi-scale tracing of curvilinear features with automated scale selection and enhanced orientation computation,” in High Performance Computing and Simulation (HPCS), 2010 International Conference on, pp. 410–417, June 2010.

[73] T. Yamaguchi and S. Hashimoto, “Fast crack detection method for large-size concrete surface images using percolation-based image processing,” Machine Vision and Applications, vol. 21, pp. 797–809, Aug. 2010.

[74] S. Javdani, S. Tandon, J. Tang, J. F. O’Brien, and P. Abbeel, “Modeling and

perception of deformable one-dimensional objects,” in Robotics and Automation (ICRA), 2011 IEEE International Conference on, pp. 1607–1614, May 2011.


[75] J. Schulman, A. Lee, J. Ho, and P. Abbeel, “Tracking deformable objects with point clouds,” in Robotics and Automation (ICRA), 2013 IEEE International Conference on, pp. 1130–1137, IEEE, May 2013.

[76] N. Padoy and G. Hager, “Deformable Tracking of Textured Curvilinear

Objects,” in Proceedings of the British Machine Vision Conference, pp. 5.1–5.11, BMVA Press, 2012.

[77] N. Padoy and G. D. Hager, “3D thread tracking for robotic assistance in telesurgery,” in Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on, pp. 2102–2107, Sept. 2011.

[78] N. Komodakis, G. Tziritas, and N. Paragios, “Fast, Approximately Optimal Solutions for Single and Dynamic MRFs,” in Computer Vision and Pattern Recognition, 2007. CVPR ’07. IEEE Conference on, pp. 1–8, June 2007.

[79] T. H. Heibel, B. Glocker, M. Groher, N. Paragios, N. Komodakis, and N. Navab,

“Discrete tracking of parametrized curves,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 1754–1761, June 2009.

[80] R. G. N. Meegama and J. C. Rajapakse, “NURBS snakes,” Image and Vision Computing, vol. 21, no. 6, pp. 551–562, 2003.

[81] J. M. Sackier and Y. Wang, “Robotically assisted laparoscopic surgery,” Surgical Endoscopy, vol. 8, pp. 63–66, Jan. 1994.

[82] C. Staub, S. Can, B. Jensen, A. Knoll, and S. Kohlbecher, “Human-computer

interfaces for interaction with surgical tools in robotic surgery,” in Biomedical Robotics and Biomechatronics (BioRob), 2012 4th IEEE RAS EMBS International Conference on, pp. 81–86, June 2012.


[83] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, and B. MacIntyre, “Recent advances in augmented reality,” IEEE Computer Graphics and Applications, vol. 21, no. 6, pp. 34–47, 2001.

[84] M. Basu, “Gaussian-based edge-detection methods - a survey,” IEEE

Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 32, pp. 252–260, Aug. 2002.

[85] G. S. Muralidhar, A. C. Bovik, and M. K. Markey, “A Steerable, Multiscale Singularity Index,” Signal Processing Letters, IEEE, vol. 20, pp. 7–10, Jan.

2013.

[86] H. Martinsson, F. Gaspard, A. Bartoli, and J. Lavest, “Reconstruction of 3D Curves for Quality Control,” in Scandinavian Conference on Image Analysis (B. Ersbøll and K. Pedersen, eds.), pp. 760–769, Springer Berlin / Heidelberg,

2007.

[87] D. Adalsteinsson and J. A. Sethian, “A fast level set method for propagating interfaces,” Journal of Computational Physics, vol. 118, no. 2, pp. 269–277,

1995.

[88] E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numerische Mathematik, vol. 1, no. 1, pp. 269–271, 1959.

[89] T.-J. Cham and R. Cipolla, “Stereo coupled active contours,” Proceedings of

IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1094–1099, June 1997.

[90] R. C. Jackson, T. Liu, and M. C. Cavusoglu, “Catadioptric Stereo Tracking for Three Dimensional Shape Measurement of MRI Guided Catheters,” in Robotics

and Automation (ICRA), 2016 IEEE International Conference on, submitted for review.


[91] S. W. Hetts, M. Saeed, A. Martin, P. Lillaney, A. Losey, E. J. Yee, R. Sincic, L. Do, L. Evans, V. Malba, A. F. Bernhardt, M. W. Wilson, A. Patel, R. L. Arenson, C. Caton, and D. L. Cooke, “Magnetically-Assisted Remote Controlled Microcatheter Tip Deflection under Magnetic Resonance Imaging,”

J Vis Exp, no. 74, 2013.

[92] A. E. Campbell-Washburn, A. Z. Faranesh, R. J. Lederman, and M. S. Hansen, “MR Sequences and Rapid Acquisition for MR-Guided Interventions,” Magnetic Resonance Imaging Clinics of North America, Aug. 2015.

[93] K. Ratnayaka, A. Z. Faranesh, M. A. Guttman, O. Kocaturk, C. E. Saikus,

and R. J. Lederman, “Interventional cardiovascular magnetic resonance: still tantalizing.,” Journal of cardiovascular magnetic resonance : official journal of the Society for Cardiovascular Magnetic Resonance, vol. 10, p. 62, 2008.

[94] P. Lillaney, C. Caton, A. J. Martin, A. D. Losey, L. Evans, M. Saeed, D. L.

Cooke, M. W. Wilson, and S. W. Hetts, “Comparing deflection measurements of a magnetically steerable catheter using optical imaging and MRI,” Med Phys, vol. 41, p. 22305, Feb. 2014.

[95] R. S. Penning, J. Jung, J. A. Borgstadt, N. J. Ferrier, and M. R. Zinn, “Towards

closed loop control of a continuum robotic manipulator for medical applications,” Intelligent Robots and ..., pp. 5139–5146, 2011.

[96] T. Greigarn and M. C. Cavusoglu, “Pseudo-rigid-body model and kinematic analysis of MRI-actuated catheters,” in 2015 IEEE International Conference

on Robotics and Automation (ICRA), pp. 2236–2243, IEEE, May 2015.

[97] E. Ayvali and J. P. Desai, “Accurate in-plane and out-of-plane ultrasound- based tracking of the discretely actuated steerable cannula,” in 2014 IEEE


International Conference on Robotics and Automation (ICRA), pp. 5896–5901, IEEE, May 2014.

[98] D. B. Camarillo, C. F. Milne, C. R. Carlson, M. R. Zinn, and J. K. Salisbury, “Mechanics modeling of tendon-driven continuum manipulators,” IEEE

Transactions on Robotics, vol. 24, no. 6, pp. 1262–1273, 2008.

[99] D. B. Camarillo, K. E. Loewke, C. R. Carlson, and J. K. Salisbury, “Vision based 3-D shape sensing of flexible manipulators,” in 2008 IEEE International Conference on Robotics and Automation, pp. 2940–2947, 2008.

[100] C. Geyer and K. Daniilidis, “Catadioptric projective geometry,” International

Journal of Computer Vision, vol. 45, no. 3, pp. 223–243, 2001.

[101] J. Gluckman and S. Nayar, “Planar catadioptric stereo: geometry and calibration,” in Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149), vol. 1, pp. 22–28, IEEE

Comput. Soc, 1999.

[102] J. Gluckman and S. Nayar, “Rectified catadioptric stereo sensors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 224–236, 2002.

[103] H.-H. P. Wu, M.-T. Lee, P.-K. Weng, and S.-L. Chen, “Epipolar geometry of catadioptric stereo systems with planar mirrors,” Image and Vision Computing, vol. 27, pp. 1047–1061, July 2009.

[104] G. Bradski, “The OpenCV Library,” Dr. Dobb’s Journal of Software Tools,

vol. 25, pp. 120–125, 2000.

[105] T. Liu and M. C. Cavusoglu, “Three Dimensional Modeling of an MRI Actuated Steerable Catheter System.,” IEEE International Conference on Robotics and


Automation : ICRA : [proceedings] IEEE International Conference on Robotics and Automation, vol. 2014, pp. 4393–4398, Jan. 2014.

[106] T. Greigarn and M. C. Cavusoglu, “Task-Space Motion Planning of MRI- Actuated Catheters for Catheter Ablation of Atrial Fibrillation.,” Proceedings

of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 2014, pp. 3476–3482, Sept. 2014.

[107] D. A. Sherris and E. B. Kern, Essential Surgical Skills. Saunders, 2 ed., 2004.

[108] N. B. Semer, Practical plastic surgery for nonsurgeons. Hanley & Belfus, 2001.

[109] P. I. Corke, “Visual Control Of Robot Manipulators – A Review,” in Visual Servoing, pp. 1–31, World Scientific, 1994.
