
MULTIMODAL IMAGING, COMPUTER VISION, AND AUGMENTED REALITY

FOR MEDICAL GUIDANCE

A Dissertation

Presented to

The Graduate Faculty of The University of Akron

In Partial Fulfillment

of the Requirements for the Degree

Doctor of Philosophy

Christopher Andrew Mela

December, 2018

MULTIMODAL IMAGING, COMPUTER VISION, AND AUGMENTED REALITY

FOR MEDICAL GUIDANCE

Christopher Andrew Mela

Dissertation

Approved:

Advisor: Dr. Yang Liu
Committee Member: Dr. Brian Davis
Committee Member: Dr. Rebecca K. Willits
Committee Member: Dr. Ajay Mahajan
Committee Member: Dr. Yi Pang
Committee Member: Dr. Jiahua Zhu

Accepted:

Interim Department Chair: Dr. Rebecca K. Willits
Interim Dean of the College: Dr. Craig Menzemer
Dean of the Graduate School: Dr. Chand Midha
Date: _______________


ABSTRACT

Surgery is one of the primary treatment options for many types of diseases.

Traditional methods of surgical planning and intraoperative lesion identification rely on sight as well as physical palpation of the suspect region. Since these methods are of low specificity, doctors have begun relying upon technologies to make diagnoses and to help plan and guide surgical procedures. Preoperative imaging technologies such as Magnetic Resonance Imaging (MRI) and X-ray Computed

Tomography (CT) are well suited to aid in diagnostics and operative planning. However, compact technologies with high specificity and resolution that are convenient for intraoperative use are needed to aid in surgical guidance. Methods including fluorescence imaging, intraoperative microscopy and ultrasound have gained significant recent attention towards these ends.

Discussed in this dissertation is the initial design, construction, programming, testing and expansion of a platform technology integrating multimodal medical imaging, computer vision and augmented reality. The platform combines a real-time, head-mounted stereoscopic fluorescence imaging system in-line with a near-eye display. The compact and light-weight assembly provides the user with a wide field-of-view, line-of-sight imaging system that simulates natural stereoscopic vision. Additionally, an ultrasound

imaging module is connected and incorporated into the display, along with a portable fiber microscopy system. Lastly, pre-operative MRI/CT imaging models are incorporated into the system for intraoperative registration and display onto the surgical scene.

Novel software algorithms were developed to enhance system operations.

Fluorescence detection while using a wearable imaging platform was improved through the incorporation of an optical point tracking regime with pulsatile illumination. Optical fiducial marker identification was added to aid in ultrasound and tomographic image registration. Additionally, stereoscopic depth-of-field measurements were used towards the implementation of a fluorescence-to-color video rate co-registration scheme.

System testing was conducted on multiple fronts. Fluorescence imaging sensitivity was evaluated to determine minimum detectable concentrations of fluorescent dye.

Surgical and medical diagnostic simulations were also conducted using optical tissue phantoms to evaluate device performance in relation to traditional methods, and to identify areas of improvement. System resolution was also analyzed both in planar spatial coordinates as well as depth-of-field measurements. The performance of various augmented reality displays was tested with respect to fidelity of fluorescence identification and resolution. Lastly, the system was tested for registration accuracy.

In summary, we have developed a platform integrating intraoperative multimodal imaging, computer vision, and augmented reality, for guiding surgeries and other medical applications.


ACKNOWLEDGEMENTS

Special thanks to my advisor Dr. Yang Liu who took me on when few others would.

Thanks to my lab mates, Tri Quang and Maziyar Askari, for working with me these past few years. Thanks as well to my committee members for, well, being on my committee.

I’m sure reading this will be fun. Additional acknowledgements to Stephen Paterson, Visar

Berki, Drs. Forrest Bao, Vivek Nagarajan and Narrender Reddy for their technical support at various times during my time at the University of Akron. More gratitude towards

Charlotte LaBelle and Sandy Vasenda for assisting with my many and varied administrative needs, as well as to Dr. Daniel Sheffer who accepted my application and was the first person to welcome me at Akron.

Thanks to our clinical collaborators at the Cleveland Clinic, in particular Dr. Frank

Papay who made our collaborations possible. Additional thanks to Dr. Stephen Grobmyer who kindly brought our system into the , and Drs. Edward Maytin, Maria Madajka and Eliana Duraes for collaborating with us on clinical imaging research.

A big thanks to NASA, William Thompson at Glenn Research Center and Baraquiel

Reyna at Johnson, as well as to the whole NASA Space Technologies Research Fellowship team.


TABLE OF CONTENTS

Page

LIST OF TABLES………………………………………………………………………xiv

LIST OF FIGURES…………………………………………………………………….xix

CHAPTER

I. INTRODUCTION………………………………………………………………………1

1.1. Imaging for Surgical Oncology………………………………………………1

1.2. Intraoperative Fluorescence Imaging for Surgical Interventions…………….5

1.2.1. Fluorescent Dyes……………………………………………………5

1.2.2. Fluorescence Imaging in Surgery…………………………………..8

1.2.3. Instrumentation in Fluorescence Imaging…………………………..9

1.2.4. Fluorescence Imaging Systems……………………………………11

1.3. Augmented Reality in Medical Imaging…………………………………….14

1.4. Multimodal Imaging………………………………………………………..17

1.4.1. Ultrasound…………………………………………………………17


1.4.2. Radiology………………………………………………………….18

1.4.3. MRI/CT……………………………………………………………18

1.5. Scope and Aims……………………………………………………………..19

II. STEREOSCOPIC IMAGING GOGGLES FOR MULTIMODAL INTRAOPERATIVE IMAGE GUIDANCE……………………………………….25

2.1. Introduction………………………………………………………………….25

2.2. Materials and Methods………………………………………………………28

2.2.1. Imaging System Instrumentation………………………………….28

2.2.2. Image Acquisition, Processing, Registration and Display………..32

2.2.3. System Characterization…………………………………………..34

2.2.4. Image-Guided Surgery in Chicken Ex Vivo………………………36

2.2.5. Telemedicine………………………………………………………37

2.3. Results……………………………………………………………………….38

2.3.1. System Characterization…………………………………………..38

2.3.2. Image-Guided Surgeries in Chicken………………………………41

2.3.3. Telemedicine………………………………………………………46

2.4. Discussion…………………………………………………………………...47

2.4.1. ………………………………………………………..48

2.4.2. Characterization…………………………………………………...48


2.4.3. Microscopy………………………………………………………..49

2.4.4. Ultrasound…………………………………………………………50

2.4.5. Future Work……………………………………………………….50

2.5. Conclusions………………………………………………………………….51

III. METHODS OF CHARACTERIZATION FOR A STEREOSCOPIC HEAD-MOUNTED FLUORESCENCE IMAGING SYSTEM……………………………….52

3.1. Introduction………………………………………………………………….52

3.2. Materials and Methods………………………………………………………57

3.2.1. Optical Imaging and Display……………………………………..57

3.2.2. Computation……………………………………………………….59

3.2.3. Fluorescence Detection Sensitivity……………………………….59

3.2.4. Fluorescence Guided Surgical Simulation………………………..64

3.2.5. Resolution Testing………………………………………………..67

3.2.6. Display Testing…………………………………………………….69

3.3. Results……………………………………………………………………….70

3.3.1. Fluorescence Detection Sensitivity……………………...... 70

3.3.2. Fluorescence Guided Surgical Simulation………………………..76

3.3.3. Resolution Testing…………………………………………………77

3.3.4. Display Testing……………………………………………………78


3.4. Discussion…………………………………………………………………..81

3.4.1. Dark Room Study…………………………………………………81

3.4.2. Tissue Phantom Study…………………………………………….82

3.4.3. Display Testing……………………………………………………84

3.5. Conclusions………………………………………………………………….85

IV. APPLICATION OF A DENSE FLOW OPTICAL TRACKING ALGORITHM WITH PULSED LIGHT IMAGING FOR ENHANCED FLUORESCENCE DETECTION………………………………………………………………………….86

4.1. Introduction………………………………………………………………….86

4.1.1. Enhancing Fluorescence Imaging for Clinical Application……….86

4.1.2. Pulsed Light Imaging for Skin Cancer Therapy…………………..90

4.2. Materials and Methods………………………………………………………92

4.2.1. Computation……………………………………………………….92

4.2.2. Instrumentation……………………………………………………92

4.2.3. Illumination………………………………………………………..93

4.2.4. Fluorescent Point Tracking………………………………………..96

4.2.5. Fluorescence Sensitivity………………………………………….97

4.2.6. Fluorescent Point Tracking Accuracy ………………………...... 98

4.3. Results……………………………………………………………………..101

4.3.1. Pulsed Light Imaging…………………………………………….101


4.3.2. Fluorescent Point Tracking Accuracy……………………………103

4.3.3. Fluorescence Sensitivity………………………………………...104

4.4. Discussion………………………………………………………………….106

4.4.1. Pulsed Light Imaging…………………………………………….106

4.4.2. Fluorescent Point Tracking Accuracy……………………………107

4.4.3. Fluorescence Sensitivity…………………………………………108

4.5. Conclusions…………………………………………………………….....109

V. MULTIMODAL IMAGING GOGGLE WITH AUGMENTED REALITY COMBINING FLUORESCENCE, ULTRASOUND AND TOMOGRAPHICAL IMAGING………………………………………………………………………..111

5.1. Introduction………………………………………………………………..111

5.1.1. Single and Multimode Imaging………………………………….111

5.1.2. Multimodal Image Registration………………………………….112

5.1.3. Augmented Reality Fluorescence Imaging System with Multimode

Registration…………………………………………………………..116

5.2. Materials and Methods…………………………………………………….117

5.2.1. Optical Imaging and Display…………………………………….117

5.2.2. Ultrasound Imaging……………………………………………..119

5.2.3. Computation……………………………………………………..119

5.2.4. Camera Calibration………………………………………………119


5.2.5. Fluorescence to Color Registration………………………………121

5.2.6. 3D Object Registration…………………………………………..124

5.2.7. Registration Lite………………………………………………….130

5.2.8. Ultrasound Registration………………………………………….131

5.2.9. Ultrasound Image Classification…………………………………134

5.2.10. Microscopic Imaging…………………………………………...135

5.3. Results……………………………………………………………………..137

5.3.1. Fluorescence to Color Registration………………………………137

5.3.2. 3D Object Registration…………………………………………..139

5.3.3. Ultrasound Registration………………………………………….141

5.3.4. Microscopic Imaging…………………………………………….143

5.4. Discussion………………………………………………………………….145

5.4.1. Fluorescence to Color Registration………………………………145

5.4.2. 3D Object and Ultrasound Registration………………………….147

5.4.3. Microscopic Imaging…………………………………………….148

5.5. Conclusions………………………………………………………………..148

VI. REAL TIME 3D IMAGING AND AUGMENTED REALITY FOR FORENSIC APPLICATIONS……………………………………………………………….149

6.1. Introduction………………………………………………………………..149


6.2. Materials and Methods…………………………………………………….152

6.2.1. Forensic Imaging System………………………………………..152

6.2.2. Fluorescence Detection Sensitivity………………………………155

6.2.3. Simulated Crime Scene………………………………………….157

6.2.4. Image Processing………………………………………………..158

6.2.5. 3D Scene Creation……………………………………………….159

6.3. Results……………………………………………………………………..160

6.3.1. Fluorescence Detection Sensitivity………………………………160

6.3.2. Simulated Crime Scene………………………………………….162

6.3.3. 3D Scene Creation……………………………………………….165

6.4 Discussion…………………………………………………………………..166

6.4.1. Fluorescence Detection Sensitivity………………………………166

6.4.2. Simulated Crime Scene………………………………………….169

6.4.3. 3D Scene Creation……………………………………………….169

6.5. Conclusions………………………………………………………………..170

VII. REAL-TIME DUAL-MODAL IMAGING SYSTEM……………………..171

7.1 Introduction…………………………………………………………………171

7.2. Materials and Methods…………………………………………………….176


7.2.1. Dual-Mode Imaging……………………………………………..176

7.2.2. Illumination………………………………………………………177

7.2.3. Image Processing………………………………………………...178

7.2.4. Alignment and Resolution………………………………………..178

7.2.5. Depth Penetration………………………………………………..180

7.2.6. Imaging Studies…………………………………………………..182

7.2.7. Statistical Analysis……………………………………………….184

7.3. Results………………………………………………………………….....185

7.3.1. Alignment and Resolution……………………………………….185

7.3.2. Depth Penetration………………………………………………..185

7.3.3. Imaging Studies………………………………………………….187

7.4. Discussion………………………………………………………………….190

7.4.1. Depth Penetration………………………………………………...190

7.4.2. Imaging Studies………………………………………………….191

7.5. Conclusions………………………………………………………………..193

VIII. CONCLUSION…………………………………………………………………..194

BIBLIOGRAPHY………………………………………………………………………197

APPENDICES………………………………………………………………………….214


APPENDIX A………………………………………………………………….221

APPENDIX B………………………………………………………………….225

APPENDIX C………………………………………………………………….227

APPENDIX D………………………………………………………………….229


LIST OF FIGURES

Figure Page

2.1 Prototype imaging goggle system. (A) Schematic of 2-sensor setup. (B) Photo of 2-sensor setup for stereoscopic fluorescence imaging. (C) Photo of 4-sensor setup for simultaneous stereoscopic color reflectance imaging and fluorescence imaging. Top 2 sensors were for reflectance imaging and bottom 2 sensors were for fluorescence imaging. The horizontal and vertical inter-sensor distances are labeled in blue and red, respectively. (D) Imaging system with its handheld in vivo microscopy probe. (E) Schematic of 4-sensor setup. (F) Overall system diagram depicting all components and connections during a typical operation with the hand-held microscopy module connected. Alternatively, a portable ultrasound scanner with transducer could be connected to the computational unit. ……………………………………………….....30 2.2 System Characterization. (A) Modulation transfer functions (MTF) for the NIR imaging detector as a function of cycles per . (B) FOV measurements for the NIR imaging detector taken versus working distances of 20, 30, and 40 cm. Vertical and Horizontal FOV measurements were recorded. (C) Pixel intensity for the detected fluorescent emissions from excited solutions of ICG/DMSO of varying ICG concentration, as detected by the NIR imaging detector. The detected fluorescent intensity decreased with increasing working distance. In addition, the intensity of the fluorescence at any working distance increased linearly with dye concentration (R2 = 0.96). (D) The detected fluorescence from an injection of a serial dilution of ICG/DMSO into 0.5 mm holes cut into the surface of a chicken breast. Imaging was conducted at 40 cm working distance under 785 low pass filtered illumination. ………………………………………………………………………………39 2.3 Image-guided surgeries aided by the imaging goggles. Intraoperative imaging using the 2-sensor setup (A) with the addition of unfiltered NIR light for the imaging of anatomical data, and (B) without unfiltered NIR light. Goggle aided stereoscopic imaging of 2 fluorescent targets (blue arrows) under the skin of a chicken. The image from the hand-held microscope was displayed in the top left corner of each frame of the large FOV image, displaying a magnified view of the fluorescent targets with higher resolution (yellow arrows). (C) Intraoperative imaging using the 4-sensor setup: The anatomical information from the color reflectance image; (D) the functional information from fluorescence image. The detected fluorescence was pseudocolored green to facilitate visualization; (E) the composite fluorescence and color reflectance images displayed to the user.……………………………………………………………………………………………….42 2.4 Goggle aided stereoscopic imaging with ultrasound. Two fluorescent targets (blue arrows) implanted into the chicken breast at depths of approximately 3 mm. Ultrasound imagery was displayed in picture-in-picture mode at the upper left of the goggle imaging frame. The ultrasound transducer (purple arrow) was capable of detecting the implanted fluorescent tube as a dark region (orange arrow), similar to a large vessel or fluid filled sac. Imaging was conducted with (A) and without (B) unfiltered NIR illumination. …………………………………………………………..44


2.5 The surgical resection of fluorescent tissues in chicken guided by the imaging goggle with its hand- held microscope. High resolution microscopic images were incorporated in picture-in-picture mode into the wide FOV stereoscopic goggle frames. Images displayed were illuminated using our light source with unfiltered NIR components for anatomical data (A-D) and using only the low pass filtered light (E-H). The images were from four distinct time points during the resection of the fluorescent tissue: (A & E) Chicken pre-injection; (B & F) post-injection and prior to any resection; (C & G) after a partial resection (excised tissue indicated by red arrows), note the residual lesions in both the goggle and microscope images (purple arrows), orange arrows indicate small residual lesions that were only revealed by the microscopic imaging; and (D & H) after completed resection and removal of residual fluorescent tissues (green arrows).……………………………………….45 2.6 Telemedicine of the real-time imaging goggle frames. (A) The fluorescence video stream as seen through the goggle display, transmitted via 4G LTE network. (B) The received fluorescence video frame displayed on the remote viewer’s smartphone. …………………………………………….47 3.1 Head-mounted imaging system consisting of stereoscopic CCD imaging sensors with large format focal lenses, augmented reality (AR) display and loupe-style head mount. In the pictured orientation, the sensors were filtered for NIR wavelengths. ………………………………………58 3.2 Well-patterned tissue phantom (A) and 3D printed mold (B). The wells were made to hold 6 different volumes of phantom solution, and 5 different concentrations of dye were contained in one molded phantom. The box used to contain the mold (B, in white) was also used to create additional solid rectangular layers of tissue phantom (not patterned) to be placed on top of the filled, patterned phantom to simulate fluorescence under a depth of tissue. ………………………………………..63 3.3 Mold design for creating surgical tissue phantoms (A) and a tissue phantom with fluorescent inclusion (B). ……………………………………………………………………………………..65 3.4 Resolution target (A) imaged from directly overhead. The first method of determining minimum discernable resolution of the system sensors was conducted by plotting across an imaged bar pattern to see whether three distinct troughs were present (B). ……………………………………………68 3.5 Minimum dye concentrations required to achieve SBR values of 1.2, 1.5 and 2 using the centrifuge tube volumes of ICG in DMSO. Tests were conducted over a range of working distances and excitation light intensities. ………………………………………………………………………..71 3.6 Minimum dye concentrations required to achieve a SBR of 2 in tissue phantom at a working distance of 20 cm. Readings were taken at a range of fluorescent inclusion volumes, inclusion depths in the tissue phantom and excitation intensities. Results indicate that the 1 mW/cm2 excitation intensity provided the lowest minimum dye concentrations to achieve the desired SBR in all cases. ………74 4.1 Example of a motion artifact in pulsed light imaging, resulting from a camera translation during frame capture. (A) Image of a pig ear with green false-colored fluorescent lesion indicated by the red arrow. The fluorescence has been differentiated through pulsed light imaging. (B) The same pig ear captured immediately following a camera translation. Blue arrows indicate the motion artifacts incurred by subtracting sequential frames acquired during the movement. 
The disparity in ear location between sequential frames caused the motion artifact. ……………………………….89 4.2 Diagram depicting system connections and basic usage. ………………………………………….94 4.3 Signal pulse trains from the microcontroller to the PC and the excitation LED, and the trigger signal from the PC to the camera. The pulses from the microcontroller to the PC and LED were of the same width, but the LED waveform was delayed by 10 µs from the PC bound waveform to allow for data transfer time. Camera captures occurred after each high and low pulse was received from the microcontroller following a 20 µs delay to account for LED rise and fall times, in order to avoid frame capture during a transient state. Camera exposure was set to one half the pulse width for the same reason. ……………………………………………………………………………………….95


4.4 Test for the accuracy of fluorescent pixel detection and non-fluorescent pixel rejection. A fluorescent dye filled cell culture dish was placed on top of a non-fluorescent tissue phantom block. The camera position was translated overhead of the block during imaging to determine whether motion artifacts were detected. ………………………………………………………………….100 4.5 Pig ear with topically applied PpIX solution (green), imaged under pulsed excitation. Clockwise from top left: Low pulse state without fluorescence excitation; High pulse state with excitation; the subtraction result from the High and Low frames, following a threshold and intensity based false color application; the subtraction result added back to the Low state image. …………………….102 5.1 Stereoscopic imaging sensors mounted onto a 3D printed housing. The imaging module was connected to a loupe-style head mount via articulating modular hose connectors (blue). ……….118 5.2 Chessboard pattern with co-registered fluorescent and color images. A NIR LED was reflected off of the chessboard to make the white squares visible through the NIR filtered fluorescent imaging sensors. The detected NIR squares were false colored blue and registered to the color image frame using the previously calculated transformation matrix. The correct transformation matrix for accurate registration was selected using the working distance (lower left) estimated by the concurrently calculated stereo depth map of the object plane. Registration error was estimated by comparing color and NIR chessboard corner locations. …………………………………………123 5.3 Registration accuracy assessment test for registering a 3D object with the goggle images. (A) The 3D printed cube (blue) with fiducial markers placed for optimal alignment. The markers have been detected by the system, and green dots have been placed at the selected registration corners. (B) The virtual cube (red) in 3D space. The cube is contained within the projection matrix (gray box) whose virtual fiducial points are indicated in red. (C) Co-registered image of the blue cube with the virtual red cube (purple). The red arrow indicates a small registration misalignment where the blue of the real-world cube can be seen. (D) Using the stylus with affixed fifth Aruco marker to assign an interior fiducial point for registration correction. ………………………………………126 5.4 The fluorescent tissue phantom (A) used for 3D object registration with fiducial markers as detected by goggle cameras. (B) The 3D virtual heart model (Credit: Dr. Jana [265]). ……………………129 5.5 Ultrasound transducer with affixed fiducial marker array (A). The registration corners in the array were marked in green. The cropped and color-coded ultrasound image (B) was labeled with virtual fiducial markers in red. The two sets of markers were co-registered using a perspective transform and the fluorescence was false colored purple for visibility. ……………………………………..132 5.6 Classified ultrasound image using VGGNet neural network classifier, left, and the transducer location from which the image was captured, right. Classification was conducted in real-time (< 0.1 s latency) and at video rate (>15 fps)……………………………………………………………...135 5.7 Fiber optic fluorescence microscope (A, B) passed excitation wavelength light to and from the target via a flexible fiber optic imaging bundle. The beam splitter separated the excitation from emission wavelengths, preventing image contamination at the imaging sensor. Additional bandpass filtration at the light source and imager further enhanced sensitivity. 
The system achieved fine resolution down to approximately 20 µm (C) and returned intricate fluorescent tissue morphology (D). ….136 5.8 Dual fluorophore fluorescence to color image registration conducted using the goggle system. The stereoscopic fluorescence imaging cameras have been filtered to simultaneously image ICG (A) and PpIX (B). The fluorescence was co-registered with color imaging data (C). …………………….139 5.9 Registration of the 3D heart model to the fluorescent tissue phantom via fiducial markers as detected by the goggle cameras. The initial registration (A) experienced some misalignment (blue arrow) between the fluorescent tube (yellow line) and the left common carotid (green line). The misalignment was corrected using the stylus to select the misaligned points, causing the registration to shift (B), and the target anatomies to become more closely aligned. ………………………….141


5.10 Registration of the color-coded ultrasound image (purple) to the transducer. The fluid filled hollow in the transducer image aligned with the fluorescence observed by the goggle cameras in the tissue phantom (green). …………………………………………………………………………………143 5.11 Plot of Signal-to-Background Ratio versus Dye Concentration for the fiber optic microscope, when detecting ICG in a tissue phantom. The background of the phantom was set at a 50 nM ICG concentration, and an SBR of 2 was achieved at a signal concentration of about 370 nM. ……….144 5.12 Frame from the fluorescence imaging cameras with the image from the fiber microscope displayed in the top left corner. A microscopic fluorescent boundary not resolved by the goggle cameras is indicated by the green arrow. The red arrow indicates ICG fluorescence in the near infrared spectrum, and the fiber optic probe tip can be visualized in use (Orange Arrow). ……………….145 6.1 Forensic imaging goggle. Red Arrow indicates a 3D printed housing containing stereoscopic imaging sensors and M12 lenses. Blue Arrow points out the stereoscopic display, mounted on the back of the 3D printed sensor housing, in-line with the imaging sensors to provide direct line-of- sight imaging. Green Arrow targets the adjustable medical-loupe style headmount. …………….153 6.2 Air Force 1951 Resolution Target. (A) Reflectance mode imaging with illumination intensity set to optimize contrast in the center grouping. (B) Target with spray settled on the surface in microdroplets (pseudo-colored magenta). White light illumination has been dimmed to enhance fluorescent visibility. …………………………………………………………………………….162 6.3 Still frames from the simulated crime scenes. (A) Blood dilutions (1:30 & 1:50) spattered onto a wood floor and nearby door frame. The detected fluorescent emissions were segmented from the background using adaptive thresholding, then normalized and false colored red before being added back to onto the original image. (B) Blood dilutions (1:30 on sink and floor, 1:100 on tub) scattered around a tiled bathroom. The detected fluorescent emissions were again segmented from the background and enhanced, but not normalized to a uniform intensity. Red Arrow: The large blood stain on the sink was difficult to visualize due to the reflected glare off of the ceramic surface. Using a separate adjustable white light source, rather than room lighting, reduced this effect. (C) Dilutions (1:50) applied to wooden furniture. Fluorescent emissions were processed as in (A). Note the ability of the system to detect small droplets of blood on the furniture legs. (D) Blood dilutions (1:30) on the floor of a dirty basement, processed as in (B). ……………………………………………….164 6.4 Image taken from , depicting the ProScope device being used to analyze a fluorescent target (1:50 blood/water dilution) on a carpet sample. The stereo imaging sensors provided a wide field-of-view, while the ProScope gave a close-up, useful for detecting small or faint targets. The ProScope image has been recolored using a Jet colormap and displayed in picture-in-picture mode……………………………………………………………………………………………...165 6.5 3D Reconstruction of a simulated crime scene from captured stereo images, viewed using MeshLab software. The chair, ottoman and floor were all on different optical planes. Object depth varied continuously from front to back. The fluorescent emissions from a treated blood stain (1:30 dilution) was visible and accurately projected onto the floor (red pseudo color). 
The 3D scene could be rotated and viewed from different angles. ……………………………………………………….166 6.6 Speckle pattern of detected fluorescence emissions. (A) Hemascein and Hydrogen Peroxide were sprayed over three blood stains (1:30, 1:50, 1:100 dilutions, left to right). In this case, the target fluorescence from the blood dilutions was much stronger than the background, however the speckled background fluorescence was still visible. (B) Similar speckled pattern of treated blood stain (1:10 dilution) on black cotton fabric. The background was brighter here, however the speckled appearance was distinctly less, likely the result of liquid absorption by the fabric. ………………168 7.1 NIR/VIS vein imaging system. (A) Schematic of the beam splitter device. (B) Beam splitter system in use for hand vein imaging. The subject’s hand was placed over a NIR LED array to conduct


transmission mode vein imaging. Room lighting provided illumination for reflectance mode VIS imaging. ………………………………………………………………………………………….177 7.2 Bar patterns captured via the imaging system for beam splitter alignment. (A) Reflection mode VIS image. (B) Transmission mode NIR image. (C) Properly aligned combination of the inverted and pseudo colored NIR (green bars) and VIS images. (D) Improperly aligned combined images. Note that the green NIR bar pattern is offset from the black VIS bar pattern. …………………………180 7.3 Schematic of the experimental setup for the determination of optical depth penetration. ……….182 7.4 Depth penetration study. (A) Three transparent silicone tubes of 2, 1, and 0.5 mm diameter filled with simulated blood were placed on the surface of a layer of porcine tissue, which was placed atop the NIR illumination source. (B) A 1.5 mm of layer of porcine tissue was placed over the vessels. All vessels remained clearly distinguishable. (C) An additional 1.5 mm layer (3 mm total) was added on top of the vessels. The tubes remained readily visible, though less distinct. (D) Under 4.5 mm all vessel edges became indistinct, appearing as broad diffused lines. (E) All vessels became increasingly less distinct under tissue depths exceeding 5 mm. …………………………………186 7.5 Example of combined beam splitter images. (A) Reflectance mode VIS image. (B) Transmission mode NIR image. (C) NIR image combined with the VIS color image, where the gray scale NIR image has been blended to match the subject’s skin tone, providing a more natural appearance. (D) Alternative processing for the combined NIR/VIS image. The NIR image has been inverted and false colored green before combining with VIS image to enhance contrast. …………………….189 7.6 Various examples of vein imaging of the hand, fingers and wrist. Imaging was successfully conducted on subjects of various skin tones and body types. ……………………………………190


LIST OF TABLES

Table Page

3.1 Required ICG concentrations to achieve a Signal-to-Background Ratio of 1.2, 1.5 or 2. Testing was conducted using a range of excitation intensities and imaging working distances. The values listed were interpolated from fluorescent intensity readings taken on multiple dilutions. Different volumes of each dilution were placed into either 24-well plates at 2 mL, 48-well plates at 1 mL or centrifuge tubes at 2 mL for testing. Significant differences in concentrations were found between container type, SBR and excitation intensity. Differences in concentrations between working distances were varied. …………………………………………………………………………………………….72 3.2. Results from simulated surgeries. The fluorescent intensity of the resected tissue was found to be significantly greater when using the goggle versus the stand mounted system. Time to completion was not significantly different between the two systems, however the goggle time was less on average. ……………………………………………………………………………………………77 3.3 Resolution limit of the goggle cameras. Measurements were taken using a USAF 1951 Resolution Target, at three working distances. The smallest bar pattern on the target was considered discernable if the calculated contrast between the adjacent dark and light bars was at least 20%, or if the 3 adjacent bars appeared on a cross-sectional plot of the pattern as troughs (see Figure 3.4). The line width of the smallest detectable pattern was then set as the system resolution limit. ………………78 3.4. Minimum observable fluorescent emissions as seen through the 3 tested AR displays (M: MyBud, V: and L: LCD). The minimum dye concentration and excitation intensity is indicated for each working distance and phantom volume. All marks were made resulting from a majority vote by a panel of judges. Overall, the Accupix MyBud display achieved the best results. ……………79 3.5 Optimal system resolution as seen through each of the AR displays. Volunteers selected the smallest bar pattern in the USAF 1951 Resolution Target in which they could distinctly discern three adjacent black lines. The Accupix MyBud achieved the best results. ………………………………………80 4.1 Results of the fluorescence point tracking tests, including mean and standard deviation of the percentage of false positives in the image background and on the tissue phantom, the percent of false negatives in the target fluorescence, and the amount of motion artifacts as a percentage of the area of the object that has moved between sequential frames. Results were analyzed for significant difference (Sig Diff) between the point tracking and non-tracking pulsed light methods. ……….103 4.2 Minimum dye concentrations required (standard deviation) to achieve Signal-to-Background ratios (SBRs) of 1.2, 1.5 and 2.0 under various excitation intensities. Experiments were conducted using various concentrations of a 2 mL fluorescent solution of PpIX deposited into the wells of a 24-Well plate, and using 50 µL solutions applied topically to pig skin. Exact dye concentrations required to achieve the listed SBRs were interpolated. Readings were taken using both steady state (DC) excitation and pulsed light. Pulsed light imaging resulted in lower minimum dye concentrations for all categories listed below. ……………………………………………………………………….105


5.1 Error measurements for the fluorescence to color registration algorithm. Measurements included translation error in the x-axis (Tx) and in the y-axis (Ty), as well as rotation error (R) and scaling error (S). ………………………………………………………………………………………….136 5.2 Error measurements for 3D object registration using Aruco fiducial markers. Measurements included translational error in the x-axis (Tx) and in the y-axis (Ty), as well as rotational error (R) and scaling error (S). Both the Fiducial Registration Error (FRE) and Target Registration Error (TRE) were determined. …………………………………………………………………………138 5.3 Error measurements for ultrasound to fluorescence image registration using Aruco fiducial markers. Measurements included translational error in the x-axis (Tx) and in the y-axis (Ty), as well as rotational error (R) and scaling error (S). Both the Fiducial Registration Error (FRE) and Target Registration Error (TRE) were determined. ……………………………………………………..140 5.4 CNN forearm classifier accuracy based on category. Categories correspond to forearm location (Distal, Mid Distal, Mid Proximal, Proximal and Middle) and transducer orientation (Transverse and Longitudinal)………………………………………………………………………………...143 6.1 Dilution ratio of artificial blood in water versus the material on which they were applied. Boxes were marked ‘X’ where the goggle cameras were able to detect Hemascein fluorescence, and ‘y’ where the ProScope was able to detect fluorescence. Positive detection was marked when the fluorescent signal was recorded at an intensity of at least 2 times the background. …………….161 6.2 Optical and fluorescence resolution of the goggle cameras and microscope. ……………………162 7.1 Optical depth penetration of system. The Depth of the synthetic blood vessels beneath layers of tissue was set against the vessel Diameters. The ‘X’ indicates the depths at which a vessel was visible with SBR of at least 2, while ‘x’ marks visibility at SBR > 1.5, and ‘o’ marks partial visibility at SBR > 1.2. …………………………………………………………………………………….187 7.2 Vein counts by visual inspection and using the imaging system for our 25 test subjects. Improvement Factor was defined as the ratio between system and visual counts. The alpha values from paired t-tests between visual and assisted vein counts indicated that the means were statistically different. …………………………………………………………………………………………188


CHAPTER I

INTRODUCTION

The introduction, in particular section 1.1, contains portions of three publication introductions. Included among these are: “Mela CA, Patterson C, Thompson WK, Papay F and Liu Y. (2015) Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance, PLOS ONE, doi: 10.1371/journal.pone.0141956,” “Mela CA, Papay FA, Liu Y. (2016) Intraoperative Fluorescence Imaging and Multimodal Surgical Navigation Using Goggle System. In: Bai M (eds) In Vivo Fluorescence Imaging. Methods in Molecular Biology, vol 1444. Humana Press, New York, NY,” and “Mela CA, Patterson CL, and Liu Y, A Miniature Wearable Optical Imaging System for Guiding Surgeries, in SPIE Photonics West, San Francisco, CA, USA, (2015), vol 9311: SPIE.”

1.1. Imaging for Surgical Oncology

Doctors use imaging technologies to make diagnoses, plan and guide surgical procedures and medical interventions, as well as to evaluate the results. A variety of medical imaging technologies are available for both anatomical and functional imaging.

Magnetic Resonance Imaging (MRI), Computed Tomography (CT), ultrasound and optical imaging are the most relied upon medical imaging technologies in common practice today


[1-10]. Limitations arise, however, when implementing any one of these techniques alone for medical guidance. When imaging a volume at some tissue depth beneath the visible surface, or when imaging functional anatomy such as blood flow or neurological stimulation, correlating the surgical scene with the pre-operative images can be difficult

[6-8]. Additionally, these technologies can be difficult to utilize in the operating room, due to their complexity, size, high costs or potential risk associated with long term use [11, 12].

When excising a lesion, the surgeon must distinguish between tumor and healthy tissues. Due to the mentioned difficulties in applying imaging modalities such as MRI or

CT intraoperatively, surgeries are still often primarily guided by sight and palpation [11,

13-15]. Intraoperative ultrasound has also been used to provide the surgeon with structural and anatomical imagery with good soft tissue contrast, and has been shown to improve tumor detection [16-18]. It can, however, still be challenging to correlate the almost abstract ultrasound imagery with the surgical landscape and perhaps even more difficult to achieve proper transducer orientation. Also, ultrasound has not historically demonstrated the resolution to distinguish small, millimeter-sized satellite lesions which may linger in the surgical margins following a mass excision [16, 17, 19].

However, when a cancerous tissue is not precisely differentiated from healthy tissues, a small number of tumor cells will likely remain inside the margins of the excised wound after the operation, increasing the probability of a local cancer recurrence [20-23].

Traditionally, histological analysis is the gold standard for margin status determination [21-

23]. Histology requires sectioning of all surgically excised tissues, followed by staining and microscopic investigation, which takes time. Should the margins, or edges, of the

returned tissue sections come up positive for the presence of cancer cells then it is likely that tumors have been left inside the body and a follow-up surgery may be required.

Therefore, accurate margin control is needed in addition to volumetric tumor identification and localization.

Intraoperative optical fluorescence imaging has emerged as a promising solution for achieving better surgical margin control in situ [24-27]. Three common methods have been evaluated, including wide field-of-view (FOV) imaging, endoscopy and in vivo microscopy [13, 15, 28]. Endoscopic optical and fluorescence imaging modalities are much like their wide FOV equivalent, only made small for insertion into the body during laparoscopic and other minimally invasive procedures. Microscopic imaging, while lacking in wide view tumor localization capacity, brings a cellular level analysis to the bedside.

Numerous intraoperative fluorescent imaging systems have been tested in the past decade, offering a quick way to survey a target area for the planning and guidance of surgeries [16-20]. Of these, the most prominent initially included the Novadaq SPY and

FLARE imaging systems, two modalities which helped to popularize fluorescence medical imaging. More recently, the Hamamatsu PDE and Solaris imager, among others, have emerged as competitive alternatives. Additionally, in vivo microscopic imaging systems have been developed for magnified inspection of the surgical site [13-15]. These systems hold great potential for in situ tumor margin pathology; however, they rely upon 2D stand-alone display screens for relaying fluorescent data to the surgeon. Looking back and forth from the display to the surgical landscape can impede spatial localization of the

functional information while reducing hand-eye coordination for real-time guided procedures. Additionally, fluorescence systems alone do not integrate ultrasound or tomographical imagery with the wide-field fluorescence imaging, limiting the ability to detect tumors, blood vessels or other anatomical points of concern at depth within the tissue. Given the inability of fluorescence imaging systems to penetrate much more than several millimeters below the tissue surface, this limitation seriously impedes the surgeon’s ability to visualize sub-surface lesions and anatomical detail, necessitating the incorporation of additional imaging modalities.

Regarding the limitation of hand-eye coordination, many medical imaging systems use computer monitors to display results [3-14], causing user distraction and compromised hand-eye coordination due to the need to look back and forth between the scene and the screen. To overcome this, wearable imaging and display systems, or “goggles”, have been developed by several research groups, including Liu et al [22-24]. While these systems have been validated in preclinical and early clinical studies regarding surgical planning and tumor resections, they still only offer 2D imaging and display capabilities, neglecting depth perception. Also, these systems tend to be bulky, heavy, potentially complicated to use, and difficult to wear for long procedures. Additionally, no microscopy, ultrasound, or any type of 3D image integration was offered, limiting the diagnostic power solely to that of wide-field fluorescence imaging. Nonetheless, the potential simplicity and portability of these fluorescence imaging goggles show great applicability for clinical use in a variety of settings, not only in the surgical suite, but in any location where medical image guidance is needed but cannot normally go.


Wearable technology has great applicability to field-based or in-transit medicine, in which there is a need for medical imaging outside of a hospital setting [29]. The constraints of in-transit medicine make it difficult to bring large scanners such as magnetic resonance imaging (MRI) and X-ray computed tomography (CT) to the patient [1-10].

Ultrasound has become increasingly mobile and compatible with in-transit medicine [11-

15]. Although useful, ultrasound suffers from a small field-of-view, and ultrasound images are not easily correlated with anatomical structures, potentially making use difficult for a novice [11-15]. Recently, as more significant efforts have been directed towards optical modalities, such as fluorescence imaging, greater functionality has been conferred to medical guidance beyond the hospital [15-20]. Compact imaging systems that combine the advantages of various modalities are most beneficial for clinical use. Additionally, effective field deployment of an imaging system requires the device to be accurate, small, safe and easy to use.

1.2. Intraoperative Fluorescence Imaging for Surgical Interventions

1.2.1. Fluorescent Dyes

The origin of our modern fluorescent proteins comes from the bioluminescent jellyfish, Aequorea Victoria [30, 31]. These sea creatures were observed to bioluminesce in blue and then fluoresce in green, a result of the naturally occurring blue fluorescent protein aequorin and its green fluorescent counterpart. While aequorin was the first fluorescent protein to be discovered and isolated, the green fluorescent protein (GFP) was discovered shortly after [30, 31]. Initially observed to be a contaminant, GFP was later

found to interact with aequorin, being excited to fluoresce green by absorbing aequorin’s blue light. Today, many of the fluorescent proteins we use in research and medicine are derived from GFP. Modern fluorophores, however, are not taken from jellyfish, but rather are grown and harvested from bacteria [30].

FDA approved fluorescent dyes such as fluorescein, methylene blue (MB) and indocyanine green (ICG) are regularly used in human clinical studies and medical diagnostics [13, 24, 27, 28, 32-35]. Medical grade fluorophores are inserted into the target tissue in a few ways. Passive markers are often injected intravenously and collect preferentially in tumors through the distended and leaky capillaries that are common to tissue that has grown too fast [11, 15, 27]. Other fluorophores use binding moieties to actively target overexpressed receptors on cancer cells to effectively “tag” the cells [36,

37]. Upon cellular binding a change in molecular morphology may occur, activating or increasing fluorescent production beyond the levels seen in the unbound fluorophore [28].

Alternative markers such as protoporphyrin IX (PpIX) are often administered topically to skin cancer patients in the form of aminolevulinic acid (ALA) or its methyl ester cream (MAL) [38]. The cancer cells take up the ALA or MAL preferentially, resulting in either the increased production or accumulation of PpIX. In addition to being useful as a contrast agent, PpIX is also used in photodynamic therapy (PDT). When the PpIX in the cell is highly stimulated by its excitation wavelength light, it generates reactive oxygen species, effectively killing the cells from within [12, 39, 40].

The optical depth penetration limit has been a significant hindrance to the applicability of fluorescence imaging to a wider range of medical interventions. One way in which this difficulty has at least partially been addressed is through the use of near-

infrared (NIR) dyes such as ICG [33, 41-44]. Due to their lower reactivity, longer wavelength and lower energy, fluorescent emissions at NIR wavelengths can penetrate further through tissue before scattering or being absorbed than is possible with visible wavelengths. Additionally, the discovery of the NIR windows for tissue has propelled research in the area of fluorescence dye creation for the 750-900 nm and 1000-1700 nm wavelength bands [33, 41, 45, 46]. Living tissue has lower optical absorbance and scattering properties over these two wavelength bands, or NIR windows, through which light can pass even further without absorption or scatter.
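
To make the depth-penetration comparison concrete, the attenuation of light in tissue can be sketched with the Beer-Lambert relation and the effective attenuation coefficient from diffusion theory; the coefficient values used below are illustrative assumptions, not measurements from this work:

I(z) = I_0 \, e^{-\mu_{eff} z}, \qquad \mu_{eff} = \sqrt{3\,\mu_a(\mu_a + \mu_s')}

If, for example, \mu_{eff} were 2.5 mm^{-1} at a visible wavelength and 1.0 mm^{-1} within the first NIR window, the fraction of light surviving a 5 mm path would be e^{-12.5} ≈ 4×10^{-6} versus e^{-5} ≈ 7×10^{-3}, over a thousand times more signal in the NIR band.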

In addition, some labs are studying the use of nanoparticles and quantum dots as fluorophores [45, 47]. These fabricated inorganic biomarkers possess the advantages of variable size, material composition, tunable wavelength, a potential for increasing specificity and selectivity, as well as increased brightness and decreased susceptibility to photo bleaching [48-50]. Nanoparticles may be coated with various surfactants to make them biocompatible, and fixed with various targeting moieties without great concern over deactivation due to undesired morphological malformations of the fluorophore [50, 51].

However, their small size and high chemical reactivity may also prove to be a significant health hazard [47, 52, 53].

1.2.2. Fluorescence Imaging in Surgery

Many surgical interventions applied in medicine today can benefit from the use of fluorescent markers. During a tumor resection, fluorescent dyes are used to improve the contrast between cancerous and healthy tissues [11, 12, 15, 25-27, 54]. This will increase the likelihood that all of the cancer has been removed while decreasing the amount

of healthy tissue taken with it. Fluorescent imaging has been implemented intraoperatively with a variety of cancer types using various fluorophores, including PpIX for skin cancers and brain gliomas [40, 49, 55-59], ICG for a variety of tumors including breast and liver cancers [32, 60-65], as well as IRDye 800CW [34, 66, 67], MB [12, 44, 68] and fluorescein

[49, 69, 70], among others. Lymph node mapping and biopsy has also benefitted significantly from fluorescence applications, particularly with NIR dyes [32, 35, 71-77].

Lymph node mapping is commonly conducted for breast, head and neck cancer patients prior to surgical interventions to improve staging and decrease morbidity during dissection. While fluorescein and MB have both historically been used for this application,

NIR dyes are taking the forefront.

Additional fluorescence studies target blood vessels and bile ducts to aid in a number of surgeries [78-80]. Due to anatomical variations, unintended nicks and cuts in an artery or vein can occur during a procedure, potentially resulting in serious surgical complications [81]. Also, imaging of blood vessels can aid in their repair, as well as serve as a tool for pathological analysis. Fluorescence imaging of blood vessels can also be effective in perfusion assessment [82]. Knowledge of blood perfusion is useful for transplant or plastic surgery, where safe, non-invasive techniques are needed to reduce complications when attaching composite tissues [83-85]. Near infrared dyes are commonly used for vasculature imaging, as well, since the emissions must traverse the vessel walls and surrounding tissue for detection. Fluorescein, however, has also found common application for vein imaging and the imaging of retinal vasculature near the skin surface [86-90].


1.2.3. Instrumentation in Fluorescence Imaging

Fluorescent imaging can be conducted at any number of visible light and invisible wavelengths, depending on the application and user preference, so long as the desired fluorophore exists [13, 15, 91]. For improved detection sensitivity during medical applications, a near-infrared (NIR) contrast agent is often used due to its lower absorption and scattering rate while propagating through tissue. Also, the excitation spectrum for NIR fluorophores typically produces less autofluorescence in the background signal than visible light wavelengths [42]. The use of multiple wavelengths simultaneously can also help the surgeon differentiate multiple targets at once [13, 49, 68, 77, 91, 92]. For example, two fluorescence excitation wavelengths are fed into the system from either one or two light sources and projected onto the surgical field together. Subsequent dual wavelength fluorescent emissions are typically detected using independent sensors [92] or two sensors conjoined via a beam splitter or dichroic mirror [68, 72, 77]. A dichroic mirror reflects light above or below a target wavelength and passes the rest, effectively separating the individual emission wavelengths. This is an example of division of amplitude fluorescent detection. Another method which can be used is division of time, in which the emission filter used is switched over a single detector. In addition, either the excitation filter or the excitation source must be alternated synchronously with the emission filter. The complexity of this method, i.e. need for additional moving parts, may make it less desirable for an intraoperative setting.
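
To illustrate how two division-of-amplitude emission channels might be composited for display, the following minimal sketch (not the actual code used in this work; the function name and color assignments are arbitrary) pseudo-colors each separately filtered emission frame before blending it with the color reflectance image:

```python
import numpy as np

def merge_dual_channel(color_bgr, em_ch1, em_ch2):
    """Blend two separately filtered emission channels (e.g., split by a
    dichroic mirror onto two sensors) with the reflectance image.
    Channel 1 is shown green, channel 2 magenta. All inputs are uint8;
    the emission frames are single-channel and already registered."""
    out = color_bgr.astype(np.float32)
    out[..., 1] += em_ch1            # green for fluorophore 1
    out[..., 0] += em_ch2            # blue + red = magenta for fluorophore 2
    out[..., 2] += em_ch2
    return np.clip(out, 0, 255).astype(np.uint8)
```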

Oftentimes, for fluorescent imaging to be efficient, it must be conducted under strict lighting conditions, resulting in a poorly lit operating room [13, 15]. Studies have been undertaken to alleviate this problem by enabling fluorescent imaging under ambient light

conditions [20-21]. The simplest of the developed techniques involves the use of narrow band spectral filtration of the fluorescence detectors. Success of this method, however, is reliant on the fluorescence emissions coming from the target being significantly more intense than any reflected light at the emission wavelength, originating from the room lighting. One way to circumvent this issue is to filter the ambient room light source to reject emission wavelength bands. However, this can lead to poor fluorescence localization since the structural and anatomical information from the surgical scene will also not be visible through the filtered fluorescence sensors [93, 94]. Improved results were achieved by Zhu et al using a modified CCD with an electronic filter to eliminate the out of band light [95].
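
This requirement can be stated as a simple in-band signal-to-background condition (the notation below is introduced here for illustration and does not appear in the cited studies):

SBR = S_fl / (S_amb + S_af)

where S_fl is the detected fluorescence emission, S_amb is ambient room light leaking through the emission filter passband, and S_af is tissue autofluorescence. Narrow-band filtration alone is effective only when S_fl dominates S_amb; otherwise the ambient term must be suppressed optically (filtered room lights) or electronically (gated or pulsed acquisition, as discussed below).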

Multiple studies have been conducted to address the issue of fluorescence and background, or anatomical, imaging by co-registering reflectance mode images of the surgical scene with the fluorescence data. A technique by Mela et al captures both color and fluorescence mode imagery from the target scene simultaneously, and co-registers the data using a transformation equation based on camera-to-target working distances determined using stereoscopic disparity [94]. Studies by Zhang et al place fiducial markers within the imaging planes of both fluorescence and color imaging sensors, and conduct image co-registration based on the positions of like markers in each frame [96].

Another study by Zhu et al incorporates an IR distance meter to supply look-up table metrics to find optimal focal lengths using a beam splitter device [97].
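
A minimal sketch of the disparity-based approach described above is given below. This is not the implementation used in this dissertation; the calibration constants, the homography lookup table and the function names are hypothetical, and OpenCV block matching stands in for whatever disparity method a real system would use. The idea is simply to estimate the working distance from Z = f·B/d and then apply the registration transform calibrated for the nearest distance:

```python
import cv2
import numpy as np

# Hypothetical calibration constants (illustrative only)
FOCAL_PX = 700.0       # focal length in pixels
BASELINE_CM = 6.5      # stereo camera baseline

# Hypothetical lookup: working distance (cm) -> 3x3 homography mapping
# the fluorescence (NIR) frame into the color frame at that distance.
H_BY_DISTANCE = {20: np.eye(3), 30: np.eye(3), 40: np.eye(3)}

def working_distance_cm(left_gray, right_gray):
    """Estimate camera-to-target distance from stereo disparity (Z = f*B/d)."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    d = np.median(disparity[disparity > 0])   # robust central disparity
    return FOCAL_PX * BASELINE_CM / d

def register_fluorescence(nir_frame, color_frame, left_gray, right_gray):
    """Warp the fluorescence frame onto the color frame using the transform
    calibrated for the nearest working distance."""
    z = working_distance_cm(left_gray, right_gray)
    nearest = min(H_BY_DISTANCE, key=lambda k: abs(k - z))
    h, w = color_frame.shape[:2]
    warped = cv2.warpPerspective(nir_frame, H_BY_DISTANCE[nearest], (w, h))
    return warped, z
```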

Dichroic mirrors have been used to separate color reflected, or white light from the fluorescence emissions, allowing both anatomical and functional data to be simultaneously imaged in the same plane. With advances in optics and miniaturization technology, this

method has become popular for a variety of systems, including head-mounted [98-100], hand-held [101, 102] and stand-mounted systems [35, 67, 72]. The applicability of the technique stems from its reliance on hardware, rather than software, to conduct very accurate registration. Potential pitfalls include complexity and a tradeoff between image quality and size.

Alternative methods for improving fluorescence detection under ambient lighting conditions include the use of time-gated signal detection, or pulsed light imaging [15]. In one such method, the fluorescence is excited using a pulsatile excitation source and three out-of-phase images are mapped simultaneously [103]. The images are compared to pick out the AC waveforms from the constant DC background. The AC component, corresponding to the pulsed fluorescence, can then be false colored and overlaid back onto the DC background. In a second iteration of the study, camera captures are timed to the

AC pulsations of the 120 V room lights, capturing fluorescence emissions when the room lights are at a trough, or low point, in the AC signal [104]. Other methods tune camera capture to occur simultaneously with the peak excitation pulse state [92, 95, 105].
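
The frame-subtraction step common to these pulsed-light schemes can be sketched as follows; this is a simplified illustration rather than any published implementation, and it assumes synchronized excitation-on and excitation-off grayscale frames are already available (the threshold and overlay color are arbitrary):

```python
import cv2
import numpy as np

def pulsed_light_overlay(frame_on, frame_off, color_frame, thresh=15):
    """Isolate pulsed fluorescence by subtracting the excitation-off frame
    from the excitation-on frame (removing the constant DC background),
    then overlay the detected signal in false color (green)."""
    diff = cv2.subtract(frame_on, frame_off)              # uint8, saturating
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    overlay = color_frame.copy()
    overlay[mask > 0, 1] = 255                            # pseudocolor green
    return overlay, mask
```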

1.2.4. Fluorescence Imaging Systems

Fluorescence imaging systems come in a variety of formats [13, 15]. The earliest implemented and still most commonly used in surgical studies and interventions are the stand-mounted systems [72, 92, 106-109]. These modules contain all components of the system required for fluorescence imaging on a rolling cart, including a computer, monitor, cameras, lights, and typically an articulating arm to hold the illumination and imaging sensors over the surgical scene.


One of the first intraoperative fluorescence imaging systems to be put into broad commercial use was the Novadaq SPY, which is still considered a standard in intraoperative fluorescence imaging [106]. Increased research into fluorescence imaging was spawned by the release of the SPY as well as by the seminal publication by Troyan et al on the FLARE imaging system [72]. Developed in the Frangioni lab, the Fluorescence-

Assisted Resection and Exploration (FLARE) module provided an LED illumination and detection system mounted on an articulated arm, much like the SPY [72, 109]. The novelty of the system was that it could illuminate the surgical field while providing the appropriate fluorescent excitation, and then combine both the fluorescence emission and the white light reflectance imagery simultaneously. Many intraoperative surgical studies have since been performed utilizing these devices.

Various hand-held imaging systems have been developed in an effort to reduce the bulk and costs of stand-mounted systems [67, 110-115]. Additionally, hand-held systems are often used in contact with the anatomical target via a “nose-cone”, stand-off or other light shield placed on the end of the camera, forming an optimal lighting environment within the housing. Utilizing such an enclosure reduces or eliminates the need to account for variations in ambient room lights.

Wearable technology has also made an appearance in fluorescence imaging, through head-mounted sensors and augmented reality (AR) displays. Several variations on the theme have been constructed, incorporating a variety of optical configurations. Some of the earlier designs integrated large format imaging sensors and lenses onto bulky head-mounted frameworks [98, 99]. Alternate systems interfaced a commercial AR glass, such as the Google Glass or Microsoft HoloLens, with a stand-mounted or hand-held

fluorescence imaging sensor [96, 116, 117]. More recent developments have incorporated smaller format sensors and lenses into compact housings for improved wearability [97,

100, 118, 119].

Microscopy in intraoperative fluorescence imaging can enable cellular level inspection of pathological sites including surgical margins or microvessels. Surgical microscopes typically utilize either fiber optic image guides or arm-mounted microscopic assemblies consisting of eyepieces, focal lens and objective, with a camera mounted via beam splitter. Intraoperative surgical microscopes have been used in various clinical studies including ICG angiographies [81], glioma resections [49, 58] and tumor visualization and margin inspection [120-123].

Falling under both the categories of illumination and detection, fiber optics are a useful means of conducting intraoperative fluorescent imaging. Not only can fiber optics be used to illuminate a reclusive surface, but they are also particularly useful in the endoscopic fields of microscopy [57, 124, 125], laparoscopy [126-128],

[129, 130], arthroscopy and colonoscopy. The light guide can deliver both the excitation wavelength and a white light to the target simultaneously while utilizing dual camera detectors to image fluorescence emissions and anatomical background. Fluorescent emissions can improve tumor resection efficacy by increasing the tumor contrast, improving resolution into the sub-millimeter range.


1.3. Augmented Reality in Medical Imaging

How the image is visualized can be as important as how it is recorded. Many of the discussed fluorescence imaging systems display onto a flat screen such as a monitor, which requires the surgeon to look back and forth between the surgical field and the displayed fluorescent scene. This can make translating image features into real-space cumbersome and inaccurate. Alternative methods have been developed to alleviate these concerns. One such method is the head-mounted display (HMD) or AR display, which places either a miniature single or stereo display screen in front of one or both of the eyes.

Due to the close proximity between display and eye, the images must be focused for clear viewing. Typical focusing methods include the use of aspheric lenses or beam splitter cube assemblies, which collimate and reflect the illuminated imaging data towards the eye.

Systems with only one eyepiece, like the Google Glass, can display the fluorescence information to one eye, while the other remains unobstructed [96, 98, 116,

117]. In this way, the surgeon can see both the fluorescence and the color surgical field simultaneously. The downside is that viewing two different scenes simultaneously can be disorienting, making the fluorophore difficult to localize accurately. Additionally, this method negates stereo vision, limiting depth perception. Stereoscopic displays, like the

Microsoft HoloLens and Oculus Rift among others, can display fluorescence information to both eyes, creating a more immersive environment [94, 97, 100, 118, 119, 131]. A downside to this method is that the simulated stereo effect has been known to cause queasiness in some people. Additionally, whether through image co-registration or

broadband imaging, the addition of surgical background is necessary when using an immersive display, otherwise the user would not be able to localize the fluorescence.

Various techniques have been implemented to solve the localization issue. Optical see-through AR displays have been constructed with semi-transparent lenses, like the

Microsoft HoloLens or Meta. With these displays, the camera imagery is reflected off of a semi-transparent beam splitter or half-silvered mirror on the glasses lens without completely obstructing the real-world view beyond [97, 119, 131-133]. In this way, a simple hardware solution to the localization issue is provided; however, potential issues arise in regard to the accuracy of the registration between the projected fluorescence image and the actual physical location of the fluorescent object as seen by the user through the glasses. Additionally, optical see-through eyepieces typically do not have as high brightness or contrast as video see-through. Video see-through AR displays use miniature

LCD or LED screens to display imaging data to the wearer, rather like having two tiny monitors inside the HMD [94, 100]. Advantages to this method of display include controllable brightness and contrast, and direct localization of fluorescent data to the surgical landscape. A major disadvantage, as mentioned above, is the requirement of anatomical or surgical background, typically implemented through color image registration. A second disadvantage is the inability of the user to see any of the surgical scene, except through the AR display. It may be cumbersome for a surgeon to remove and then re-don a bulky headset during an operation. Therefore, more compact, loupe-style displays that can be easily moved out of the field of view are promising alternatives to the large, fully immersive commercial varieties, like the HTC Vive or Oculus Rift.


Due to the nature of wearable technology, everything is designed to be smaller and lighter weight than its desktop counterpart. A significant disadvantage of this trend in wearable fluorescence imaging systems is that the displays may be too small. Even sitting directly in front of the eyes, a small or low resolution display will provide a significantly less detailed view than a conventional stand-alone monitor may provide. Therefore, future endeavors must find a balance between size and efficacy.

Many AR displays provide stereoscopic view screens to the user; however, not all imaging systems use stereoscopic cameras [99, 100]. While it is useful to visualize the scene with both eyes, depth perception is limited if this vision is only 2-dimensional. In order to properly simulate binocular vision, the left and right display screens must present images from a left and right stereo pair of cameras, respectively [134]. The baseline distance between the two cameras, also known as the interaxial distance, as well as their relative line-of-sight with respect to each other and optical magnification will determine their stereoscopic effect [134, 135]. In practice, a smaller baseline is more appropriate for near-object imaging, while a larger camera baseline is better suited to imaging at a distance. The baseline separation between the HMD screens, on the other hand, does not need to match that of the cameras, but rather the interpupillary distance between the user’s eyes. It is therefore preferable to mount the display screens and cameras so that their baseline separation distances can be independently adjusted to suit the application at hand.
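
For reference, the depth resolved by such a stereo pair follows the standard relation Z = f·B/d for parallel cameras, where f is the focal length in pixels, B the interaxial baseline and d the disparity in pixels; the sketch below uses placeholder numbers that are not drawn from any particular system discussed here.

```cpp
// Standard parallel-stereo depth relation: Z = f * B / d.
// All numerical values below are illustrative placeholders.
#include <cstdio>

double depthFromDisparity(double focalPx, double baselineMm, double disparityPx)
{
    return focalPx * baselineMm / disparityPx;   // depth in the same unit as the baseline
}

int main()
{
    // e.g., a 1400-pixel focal length, 60 mm baseline and 280-pixel disparity -> 300 mm
    std::printf("Z = %.1f mm\n", depthFromDisparity(1400.0, 60.0, 280.0));
    return 0;
}
```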

Furthermore, when the cameras are not mounted directly onto the HMD [96, 117, 131], preferably in-line with the displays, a mismatch is created between the camera and user fields of view, requiring surgeons to manually register the environment to their own

perspectives. Mounting the imaging sensors directly on the HMD enables a more natural and intuitive line-of-sight method of imaging [94].

Alternative display methods have been developed that avoid using any electronic screens by projecting the data directly onto the surgical plane [136-138]. The stand-mounted fluorescence imaging system detects the fluorescent emissions from the same point-of-view and field-of-view as the projector to minimize translation error. The detected fluorescence is typically false colored before being projected onto the physical surgical landscape.

1.4. Multimodal Imaging

Tissue thickness is a limiting factor for fluorescence imaging, as the intensity of detectable fluorescent emissions will diminish with tissue depth [33, 41, 44]. Therefore, fluorescence imaging techniques have been combined with various other imaging modalities to create useful multi-modal imaging regimes. Included among these other modalities are microscopy, ultrasound, MRI, CT, spectroscopy and nuclear imaging.

1.4.1. Ultrasound

Ultrasound has been utilized in conjunction with fluorescence imaging as a means of localizing the tumor at depths within the surrounding tissue, where the fluorescence is not visible [63, 113, 139]. In this way, the surgeon can assess tumor depth and volume before needing to cut into the tissue to visualize the fluorescence data. Additionally, the ultrasound images can be employed to locate blood vessels which the surgeon must avoid.


Multiple clinical studies have demonstrated the enhanced efficacy of combining the strengths of each modality for surgical planning, tumor resections and margin inspection

[139-141].

The ultrasound data is typically displayed separately from the fluorescence [63, 96,

113, 141], however co-registration techniques have been implemented [93, 94]. One promising method uses fluorescence and color imaging to identify fiducial markers affixed to the ultrasound transducer when it is within the imaging frame. Once detected, the live ultrasound imagery is registered directly to the markers, allowing the surgeon to see the combined view on the display.
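
One plausible way to implement this kind of marker-based overlay is to map the corners of the ultrasound frame onto the detected marker positions with a perspective transform; the sketch below is illustrative only (the marker ordering, blending weights and assumption of four coplanar markers are placeholders, not the published pipeline).

```cpp
// Sketch: warping a live ultrasound frame onto four fiducial markers detected on the
// transducer in the camera image. Assumes both images are 3-channel BGR and that
// markerCorners holds exactly four points ordered like the ultrasound corners.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat overlayUltrasound(const cv::Mat& cameraFrame, const cv::Mat& usFrame,
                          const std::vector<cv::Point2f>& markerCorners)
{
    // Corners of the ultrasound image: top-left, top-right, bottom-right, bottom-left.
    std::vector<cv::Point2f> usCorners = {
        {0.f, 0.f},
        {static_cast<float>(usFrame.cols), 0.f},
        {static_cast<float>(usFrame.cols), static_cast<float>(usFrame.rows)},
        {0.f, static_cast<float>(usFrame.rows)}};

    cv::Mat H = cv::getPerspectiveTransform(usCorners, markerCorners);

    cv::Mat warped;
    cv::warpPerspective(usFrame, warped, H, cameraFrame.size());

    // Simple alpha blend of the warped ultrasound over the camera view.
    cv::Mat blended;
    cv::addWeighted(cameraFrame, 1.0, warped, 0.6, 0.0, blended);
    return blended;
}
```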

1.4.2. Radiology

Imaging contrast agents that are both fluorescent and radioactive have been used to identify cancer in animal studies and clinical trials [76, 101, 120, 142-144]. Radioactive emissions, detected using Single-Photon Emission Computed Tomography (SPECT) or

Positron Emission Tomography (PET), and the fluorescent emissions are simultaneously monitored during the surgery. The fluorescence emissions were found to improve tumor margin identification in regions with high radioactive background. Complementarily, the radioactive emissions aided in localizing tumors at greater tissue depths [145].

1.4.3. MRI/CT

Fluorescence imaging techniques have been combined with tomographical imaging modalities such as MRI and CT in an effort to improve tumor detection and contrast with the surrounding tissue, particularly during surgical planning [12, 146]. Unlike ultrasound,

which can more readily return real-time depth information during a procedure, MRI can provide detailed high-resolution anatomical charts to act as a reference for estimating tumor size and location [101]. Combining these methods has been employed in vivo, and the resultant image was found to be an effective aid for identifying and localizing tumors in the body. A common application for fluorescence and MRI co-imaging has been found in cases of glioma, where ALA and its downstream fluorophore PpIX are used [147,

148].

Surgical CT has increased in usage since the development of the intraoperative C-arm module as well as portable CT [149, 150]. When used in conjunction with fluorescence imaging, CT is also frequently combined with SPECT or PET to better localize tumors at various tissue depths [120, 142, 145, 151]. Additionally, CT has found use intraoperatively during lymph node mapping and biopsies [152, 153].

1.5. Scope and Aims

The overarching goal of this study is the design, build and testing of a prototype fluorescence imaging system (Aim 1). The system will be multimodal, incorporating stereoscopic, real-time, line-of-sight fluorescence and reflectance imaging with integrated microscopy and ultrasound imaging as well as MRI/CT display modalities (Aim 2).

Various tests will be conducted to evaluate system performance in comparison to traditional diagnostic and surgical practices (Aim 3).


Optical testing will evaluate the spatial resolution in both fluorescence and reflectance modes. Additionally, fluorescence detection sensitivity for 3 clinically relevant dyes (ICG, Porphyrin and Fluorescein) will be tested. Display capabilities of the system will be evaluated for observable spatial resolution and fluorescence sensitivity.

Fluorescence, optical reflectance, ultrasound and MRI/CT data will be co-registered and the results will be tested for registration accuracy and stability. Additionally, the system will be tested for real-time imaging functionality (frame rate and latency) for all registration modes (i.e. fluorescence with color registration, fluorescence with color and ultrasound, etc.). Lastly, system performance will be evaluated in comparison with a conventional stand-mounted, 2D fluorescence imaging system when guiding simulated fluorescent tissue resections. The system will also be evaluated for use in forensic imaging and vein identification and diagnostics.

Metrics have been defined for evaluating system performance, and will be evaluated at various points throughout the manuscript. Specifications defined in Chapter

2, including spatial resolution and fluorescence sensitivity, serve as preliminary findings from an initial system prototype. These results, as well as results taken from the literature, served as pilot data for defining the parameters used in the testing of the updated systems reported in Chapters 3, 4 and 5.

In partial evaluation of Aim 1, the spatial resolution in the xy-plane of the updated system (Chapters 3-5) was expected to meet or exceed 0.5 mm for either reflectance or fluorescence mode imaging at working distances between 20 and 60 cm. For this study, spatial resolution refers to the minimum dimension of the smallest object that can be readily

differentiated from adjacent objects in the returned imaging frame as being an independent structure with contrast of at least 20% in relation to the background. Spatial resolution in the z-direction (i.e. depth perception) was expected to exceed 5 mm. Depth perception is defined as the minimum amount of distance between two objects parallel to the camera plane that is required to indicate that those two objects reside at a different distance from the imaging plane.

Fluorescence sensitivity studies were conducted on 3 clinically relevant dyes (Aim

1). Chapter 3 focuses on the characterization of the system’s ability to detect Indocyanine

Green (ICG) both under optimal dark room conditions, as well as simulated surgical room conditions using tissue phantoms as fluorescent targets. Protoporphyrin IX (PpIX) detection is tested in Chapter 4 using a new temporal gating regime. The results were compared to conventional steady-state DC mode imaging. Lastly, a compact system design will be tested in Chapter 6 for the detection of fluorescein in field-like settings (i.e. out of the operating room). A minimal detection criterion of a Signal-to-Background Ratio (SBR) greater than 2 was set for a dye concentration of no more than 500 nM at the surface of the imaging plane for each study. In a dark room setting, the minimum detectable fluorescence concentrations were expected to be at least 50 nM in solution with an SBR of at least 1.5.

Tests conducted on tissue phantoms for the detection of ICG indicated positive detection at depths of at least 1 cm, and for dye concentrations of at least 500 nM with SBR of 1.5.

Fluorescence mode to color reflectance mode image registration was initially implemented on the prototype system in Chapter 2. The updated version of this regime is presented in Chapter 5, and was expected to provide more accurate registration results


(Aim 1). Error was determined by imaging distinct geometric targets which were visible on both the color and fluorescence imaging sensors. When the detected images were co-registered, the difference between each target’s location in the color versus the fluorescence image was evaluated. Translational error was expected at no greater than 1 mm in any direction on the xy-plane, rotational error limits were set at less than 5° and any scale differences should not have exceeded 5%.
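
One straightforward way to obtain these three error components is to fit a similarity transform (translation, rotation, uniform scale) to the matched target centers and decompose it; the helper below is a sketch using OpenCV, not the evaluation code actually used in this work.

```cpp
// Sketch: decomposing the misalignment between matched target centers found in the
// fluorescence and color images into translation, rotation and scale.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <cstdio>
#include <vector>

void reportRegistrationError(const std::vector<cv::Point2f>& fluoPts,
                             const std::vector<cv::Point2f>& colorPts)
{
    // Fit a 4-DOF similarity transform mapping fluorescence points onto color points.
    cv::Mat M = cv::estimateAffinePartial2D(fluoPts, colorPts);
    if (M.empty()) return;

    double tx = M.at<double>(0, 2);
    double ty = M.at<double>(1, 2);
    double scale = std::hypot(M.at<double>(0, 0), M.at<double>(1, 0));
    double rotDeg = std::atan2(M.at<double>(1, 0), M.at<double>(0, 0)) * 180.0 / CV_PI;

    std::printf("translation (%.2f, %.2f) px, rotation %.2f deg, scale %.3f\n",
                tx, ty, rotDeg, scale);
}
```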

Three different near-eye (AR) displays were tested for their ability to accurately translate the resolution of imaged scenes and detected fluorescent emissions to the user

(Aim 1). In Chapter 5, the results of multi-user evaluations of the headsets were presented.

Evaluations occurred while users observed the tested ICG fluorescence sensitivity targets as well as the resolution targets on each display. Each user indicated the minimum observable fluorescence and resolution pattern he or she could see, and combined statistics were used to determine display limits. The optimal display should have achieved minimal fluorescence detection at concentrations of at least 250 nM of ICG in tissue phantom, and a displayed minimum object resolution of no more than 0.5 mm.

Following characterization procedures in Chapter 5, the system was evaluated in its ability to aid in image-guided fluorescent tissue resections against a traditional stand-mounted fluorescence system with 2D stand-alone display (Aim 3). Simulated surgical procedures were conducted on tissue phantoms. Comparative metrics included time of surgical completion and ratio of fluorescent to non-fluorescent tissue removed. The goggle system was expected to achieve at least equivalent performance in terms of tissue resection, and a decrease in surgical time.


Multimodal imaging capabilities were integrated into the system, as addressed in

Aim 2. Initially, both microscopy and ultrasound imaging were incorporated in the prototype goggle presented in Chapter 2. A new hand-held fluorescence microscopy module was integrated and tested for spatial resolution and detection sensitivity in Chapter

5. The spatial resolution of the module in the imaging plane should be improved over the initial prototype, achieving a minimum dimension of at least 0.1 mm, and the minimum detectable fluorescence concentration was at least 300 nM of ICG at an SBR of at least 1.5.

A portable ultrasound device was integrated into the system using two display modes for fluorescence and ultrasound image co-registration. The ultrasound images were incorporated using 2 viewing modes: picture-in-picture (Chapter 2) and registered to the transducer (Chapter 5). Also in Chapter 5, the registration accuracy of the ultrasound images to fiducial markers placed on the transducer was measured (Aim 2). Additionally, target registration accuracy was assessed using tissue phantoms with implanted fluorescent and ultrasound sensitive geometric objects. The difference in the locations of the detected fluorescence and co-registered ultrasound images in the combined scenes was measured.

The system was expected to achieve the following maximum allowable errors: 2 mm spatial translation, 5° rotation and 5% scaling error for both tests.

In addition to ultrasound, the system was tested for the registration of pre-operative

MRI/CT data to the imaging plane via fiducial markers (n >= 3). Registration accuracy was assessed in Chapter 5 via the co-registration of geometrical virtual 3D objects with real-world objects of the same shape as seen by the goggle cameras (Aim 2). Virtual fiducials placed on the object in 3D space were compared with the imaged optical fiducial

markers during registration to determine marker alignment error. Additionally, the location of the registered 3D virtual object in the imaging plane was compared with the imaged location of its real counterpart to determine target registration error. The calculated errors should have the following maximum values: 2 mm spatial translation, 5° rotation and 5% scaling error.
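
Point-based rigid registration of n >= 3 fiducial pairs is commonly solved in closed form (the Kabsch/Horn solution); the sketch below illustrates that computation generically with OpenCV matrices and is not the registration code used in the dissertation.

```cpp
// Generic sketch of closed-form rigid registration between corresponding 3D fiducials:
// compute centroids, build the cross-covariance matrix, and recover rotation R and
// translation t from its SVD (with a reflection guard).
#include <opencv2/opencv.hpp>
#include <vector>

void rigidRegistration3D(const std::vector<cv::Point3d>& src,
                         const std::vector<cv::Point3d>& dst,
                         cv::Mat& R, cv::Mat& t)
{
    cv::Point3d srcMean(0, 0, 0), dstMean(0, 0, 0);
    for (size_t i = 0; i < src.size(); ++i) { srcMean += src[i]; dstMean += dst[i]; }
    srcMean *= 1.0 / src.size();
    dstMean *= 1.0 / dst.size();

    // Cross-covariance of the centered point sets.
    cv::Mat H = cv::Mat::zeros(3, 3, CV_64F);
    for (size_t i = 0; i < src.size(); ++i) {
        cv::Mat p = (cv::Mat_<double>(3, 1) << src[i].x - srcMean.x,
                     src[i].y - srcMean.y, src[i].z - srcMean.z);
        cv::Mat q = (cv::Mat_<double>(3, 1) << dst[i].x - dstMean.x,
                     dst[i].y - dstMean.y, dst[i].z - dstMean.z);
        H += q * p.t();
    }

    cv::SVD svd(H);
    R = svd.u * svd.vt;
    if (cv::determinant(R) < 0) {                      // guard against a reflection
        cv::Mat D = cv::Mat::eye(3, 3, CV_64F);
        D.at<double>(2, 2) = -1.0;
        R = svd.u * D * svd.vt;
    }
    cv::Mat m = (cv::Mat_<double>(3, 1) << dstMean.x, dstMean.y, dstMean.z);
    cv::Mat s = (cv::Mat_<double>(3, 1) << srcMean.x, srcMean.y, srcMean.z);
    t = m - R * s;
}
```

Applying (R, t) to the virtual fiducials and measuring the residual distances to their imaged counterparts then yields the marker alignment error described above.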

The system was translated for use in forensic imaging, in particular the detection of trace blood stains at crime scenes. During this study, presented in Chapter 6, Aim 1 was evaluated with respect to the detection of fluorescein dye. Additionally, Aim 3 was partially evaluated through the incorporation of a third, high magnification microscopy module. The system should detect low concentrations of blood in partially cleaned stains on multiple materials, comparable to the published literature.

A NIR vein imaging module was developed and evaluated in Chapter 7. The system co-registered color anatomical images to NIR vein images with maximum allowable errors of 1 mm spatial translation, 5° rotation and 5% scaling error (within the xy-plane only). The system was expected to increase vein detection in the hand by a factor of at least 1.5 versus visual inspection. The system should also have been capable of detecting simulated veins with a diameter of at least 0.5 mm at depths of at least 2 mm in a tissue phantom (Aim 1).

Lastly, during all modes of operation the imaging system should operate at a video rate of no less than 15 fps, with an operational delay, or latency, of no greater than 0.1 s between image detection and subsequent co-registration of color, ultrasound or 3D imagery

(Aim 1). The operation frame rates and latency values found at each stage of system development were reported.


CHAPTER II

STEREOSCOPIC IMAGING GOGGLES FOR MULTIMODAL INTRAOPERATIVE

IMAGE GUIDANCE

In large part, Chapter 2 is a revised version of the published manuscript: “ Mela

CA, Patterson C, Thompson WK, Papay F and Liu Y. (2015) Stereoscopic Integrated

Imaging Goggles for Multimodal Intraoperative Image Guidance, PLOS ONE, doi:

10.1371/journal.pone.0141956”. Chapter 2 addresses parts of each Aim defined in the introduction, on a preliminary basis. The results provide pilot data and guidelines for the following chapters describing improvements made on the system to meet the specific metrics defined for the evaluation of each aim.

2.1. Introduction

Surgeons rely on imaging technologies to guide surgical procedures and evaluate the results. Available imaging modalities include Magnetic Resonance Imaging (MRI), X-

Ray Computed Tomography (CT) and Ultrasound (US) among others [1, 126, 154-158].

Due to their complexity, large size, cost or potential risk associated with long-term use, these technologies can be difficult to implement in the operating room to guide surgery.


Also, it can be difficult to correlate the surgical landscape with the pre-operative images during a surgery [126, 157, 158].

When excising a cancerous lesion, the surgeon needs to accurately distinguish between tumor and healthy tissue. Due to the difficulties in applying imaging modalities such as MRI or CT intraoperatively, surgeries are often primarily guided by sight and palpation [36].

Intraoperative ultrasound provides the surgeon with anatomical information and soft tissue contrast. It can, however, still be challenging to accurately correlate the ultrasound imagery with the surgical landscape. When cancerous tissue is not distinguished from healthy tissues, a small fraction of tumors will remain inside the body post operation.

This will lead to cancer recurrence and follow-up surgeries. If a positive margin is found, the probability of a local cancer recurrence is high; for example in laryngeal cancer the odds of recurrence go up from 32% to 80% when a positive margin is found [159].

Therefore, accurate margin control is needed.

Traditionally, pathology is the gold standard for margin status determination [36,

159-161]. Pathological analysis requires sectioning of all surgical margins, followed by staining and microscopic investigation, which leads to extensive operating room time. For better surgical margin control, intraoperative optical imaging has emerged as a promising solution [36, 159-162]. Two pertinent approaches have been taken, including large field of view (FOV) imaging and hand-held in vivo microscopy [24, 57, 72, 121, 163-167].

Various large FOV intraoperative fluorescent imaging systems have been developed in the past decade [72, 164-167]. These systems rely upon 2D flat screen

displays for relaying fluorescent data to the physician. Fluorescence imaging systems have been developed, offering a quick way to survey the surgical area and guide surgeries [72,

166, 167]. However, such systems do not offer the capability of integrated in vivo microscopic imaging. On the other hand, in vivo microscopic imaging systems have been developed [57, 121, 163]. These systems hold great potential for in situ pathological analysis. However, the application is limited due to the small FOV and the difficulty involved in surveying all the surgical area in a timely fashion.

More recently, wearable imaging and display systems in a “goggle” form have been developed by Liu et al [168, 169]. These systems have been successfully validated in preclinical and clinical studies [65, 168, 169]. Despite encouraging results, these systems only offer 2D imaging and display capabilities, without depth perception. Also, the previous systems are bulky and difficult to wear for longer times. There was also no in vivo microscopy or ultrasound capability offered, limiting the diagnostic power to that of wide-field fluorescence imaging.

To overcome these limitations, we report the initial development of a platform technology entitled the Integrated Imaging Goggle, for the first time in the literature. In this chapter, the following novel objectives are addressed:

• Our system leverages the principles of stereoscopic vision to present both depth perception and lateral spatial information to the surgeon.

• The system can image, overlay and present both color reflectance and near infrared fluorescence information to the user in real time.


• Both large FOV fluorescence imaging and handheld microscopic imaging are offered simultaneously. In this way, the surgeon can survey a large area and perform examinations under the large FOV stereoscopic fluorescence imaging, while investigating suspicious areas in detail with the in vivo microscopic probe.

• The goggle was integrated with non-optical imaging modalities including ultrasound, providing multimodal image guidance to the surgeon and mitigating the optical imaging limitation of penetration depth.

• Wireless goggle-to-goggle stereoscopic view sharing is enabled, where the remote collaborator can visualize the same data that the local goggle wearer sees, with stereovision and depth perception. This is important for remote guidance and telemedicine.

2.2. Materials and Methods

2.2.1. Imaging System Instrumentation

Two prototype imaging goggle systems have been developed. The first, a 2-sensor setup, Figure 2.1A and 2.1B, was similar to a previously reported system [14, 93]. Imaging module design used 2 complementary metal–oxide–semiconductor (CMOS) sensors housed on a printed circuit board (PCB), and mounted with twin glass M12 lenses.

Likewise, the 4-sensor setup, Figure 2.1C and 2.1E, also used CMOS sensors. Fluorescence imaging was facilitated by mounting near-infrared (NIR) bandpass filters, centered at 832 nm ±37 nm (84-107 Edmund Optics, NJ, USA), over 2 of the lenses on each goggle setup as emission filters. The 4-sensor setup was created for combining color reflectance with

fluorescence imaging. Therefore, the 832 nm bandpass filters for fluorescence imaging were used on only 2 of the sensor lenses, while the other 2 lenses for color reflectance imaging were fit with NIR cutoff filters. NIR cutoff filters helped preserve color accuracy by preventing the NIR light from combining with the visible light detected by the RGB pixels in the array.


Figure 2.1 Prototype imaging goggle system. (A) Schematic of 2-sensor setup. (B) Photo of 2-sensor

setup for stereoscopic fluorescence imaging. (C) Photo of 4-sensor setup for simultaneous stereoscopic

color reflectance imaging and fluorescence imaging. Top 2 sensors were for reflectance imaging and

bottom 2 sensors were for fluorescence imaging. The horizontal and vertical inter-sensor distances are

labeled in blue and red, respectively. (D) Imaging system with its handheld in vivo microscopy probe. (E)


Schematic of 4-sensor setup. (F) Overall system diagram depicting all components and connections during a typical operation with the hand-held microscopy module connected. Alternatively, a portable ultrasound

scanner with transducer could be connected to the computational unit.

The prototype systems communicated with a laptop computer via USB2.0 connections. The light source utilized a 775 nm shortpass filtered (86-112 Edmund Optics,

NJ, USA) halogen lamp with a glass diffuser which allowed for both white light illumination and infrared excitation of the target, while removing any NIR components near the fluorescent emission bandwidth centered at 830 nm. A rotary potentiometer was added to the light source circuit so that illumination intensity could be varied. This way, both well-rendered white light surgical illumination (without NIR component beyond 775 nm) and fluorescence excitation (centered at 780 nm) are achieved concurrently. We further designed and implemented a feature for the light source that enables adjustment of background NIR components (> 800 nm). In brief, a small adjustable amount of unfiltered diffuse light from the halogen lamp was allowed to pass from the light box and illuminate the target. This was accomplished by placing a custom lid onto the light source enclosure, mounted on a sliding mechanical stage that could be opened to various levels for allowing in varying amounts of unfiltered light. In addition, the stage could be shut completely to eliminate the unfiltered optical component. The background components were helpful in providing references for anatomical structures of the surgical target for the 2-sensor setup.

Furthermore, the system was integrated with a hand-held microscopic imaging module (Supereyes, Shenzhen, CHN). The microscope was fitted with an 832 nm ±37 nm bandpass filter (84-091 Edmund Optics, NJ, USA) for in vivo fluorescence microscopic imaging, Figure 2.1D. The system was also integrated with a portable ultrasound scanner


(Ultrasonix, Shenzhen, CHN) with a 3.5 MHz B-mode transducer to offer ultrasound imaging capacity. An overall system diagram is given, Figure 2.1F.

2.2.2. Image Acquisition, Processing, Registration and Display

We have developed custom algorithms using C++ coding language for real-time image acquisition, processing, registration and display. Cameras were controlled and managed using the OpenCV libraries [170], which allowed us to synchronize frame capture from individual sensors to the computer via USB connection. The 2-sensor setup used the raw gray-scale fluorescence imaging data, arrayed in a side-by-side (SbS) orientation. The

SbS data was then exported to the display for viewing.
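
For context, the kind of OpenCV capture-and-display loop underlying this mode can be sketched as below; the device indices, the desktop preview window and the lack of filter-specific processing are placeholders, since the actual system renders the composite to the head-mounted display.

```cpp
// Minimal sketch of a two-camera capture loop that composes a side-by-side (SbS)
// frame. Device indices and the preview window are placeholders.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture leftCam(0), rightCam(1);
    if (!leftCam.isOpened() || !rightCam.isOpened())
        return -1;

    cv::Mat left, right, sbs;
    while (true) {
        // Grab both sensors first, then retrieve, to keep the stereo pair close in time.
        leftCam.grab();
        rightCam.grab();
        leftCam.retrieve(left);
        rightCam.retrieve(right);

        cv::hconcat(left, right, sbs);        // left|right composite for the stereo display
        cv::imshow("SbS preview", sbs);
        if (cv::waitKey(1) == 27)             // Esc to quit
            break;
    }
    return 0;
}
```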

Operating the 4-sensor setup, the color reflectance and NIR fluorescence images were merged to create composite images prior to being sent to the wearable stereoscopic display for visualization. Inter-camera image registration was required to accurately overlay the functional fluorescence information onto the reflectance mode anatomical information. Due to the inter-sensor height disparity between the filtered and unfiltered imaging sensors, Figure 2.1C, the NIR fluorescence frames had to be shifted to align with the color reflectance frames. This was accomplished by first measuring the vertical inter- sensor distance Lv (25 mm) between the center of each sensor, as well as the horizontal inter-sensor distance Lh (60 mm), Figure 2.1C. Next, a NIR LED (850 nm peak) was imaged at various working distances (20-40 cm) by the goggle. The center location of the

LED in the goggle camera images was determined, and the differences in the x and y coordinate LED locations between the left and right camera images were calculated in pixels,


Dv and Dh. From this information, a transformation metric, C, was determined from the equation:

(Dv + Cv) / (Dh + Ch) = Lh / Lv

The equation was solved for Cv and Ch, the calibration correction metrics in the vertical and horizontal directions, respectively. These values corresponded to the amount of translation each fluorescent image required to align with its corresponding color image.

The calibration correction metrics were manually determined to form a lookup table of correction values for working distances taken every centimeter between 20 and 40 cm from the end of the goggle lenses to the imaging target. During regular use, the working distance was determined by first thresholding and then overlaying the left and right fluorescence images. The center of each fluorescent node found in the images was located as the one-half mean Euclidean distance between the edges of the node. Nearby fluorescent centers were grouped using a basic k-means clustering algorithm, and their center locations were compared. The average difference between fluorescent centers was calculated in x and y coordinates, and these coordinates served as inputs into the pre-determined correction metric look-up table. As long as the distance to the fluorescent target remained within the range of the calibrated working distance, the co-registration error was expected to be less than 2 mm. Error was determined by comparing the locations of the calibration targets as seen in each of the overlain images, and the average difference in edge locations was calculated.
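
A simplified sketch of this look-up procedure follows. For brevity it pairs each left-image blob with its nearest right-image blob rather than using the k-means grouping described above, and the threshold value is an arbitrary placeholder; the averaged (x, y) offset would then index the pre-measured table of correction metrics.

```cpp
// Sketch: estimate the mean pixel offset between fluorescent blob centers detected in
// the left and right NIR frames. The fixed threshold and nearest-neighbour pairing
// are simplifications for illustration.
#include <opencv2/opencv.hpp>
#include <limits>
#include <vector>

static std::vector<cv::Point2f> blobCenters(const cv::Mat& nirGray)
{
    cv::Mat bw, labels, stats, centroids;
    cv::threshold(nirGray, bw, 60, 255, cv::THRESH_BINARY);
    int n = cv::connectedComponentsWithStats(bw, labels, stats, centroids);

    std::vector<cv::Point2f> centers;
    for (int i = 1; i < n; ++i)                       // label 0 is the background
        centers.emplace_back(static_cast<float>(centroids.at<double>(i, 0)),
                             static_cast<float>(centroids.at<double>(i, 1)));
    return centers;
}

cv::Point2f meanDisparity(const cv::Mat& leftNIR, const cv::Mat& rightNIR)
{
    std::vector<cv::Point2f> l = blobCenters(leftNIR);
    std::vector<cv::Point2f> r = blobCenters(rightNIR);

    cv::Point2f sum(0.f, 0.f);
    int count = 0;
    for (const cv::Point2f& p : l) {
        float bestDist2 = std::numeric_limits<float>::max();
        cv::Point2f best;
        for (const cv::Point2f& q : r) {              // pair with the nearest right-image blob
            float dx = p.x - q.x, dy = p.y - q.y;
            float d2 = dx * dx + dy * dy;
            if (d2 < bestDist2) { bestDist2 = d2; best = q; }
        }
        if (!r.empty()) { sum += p - best; ++count; }
    }
    return count > 0 ? cv::Point2f(sum.x / count, sum.y / count) : cv::Point2f(0.f, 0.f);
}
```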


A picture-in-picture (PiP) display mode was implemented when utilizing the hand-held microscope probe. In brief, an additional image frame was added to the top left corner of each SbS image frame. This frame displayed the images from the hand-held microscope, for close-up inspection of the surgical site. When the full frames were viewed stereoscopically through the wearable display screens, the microscope images aligned over each other and appeared as a single 2D image, although the remainder of the frame still appeared as 3D, due to the stereoscopic effect. A similar process was also implemented for displaying the ultrasound images. When operating in ultrasound mode, the PiP display provided the user with subdermal structural information in real-time. In this way we have begun to address part of the limitation of optical imaging for detecting deep tissue structures.
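
A minimal sketch of composing such an inset into both halves of an SbS frame is shown below; the inset size and corner position are arbitrary, and both images are assumed to be 3-channel BGR.

```cpp
// Sketch: place a microscope (or ultrasound) frame as a picture-in-picture inset in
// the top-left corner of both halves of a side-by-side stereo frame.
#include <opencv2/opencv.hpp>

void insertPiP(cv::Mat& sbsFrame, const cv::Mat& inset)
{
    int halfWidth = sbsFrame.cols / 2;

    cv::Mat thumb;
    cv::resize(inset, thumb, cv::Size(halfWidth / 4, sbsFrame.rows / 4));

    // Copy the inset into the same corner of the left and right half-frames so that it
    // fuses into a single flat window when viewed stereoscopically.
    thumb.copyTo(sbsFrame(cv::Rect(0, 0, thumb.cols, thumb.rows)));
    thumb.copyTo(sbsFrame(cv::Rect(halfWidth, 0, thumb.cols, thumb.rows)));
}
```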

2.2.3. System Characterization

The Modulation Transfer Function (MTF) for our systems was determined using the slanted edge technique for working distances of 20, 30 and 40 cm from lens to imaging target [87, 171, 172]. A target consisting of only a printed black and white edge was imaged. The slope of the edge was set at 5° off of the vertical y-axis to give an optimal

MTF, as demonstrated by Dumas et al [87]. A 128 by 64 pixel rectangle, consisting of equal parts black and white pixels, was selected from the edge line image for MTF calculations.
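
As a rough illustration of that computation (heavily simplified: the sub-pixel oversampling along the slanted edge, windowing and Nyquist truncation used in practice are omitted, and the edge is assumed to be roughly vertical within the ROI), the edge profile can be averaged into an edge spread function, differentiated into a line spread function, and Fourier transformed:

```cpp
// Heavily simplified slanted-edge MTF sketch: average the ROI rows into an edge spread
// function (ESF), differentiate it into a line spread function (LSF), and report the
// DFT magnitude normalized to its zero-frequency value.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

std::vector<double> simpleMTF(const cv::Mat& edgeRoiGray)
{
    cv::Mat esf;
    cv::reduce(edgeRoiGray, esf, 0, cv::REDUCE_AVG, CV_64F);        // 1 x N row average

    cv::Mat lsf(1, esf.cols - 1, CV_64F);
    for (int i = 0; i + 1 < esf.cols; ++i)                          // finite-difference derivative
        lsf.at<double>(0, i) = esf.at<double>(0, i + 1) - esf.at<double>(0, i);

    cv::Mat spectrum;
    cv::dft(lsf, spectrum, cv::DFT_COMPLEX_OUTPUT);

    std::vector<double> mtf;
    double dcMag = 1.0;
    for (int i = 0; i < spectrum.cols; ++i) {
        cv::Vec2d c = spectrum.at<cv::Vec2d>(0, i);
        double mag = std::hypot(c[0], c[1]);
        if (i == 0) dcMag = (mag > 0.0) ? mag : 1.0;
        mtf.push_back(mag / dcMag);                                  // modulation per frequency bin
    }
    return mtf;
}
```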

The FOV of the system was determined by imaging a precision graded ruler, oriented horizontally then vertically, in relation to the imaging frame, at working distances of 20, 30 and 40 cm from lens to target.


Fluorescence imaging studies were conducted using various concentrations of the

Indocyanine Green (ICG) (Cardiogreen Sigma Aldrich, MO, USA), dissolved in a

Dimethyl Sulfoxide (DMSO) solvent. The ICG was measured to have a peak excitation wavelength at 780 nm in DMSO using a benchtop spectrometer (Thermo Fisher Scientific,

MA, USA) and peak fluorescence emission at 830 nm. Dilutions of the ICG were made for goggle imaging sensor characterization studies and testing, using concentrations of 45, 60,

90, 120, 150 and 180 nM. Three different 2 ml centrifuge tubes filled with the fluorescent solution were imaged for each dye concentration. Empty tubes of the same variety were also imaged to determine whether they contributed any autofluorescence, and tubes containing only DMSO were imaged to set the background level. The fluorescence intensity of each tube was measured in 8-bit gray-scale intensity (measured from 0–255), minus the background intensity. The minimum detectable fluorescent intensity was defined as having a signal-to-background ratio (SBR) of at least 2.
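
For reference, the SBR used here reduces to a ratio of mean gray-scale intensities; a trivial sketch (ROI placement is left to the experimenter) is given below, and a tube would count as detected when the returned ratio is at least 2.

```cpp
// Sketch: signal-to-background ratio from mean 8-bit gray-scale intensities within a
// signal ROI and a background ROI. ROI placement is a placeholder choice.
#include <opencv2/opencv.hpp>

double computeSBR(const cv::Mat& gray8u, const cv::Rect& signalRoi, const cv::Rect& backgroundRoi)
{
    double signal = cv::mean(gray8u(signalRoi))[0];
    double background = cv::mean(gray8u(backgroundRoi))[0];
    return background > 0.0 ? signal / background : 0.0;
}
```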

Two non-fluorescing plastic tubes (5 mm Ø, 50 mm length) simulating blood vessels were filled with various concentrations of the ICG/DMSO solution. The tubes were placed side-by-side, then imaged to determine the minimum resolvable distance in between the tubes that the system could detect for various dye concentrations. This was implemented at working distances of 20, 30 and 40 cm and using the same ICG concentrations as with the previous detection limits test. The distances used in between the tubes were 0.25, 0.64 and 1.27 mm. To further characterize goggle performance, chicken breasts were injected with a dilution series of ICG in DMSO, in an array. Each injection contained 0.02 mL of solution and was administered into a small hole (0.5mm Ø) made into the surface of the chicken, with the top open to the air. Syringes, each containing a

different concentration of fluorescent solution, were prepared in advance, and imaging was conducted immediately following the final injection to avoid significant change in fluorescent concentration due to diffusion through the tissue.

The minimum resolvable depth perception provided by the stereoscopic imaging capability of goggles was estimated through experimental observations conducted by multiple users in the lab (n = 5). Stacks of 1 mm thick glass slides with masking tape covering the top were placed adjacent to each other, each stack containing between 1 and

4 slides. Each stack was assigned a number, and then two stacks were randomly selected and placed adjacent to each other. Two stacks containing the same number of slides could also be selected; this would function as a control for the experiment. Next, a randomly selected user would observe the adjacent stacks through the goggle while imaging the stacks from 20, 30, and 40 cm working distances. Finally, the user would indicate which stack was higher and by approximately what amount. The process was repeated until each stack combination was observed by each user.

2.2.4. Image-Guided Surgery in Chicken Ex Vivo

Whole chickens were utilized to conduct surgeries guided by the imaging goggle system ex vivo. Three parallel experiments were conducted for each of three different surgical studies, and SBR statistics were derived. The first method involved implanting a 0.6 ml microcentrifuge tube containing 300 picomoles ICG and a 0.2 mL circular capsule containing 100 picomoles ICG under the skin of the chicken breast. The chicken was illuminated using our light source with and without the addition of unfiltered diffuse light, respectively. Image-guided procedures were conducted using the 2-sensor setup integrated

with the hand-held microscope. The goggle frames were displayed with the hand-held microscope image in PiP mode on the wearable display. Similar simulated surgeries were guided by the 4-sensor setup. The second set of procedures involved implanting the fluorescent tube and capsule into the chicken breast at depths of about 3 mm under the top surface, with the skin in that region removed. Under illumination both with and without the addition of the unfiltered light components, the chicken was imaged using the 2-sensor setup and the ultrasound. The image from the ultrasound was displayed with the goggle frames in PiP mode. The third simulation, under the same lighting conditions as the prior studies, again utilized the 2-sensor setup with the handheld microscope to guide a simulated tumor resection. In this case, 0.02 ml of 11 picomoles ICG in DMSO solution were injected directly into the skinless chicken breast at depths of 2–3 mm under the surface. The stereoscopic large FOV fluorescence imaging capability was utilized to assist in the removal of the fluorescent tissue, while the hand-held in vivo microscope was used in assessing the surgical margins for additional fluorescence.

2.2.5. Telemedicine

We demonstrated the feasibility of transmitting the images recorded from the goggle devices and streaming them in near real-time to a remote viewer. To demonstrate the concept, we transmitted the real-time video feed wirelessly from the connected laptop to a remote viewer in a separate location using commercially available video streaming software. The remote viewer was then able to monitor the surgery in progress on a PC, laptop, tablet, smartphone, another set of goggles or other internet ready mobile device.

Our method was tested on WiFi as well as 4G LTE network. If the receiving end was

connected to another imaging goggle, the video stream could be viewed stereoscopically with simulated 3D depth perception.

2.3. Results

2.3.1. System Characterization

To better understand the performance of the imaging capability offered by our system, we characterized the MTF of the imaging system using the slanted edge technique,

Figure 2.2A. The results indicate the preferred transfer function, which is directly correlated to optimal resolution, at a working distance of 30 cm. The transfer functions at working distances of 20, 30 and 40 cm are very similar, however, indicating that the optimal working distance for the system lies within this range. The FOV measurements taken from the wide-field fluorescence images of the goggle cameras at 20, 30 and 40 cm working distances are shown in Figure 2.2B. The FOV increases linearly in both the vertical and horizontal directions with increasing working distance (R2 = 0.999).


Figure 2.2 System Characterization. (A) Modulation transfer functions (MTF) for the NIR imaging

detector as a function of cycles per pixel. (B) FOV measurements for the NIR imaging detector taken versus working distances of 20, 30, and 40 cm. Vertical and Horizontal FOV measurements were recorded.

(C) Pixel intensity for the detected fluorescent emissions from excited solutions of ICG/DMSO of varying

ICG concentration, as detected by the NIR imaging detector. The detected fluorescent intensity decreased

with increasing working distance. In addition, the intensity of the fluorescence at any working distance increased linearly with dye concentration (R2 = 0.96). (D) The detected fluorescence from an injection of a

serial dilution of ICG/DMSO into 0.5 mm holes cut into the surface of a chicken breast. Imaging was

conducted at 40 cm working distance under 785 nm low pass filtered illumination.

Furthermore, the fluorescence detection limit of the goggle sensors was also assessed, Figure 2.2C. As expected, the average detected fluorescent intensity increased with dye concentration for each working distance. The minimum detectable concentration

over the background (SBR = 2) was found to be 45 nM at the 20 cm working distance and

60 nM at 30 and 40 cm distances. The detected pixel intensity was found to increase with dye concentration in a linear fashion (R2 = 0.96). To test lateral resolution of the wide FOV fluorescence imaging, two fluorescent dye filled plastic tubes of various concentrations were utilized. After imaging the tubes side-by-side at various distances, it was determined that two fluorescent objects with a 0.25 mm gap in between them could be resolved at all working distances when the dye concentration was 150 nM or less. Above 150 nM, the sensors became saturated with fluorescent emissions and the two tubes could not clearly be separated.

Chicken breasts were used to assess fluorescence detection in biological tissues for small volumes of ICG. The minimum concentrations detectable corresponded to the results from our three tube study: 45 nM (0.8 picomoles of ICG) at a 20 cm working distance, and

60 nM (1.2 picomoles ICG) at 30 and 40 cm working distances, Figure 2.2D. The fluorescence detection data obtained with chicken tissues are consistent with our data obtained with ICG-filled tubes.

The degree of stereoscopic vision provided by the goggles was estimated by experimental observations. Users (n = 5) unanimously agreed that a 2 mm depth between adjacent slides was observable through the goggles at any of our working distances.

Additionally, users reported an observable difference between adjacent slides for a 1 mm height difference at working distances less than or equal to 20 cm.


2.3.2. Image-Guided Surgeries in Chicken

To better assess the performance of the imaging system, we performed image-guided surgeries on whole supermarket chickens. Intraoperative imaging of the whole chicken with 2 fluorescent targets (tube and capsule) implanted beneath the skin was conducted using the imaging goggle, Figure 2.3A and 2.3B. Testing was conducted with and without the addition of unfiltered NIR illumination from our custom light source and, under either lighting condition, the fluorescence was clearly visible above the background.

The average SBRs over three trials were 5.8 ±0.18 and 134.8 ±7.1 for the surgeries with and without the addition of unfiltered NIR illumination, respectively. The wide FOV imaging guided the assessment of the larger area, while the handheld microscope offered further investigation of lesions at higher magnification. The simulated lesions were clearly imaged and displayed stereoscopically by our system, in real-time (>15 fps). It was found by the surgeon that the depth perception and stereoscopic imaging capabilities are crucial for guiding surgeries and help to improve hand-eye coordination. The hand-held microscope also augmented the assessment of simulated lesions.


Figure 2.3 Image-guided surgeries aided by the imaging goggles. Intraoperative imaging using the

2-sensor setup (A) with the addition of unfiltered NIR light for the imaging of anatomical data, and (B)

without unfiltered NIR light. Goggle aided stereoscopic imaging of 2 fluorescent targets (blue arrows)

under the skin of a chicken. The image from the hand-held microscope was displayed in the top left corner

of each frame of the large FOV image, displaying a magnified view of the fluorescent targets with higher

resolution (yellow arrows). (C) Intraoperative imaging using the 4-sensor setup: The anatomical information from the color reflectance image; (D) the functional information from fluorescence image. The

detected fluorescence was pseudo colored green to facilitate visualization; (E) the composite fluorescence

and color reflectance images displayed to the user.

Imaging of the chicken with the same 2 fluorescent targets (tube and capsule) implanted beneath the skin was conducted using the 4-sensor setup, Figure 2.3C–E. The 4-sensor setup allowed us to attain both anatomical information from color reflectance imaging, Figure 2.3C, as well as functional information from fluorescence imaging, Figure


2.3D, in real-time (>15 fps). The fluorescence data was thresholded and a pseudo color

(green) was applied before being merged with the anatomical data into a combined imaging frame, Figure 2.3E. Combining the functional and anatomical data allowed the surgeon to better localize the fluorescence information with respect to the background, potentially improving surgical outcomes. The average co-registration error was determined to be less than 1 mm in any direction in the xy-plane when the goggle was used within the calibrated working distances (20–40 cm). A calibration correction metric was only required every 2 cm within the calibrated working distance (i.e. one correction metric at 20 cm, another at

22 cm, etc.), to provide a continuous registration.

The second surgical study was conducted using the goggle system integrated with ultrasound, displayed in PiP mode, Figure 2.4. The chicken was again illuminated with and without unfiltered NIR light. Fluorescent targets were clearly visible above the background in the large FOV images under both lighting conditions; averaged SBR values (n = 6) were

442 ±21 without unfiltered NIR light and 5.4 ±0.12 with the unfiltered NIR light.

Ultrasound was able to detect the implanted fluorescent tube which appeared as a hypoechoic dark, elongated object simulating a large blood vessel or fluid filled sac. In this way, the ultrasound images provided information complementary to fluorescence images.

The depth penetration of ultrasound imaging may also be desirable for deeper tissue assessment, as reported by the surgeon.


Figure 2.4 Goggle aided stereoscopic imaging with ultrasound. Two fluorescent targets (blue

arrows) implanted into the chicken breast at depths of approximately 3 mm. Ultrasound imagery was

displayed in picture-in-picture mode at the upper left of the goggle imaging frame. The ultrasound transducer (purple arrow) was capable of detecting the implanted fluorescent tube as a dark region (orange

arrow), similar to a large vessel or fluid filled sac. Imaging was conducted with (A) and without (B)

unfiltered NIR illumination.

The third surgical study performed involved injecting the chicken breast with fluorescent dye, and conducting a resection of the fluorescent tissue guided by the stereoscopic imaging sensors with the hand-held microscope. Under illumination with and without the unfiltered NIR light, the chicken was imaged throughout the surgical procedure using the goggle, from planning to post-operative analysis of the margins, Figure 2.5. The simulated lesion exhibited higher fluorescence intensity over the background under both illumination regimes, Figure 2.5B and 2.5F. After the partial resection of the simulated lesion, residual fluorescent tissue was still visible around the incision site, Figure 2.5C and

2.5G. The average (n = 3) SBR between the simulated lesions and the control normal breast

tissues illuminated by our light source with unfiltered NIR lighting included was 3.94

±0.04. The SBR using only the 785 nm low pass filtered illumination was found to be 220

±14. All three tests were conducted using the same illumination intensity as well as the same distance from the light source to the target. The hand-held microscope provided a magnified view of the surgical margin for inspection, complementing the large FOV of the stereoscopic goggle imaging sensors, and facilitating surgical decision making. These studies demonstrated the potential value of the Integrated Imaging Goggle system in assisting with surgical resections.

Figure 2.5 The surgical resection of fluorescent tissues in chicken guided by the imaging goggle with its hand-held microscope. High resolution microscopic images were incorporated in picture-in-picture

mode into the wide FOV stereoscopic goggle frames. Images displayed were illuminated using our light source with unfiltered NIR components for anatomical data (A-D) and using only the low pass filtered light

(E-H). The images were from four distinct time points during the resection of the fluorescent tissue: (A &

E) Chicken pre-injection; (B & F) post-injection and prior to any resection; (C & G) after a partial resection


(excised tissue indicated by red arrows), note the residual lesions in both the goggle and microscope images

(purple arrows), orange arrows indicate small residual lesions that were only revealed by the microscopic imaging; and (D & H) after completed resection and removal of residual fluorescent tissues (green arrows).

2.3.3. Telemedicine

The wireless connectivity of the goggle system facilitates telemedicine and remote collaboration. The real-time video from the goggle was shared over the internet on WiFi as well as 4G LTE network to a remote location (delay < 0.1 second, >15 fps).

For this experiment, we demonstrated the feasibility of wirelessly transmitting video from the goggle to another goggle, a smartphone or a computer. Commercial software was used for this experiment, including Google Hangouts and Skype. Example imagery as seen at the remote site is shown in Figure 2.6. When another stereoscopic display was used, we were able to view the imaging data stereoscopically in 3D at the remote site. Therefore, we have demonstrated goggle-to-goggle stereoscopic video capture, transmission and display, which facilitated the assessment of the surgical scene from a remote site with depth perception.


Figure 2.6 Telemedicine of the real-time imaging goggle frames. (A) The fluorescence video stream as seen through the goggle display, transmitted via 4G LTE network. (B) The received fluorescence video

frame displayed on the remote viewer’s smartphone.

2.4. Discussion

In this chapter, we have reported the early development of an integrated imaging system that offers wide-field fluorescence and microscopic imaging as well as integrated ultrasound to guide surgical procedures and planning. It is the first wearable real-time stereoscopic fluorescence imaging system. The real-time registration of color reflectance images and fluorescence images is also desirable, to provide both functional information and structural information.


2.4.1. Stereoscopy

Implementing a stereoscopic 3D imaging system can significantly benefit tumor resections and SLN mapping in real-time. Stereoscopic or binocular vision allows for significantly improved depth perception. This occurs via the disparity between objects located in images as seen by our right versus left eyes. The brain processes this disparity, or parallax, into depth information [126]; for example, this is how a surgeon knows how deep an incision he is making. This information cannot be accurately attained from a 2D image alone. In medical applications, stereoscopic information can help a surgeon distinguish the 3-dimensional shape of a lymph node or the depth of a small tumor on the surface of an organ. In addition, faster surgical completion times have been reported when using stereoscopic display systems [74]. Previous research into 3D medical imaging has found applications in oncology, orthopedic studies, and vascular mapping, among others [74, 126, 173]. Various modalities have been implemented for 3D stereoscopic imaging including X-ray, CT and visual imaging [174-178]. In detecting breast cancer lesions, Getty et al found a 44% decrease in false positives when utilizing stereoscopic mammography [179]. Results such as these demonstrate a potential advantage in having depth perception for medical imaging.

2.4.2. Characterization

In characterizing our system, we have determined sub-millimetric optical resolution with a large FOV when utilizing either reflectance mode or fluorescent imaging. In addition, the fluorescence detection limit for our system was found to be in the 40–60 nM range, dependent on working distance. These factors are important when detecting small residual tumors off of the main lesion site as well as when accurately mapping the tumor margins or SLN locations. While larger and more sensitive fluorescence imaging sensors exist, we have traded off a degree of sensitivity to achieve a lightweight and compact design.

Ergonomics is very important for a wearable system in terms of user compliance.

Additionally, it should be noted that the fluorescent detection limits are not fixed, and can be varied by changing the working distance of the cameras to the fluorescent target as well as by altering the intensity of the excitation light. Previous studies on liver cancer imaging in humans used ICG (0.5 mg/kg body weight) which is equivalent to 4.5 micromoles of

ICG for a patient weighing 75 kg [63, 65]. In another clinical study, 0.5 mM of ICG was used for sentinel lymph node mapping [75]. Given the parameters reported in these published studies [63, 65, 75], we believe the detection sensitivity of our system, 60 nM/90 picomoles, is adequate to serve these surgical oncologic applications well. In the future we will verify the detection sensitivity in more intensive tissue phantom and clinical studies.

2.4.3. Microscopy

To further improve the system’s ability to detect small residual tumors, we have incorporated a hand-held fluorescence microscope which is displayed using a picture-in-picture mode. Picture-in-picture format provides the user with a convenient means of visualizing the microscopic (or ultrasound) data without losing sight of the wide FOV fluorescent imaging data, and without compromising the stereoscopic effect of the 3D display. Users may appreciate this feature as it preserves the line-of-sight fluorescence imaging while integrating additional imaging functionality.


2.4.4. Ultrasound

In addition to a small handheld microscope, we have integrated a portable ultrasound into the goggle system, making it a platform technology for multimodal imaging. The benefit of ultrasound incorporation is to provide complementary information and further improve visualization of depth information, beyond the depth penetration of optical imaging. Depending on the frequency of the ultrasound transducer, the typical maximum depth penetration through soft tissues is approximately 20 cm for deep region scanning (3–5 MHz), 5 cm at 1 MHz and 2.5 cm at 3 MHz [180]. Depth penetration of light from infrared fluorophores can vary depending on the dye concentration, detector sensitivity, process of detection, working distance and illumination intensity. Typical reported maximum penetration depths are approximately 2 cm, depending on the tissue type [43, 181]. Ultrasound complements the surface-weighted functional information provided by fluorescence with additional structural information at various depths through the tissue. Visualization of ultrasound data in PiP format provides the surgeon with simultaneous line-of-sight, stereoscopic fluorescence imaging as well as structural ultrasound data in real-time.

2.4.5. Future Work

Our current prototype microscope uses VGA (640 x 480) resolution. While this was adequate for demonstrating the concept, another hand-held microscope with a higher resolution sensor and a higher power lens may be developed in the future, providing improved image quality for better tumor delineation and localization. Also, the registration accuracy of the 4-sensor setup may be further improved in future iterations by further development of the registration algorithm.

2.5. Conclusions

In summary, we have developed a novel imaging system for guiding surgeries. The prototype systems offer real-time stereoscopic wide-field fluorescence imaging and color reflectance imaging capacity, along with in vivo handheld microscopy and ultrasound imaging. Goggle-to-goggle and goggle-to-smartphone telemedicine links are also enabled to facilitate collaboration. The imaging goggle system has shown great potential for facilitating image-guided surgeries.


CHAPTER III

METHODS OF CHARACTERIZATION FOR A STEREOSCOPIC HEAD-MOUNTED

FLUORESCENCE IMAGING SYSTEM

In this chapter, Aims 1 & 2 were evaluated using an updated version of the system described in Chapter 2. Specifically, device performance metrics in regard to fluorescence detection sensitivity and resolution were determined for one clinically relevant dye (ICG).

Additionally, a comparison of AR displays was conducted, evaluating how well each display delivered resolution and fluorescence data to the user. Simulated fluorescent tissue resections were conducted to further evaluate and compare system performance to conventional methods.

3.1. Introduction

Optical imaging characterization techniques, including fluorescence detection sensitivity tests, resolution and contrast determination and tissue phantom studies, are standard procedures for fluorescence imaging systems during prototype development and pre-clinical studies. Fluorescence sensitivity, spatial resolution and contrast tests can determine the system’s optimal performance parameters. Tissue phantom tests, particularly when conducted under realistic environmental conditions such as ambient lighting, can provide useful pre-clinical data on system performance.

Fluorescence detection sensitivity tests commonly employ two methods of evaluation. The first is the dark room test, wherein the optimal fluorescence detection limits, including minimum detectable dye concentration in solution and dynamic range of the system, can be determined [94, 97-100]. It is not necessary for such tests to be conducted in a room with no light aside from the fluorescence excitation, as some studies simply use controlled lighting in a laboratory environment. A benefit of conducting these so-called “dark room” tests is the establishment of an optimal benchmark to which subsequent system tests can be compared. A disadvantage of using these tests alone is a potentially unrealistic picture of real-world performance. Therefore, it is important to follow up the dark room test with more realistic tissue phantom and animal studies.

A variety of phantoms have been developed to simulate different optical tissue properties. Typically, a tissue phantom consists of a solvent (commonly water), a solidifying agent, such as agar [96, 117, 182, 183] or gelatin [34, 137, 184, 185], and one or more scattering and absorbing agents. Scattering agents such as intralipid [34, 183-188] and titanium or silica oxide particles [183, 189-191] have become popular due to their varying particle sizes and low cost. Both types of scatterer are used independently as well as together in order to create various scattering effects for tissue mimicry. A variety of absorbing agents have been applied to tissue phantom construction, from ink [183, 189-191], to coffee [187], to paint and cosmetic makeup [187, 192], as well as various blood constituents, including whole blood, serum or just the hemoglobin [34, 183, 186-188]. Additionally, phantoms may contain preservatives and anti-bacterial agents to facilitate long term use [182, 183], and also a small concentration of fluorescent dye to simulate the naturally occurring autofluorescence of the mimicked tissue at the studied wavelength band [183, 184, 186]. The precise recipe for a phantom is frequently varied to suit a particular application, as well as budget. While this variation can be a strength, as the phantoms can be tailored to mimic any variety of tissue and physiological condition, a potential weakness is the lack of standardization. Great variation in phantom design between studies can make direct comparison between systems difficult. Therefore, it is important that any phantom design be characterized at least for its optical scattering and absorption properties, and that those properties be compared to the target tissue [34, 183, 184, 187, 189, 191, 192].

In this way, optical properties can be a common reference point between studies.

Tissue phantoms can be used under either ideal “dark room” conditions or realistic environmental conditions simulating the medical situations to which the system under review will ultimately be applied. In either case, the use of a tissue phantom can provide more realistic feedback about system performance for a medical application. Studies can even investigate phantoms under both ideal and realistic environmental conditions to provide a range of performance parameters. Phantoms are frequently used both for fluorescence detection tests as well as for surgical simulations [34, 96, 184, 193], and as such, can provide vital system feedback to the developers and frequently also act as a prelude to animal or even clinical studies.


Parallel to the variety found in tissue phantom recipes and testing conditions, there is likewise no standard for testing fluorescence under dark room or laboratory conditions.

Significant variation has been observed between studies in the volume of fluorescent solution under analysis, the shape of the container or phantom holding the fluorescence, the intensity of the fluorescence excitation light and the camera working distances used for analysis. The shape and volume of the fluorescence solution under examination varies depending on the selected container holding the solutions. Containers used in the literature include glass vials [100], centrifuge tubes [94, 97-99], various chemical or biological well plates [67, 92, 105, 194, 195] as well as homemade vessels [34, 190, 196]. Additionally, for phantom studies, there is significant variability in the depth of the imaged fluorescent inclusion within the tissue and in whether the fluorescent inclusion is fabricated as part of the phantom [185, 188], like a tumor in the body, or merely contained by the phantom [34, 190], like a cup holding water.

Several studies have made strides in consolidating these variations into a single test phantom. One excellent example, developed by Anastasopoulou et al., allows for the simultaneous testing of multiple concentrations of a fluorophore, the effect of tissue phantom imaging depth on a single concentration of fluorophore, system contrast and resolution [190]. Other studies have conducted analyses of a fluorescence node placed inside a tissue mimicking phantom at various depths and for various dye concentrations [185, 188, 189, 197]. The ability of a system to differentiate between adjacent fluorescent nodes has been analyzed in phantom as well [100, 188]. Yet another study tested the detection of fluorescence nodules in a phantom using various node volumes [184].


In this chapter, the imaging goggle system was characterized for fluorescence detection using both dark room studies as well as tissue phantoms. Phantom imaging of fluorescent inclusions was conducted for varying fluorescent node volume and tissue depth. All fluorescent detection studies were conducted at varying camera working distances and intensities of excitation. Additionally, dark room fluorescent studies were conducted using three different containers for the dye solutions: 24 and 48 well plates as well as centrifuge tubes. Detected intensities were compared as a function of Signal-to-Background Ratio (SBR) to determine whether volume, container and shape make a significant difference on dye detection. In addition to fluorescence studies, we present characterization data on the resolution of the goggle imaging sensors in 3D space, analyzing both planar spatial resolution (xy-plane) and depth perception sensitivity (z-plane). Next, we conducted surveys on the minimum fluorescent concentrations and resolution targets that a user wearing the goggle system can actually see while looking through the connected AR display. Lastly, the survey volunteers conducted simulated surgical resections on surgical tissue phantoms to test system performance during a fluorescent node excision. Goggle performance was compared to the performance of a comparable stand-mounted fluorescence imaging system on several parameters related to image guided surgery.


3.2. Materials and Methods

3.2.1. Optical Imaging and Display

Stereoscopic fluorescence and reflectance mode imaging was conducted using twin monochrome CCD imaging sensors (CM3-U3-13S2M-CS, FLIR, CAN). Sensors were operated at 15 frames per second (fps) and 1.3 MP resolution (1280 H x 960 V). The CCD cameras were equipped with 8 mm focal length C-mount lenses (M118FM08, Tamron, JPN), which were fitted with 832 nm bandpass filters (84-107, Edmund Optics, NJ). Sensors were mounted side-by-side using a simple custom aluminum frame that aligned the pair of sensors horizontally.

Multiple stereoscopic VR displays were tested. Two of the displays were commercial varieties (Wrap 1200DX Vuzix, CHN; MyBud Accupix, CHN), modified to fit the system. The third display was made in-house from commercially available parts, including: twin 1.5” IPS LCD display screens, focal lenses, an integrating bonnet and a microcontroller board to handle the displays (3356, 3787 & 3055 Adafruit, NY, USA).

Displays were attached to the back of the camera housing to enable the line-of-sight imaging and display method, also known as video see-through. The assembled imaging and display module was affixed to a dental loupe-style head mount, Figure 3.1.


Figure 3.1 Head-mounted imaging system consisting of stereoscopic CCD imaging sensors with large format focal lenses, augmented reality (AR) display and loupe-style head mount. In the pictured orientation, the sensors were filtered for NIR wavelengths.

Fluorescence imaging was conducted using the near-infrared (NIR) dye indocyanine green (ICG). Excitation of the dye was induced using an adjustable-focus, heatsink-mounted 780 nm LED emitter fitted with an 800 nm short-pass filter (M780LP1, SM1F & FES0800, ThorLabs, NJ, USA). Illumination for resolution testing of the system was provided by a custom circular array of fifty 850 nm LEDs (L3-0-IR5TH50-1, LED Supply, VT, USA). The NIR light from the array allowed the resolution pattern to be seen by the system's cameras through their optical filters. Additionally, this reflectance mode light source was used to illuminate the background during our surgical simulations, described below. Both the NIR reflectance and excitation light sources were connected to separate, independently adjustable LED driver circuits (LEDD1B, ThorLabs, NJ, USA).

3.2.2. Computation

The cameras were connected, operated and controlled via a mini PC (NUC6i7KYK, Intel, CA). The system was outfitted with 32 GB of 2133 MHz DDR4 RAM and a 500 GB SSD, while the CPU was an Intel® Core™ i7-6770HQ processor running at 2.6 GHz per core with an Intel® Iris™ Pro 580 integrated graphics card. Ubuntu 16.04 LTS was installed as the primary operating system. Camera connections were made via USB 3.0 ports and the displays were connected via HDMI. Our custom code facilitated camera capture and processing of input imaging frames as well as output display to both the head-mounted AR unit and to a stand-alone monitor. Control and processing commands were programmed to operate in real-time using the Python 2.7 coding language.
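
As a rough, illustrative sketch of the capture, processing and display loop described above (not the actual system code), the structure in Python looks like the following; the OpenCV VideoCapture interface, device indices and side-by-side packing are assumptions, since the real cameras are driven through their vendor SDK:

```python
# Illustrative stereo capture/processing/display loop (assumed OpenCV interface;
# the actual system drives the FLIR CCDs through their vendor SDK).
import cv2
import numpy as np

left_cam = cv2.VideoCapture(0)    # assumed device index for the left CCD
right_cam = cv2.VideoCapture(1)   # assumed device index for the right CCD

while True:
    ok_left, left = left_cam.read()
    ok_right, right = right_cam.read()
    if not (ok_left and ok_right):
        break

    # Placeholder for per-frame processing (thresholding, pseudo-coloring, overlays).
    left_view = cv2.convertScaleAbs(left)
    right_view = cv2.convertScaleAbs(right)

    # Pack the two views side by side for the stereoscopic (side-by-side) AR display,
    # which is driven over HDMI as a second monitor.
    cv2.imshow("AR display", np.hstack((left_view, right_view)))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

left_cam.release()
right_cam.release()
cv2.destroyAllWindows()
```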

3.2.3. Fluorescence Detection Sensitivity

Fluorescence detection sensitivity tests were conducted using Cardiogreen dye (ICG) dissolved in Dimethyl Sulfoxide (DMSO) (I2633-50MG & 472301, Sigma Aldrich, MO). Two sets of tests were conducted under two different lighting conditions. The first set of fluorescence sensitivity tests was conducted in a dark room with ambient light consistently measured between 40-50 µW/cm2 at the 830 nm wavelength. The second set of tests was conducted in an illuminated room, using ceiling-mounted fluorescent lights common to office and lab spaces, with an average ambient light measured at 80 µW/cm2. Multiple dilution series (n = 6) of ICG in DMSO were prepared for each set of experiments.


Within each study, each dilution was imaged independently, positioning the camera vertically overhead, perpendicular to the table top. The excitation light source was positioned at a constant 60 cm distance from the fluorescent dilution, and at an incident angle of 15° from the vertical (or 75° up from the horizontal plane of the bench top) to avoid light obstruction by the camera. A range of excitation intensities (4, 2, 1, 0.5 and 0.25 mW/cm2) were used to quantify the effect of illumination power on fluorescence detection sensitivity. Excitation intensity was measured at the fluorescent target site using a sensitive photodiode (PM16-120, ThorLabs, NJ, USA), which was zeroed at ambient room light prior to testing. Additionally, each experiment was conducted over three working distances (20, 40 and 60 cm), measured from the end of the camera lens to the top of the fluorescent target.

During each fluorescence sensitivity test, the calculated SBR values for each dye concentration were used to determine the minimum dye concentration required to achieve SBRs of 1.2, 1.5 and 2. Calculations were made for each excitation intensity, volume of dye solution or container, working distance and tissue depth.

Fluorescent signal and background intensities used for calculations were measured from raw imaging data using ImageJ software [198]. The data was recorded and SBR values determined using Excel (Microsoft WA, USA). Statistical analyses were conducted using Minitab software (Minitab PA, USA).
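
The SBR bookkeeping amounts to a ratio of mean ROI intensities followed by interpolation along the dilution series. A minimal sketch is shown below; the ROI values and the use of simple linear interpolation are illustrative assumptions, not the exact procedure used in Excel:

```python
import numpy as np

def sbr(signal_roi_mean, background_roi_mean):
    # Signal-to-Background Ratio from mean ROI intensities (e.g., ImageJ measurements).
    return signal_roi_mean / background_roi_mean

# Example dilution series (nM) with illustrative SBR values measured for it.
concentrations = np.array([3.125, 6.25, 12.5, 25.0, 50.0, 100.0, 200.0, 300.0])
sbrs = np.array([1.05, 1.10, 1.22, 1.45, 1.90, 2.80, 4.50, 6.00])

# Minimum concentration needed to reach each target SBR, by linear interpolation
# between the two bracketing dilutions.
for target in (1.2, 1.5, 2.0):
    c_min = np.interp(target, sbrs, concentrations)
    print("SBR %.1f requires roughly %.1f nM" % (target, c_min))
```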

3.2.3.1. Dark Room Tests

In the dark room study, a dilution series of ICG in DMSO was prepared using dye concentrations of 300, 200, 100, 50, 25, 12.5, 6.25, 3.125 and 0 nM. Two mL of each dye concentration were aliquoted into a single well of a 24-well cell culture plate (EP0030722019, Eppendorf, DEU). Dilutions were placed into every other well on the plate in order to prevent cross contamination of emitted fluorescent light between samples. The effect of volume on fluorescent intensity was analyzed by preparing an additional set of dilutions in 48-well plates (EP0030723015, Eppendorf, DEU). In this case, a volume of 1 mL of each dye concentration was pipetted into every third well. Lastly, to compare the effect of shape on fluorescent intensity, 2 mL of each dye concentration was aliquoted into a separate 2 mL centrifuge tube (508GRD-SC-FIS, Fisher Thermo Scientific, MA). The centrifuge tubes were laid on their sides with the non-diffused side up. Fluorescent intensity of each dilution was recorded over 5 trials, using 5 independently created dilution series. The Signal-to-Background Ratios (SBRs) for each dye concentration were calculated using the 0 nM concentration as a background reference.

Multiple factor ANOVA was conducted (α = 0.05) to determine significance between results. The calculated SBR values were analyzed as responses, while the dye concentration, excitation intensity, working distance, and container type were factors. Post-hoc Tukey tests (CI = 0.95) were conducted on the factors to determine intra-factorial significance. Additional analyses, including ANOVA and Tukey tests, were conducted within each volumetric category in order to observe intra-factorial variations as a function of the container.
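
A multi-factor ANOVA with Tukey post-hoc comparisons of this kind can be reproduced in Python as sketched below; the Minitab analysis itself is not shown, and the file name, column names and long-format layout are assumptions:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Long-format data: one row per measurement (hypothetical file and column names).
df = pd.read_csv("darkroom_sbr.csv")  # columns: SBR, conc_nM, intensity, wd_cm, container

# Multiple-factor ANOVA with SBR as the response and the four factors as categorical terms.
model = ols("SBR ~ C(conc_nM) + C(intensity) + C(wd_cm) + C(container)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Post-hoc Tukey test within a single factor, e.g. container type (95% family-wise CI).
tukey = pairwise_tukeyhsd(endog=df["SBR"], groups=df["container"], alpha=0.05)
print(tukey.summary())
```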

3.2.3.2. Tissue Phantom Tests

Tissue phantoms were created to simulate the ability of the system to detect fluorescence emissions of ICG in a surgical setting. The phantoms were made using previously specified and tested recipes [183]. In short, each phantom consisted of a 50:1 ratio of distilled water to agarose powder (C996H58, Thomas Scientific, NJ), 2% of a 20% intralipid emulsion (I141-100ML, Sigma Aldrich, MO), 1% of a 10% solution of India ink in DI water and 0.2% silicon dioxide powder, by weight. The tissue phantom constituents were mixed at room temperature in a glass beaker, then heated in a microwave oven until reaching approximately 100 °C. Heating was paused multiple times before reaching 100 °C so that the constituents could be stirred using a glass rod. Following heating, the solution was allowed to cool to below 80 °C before being aliquoted into various volumes, to which a concentration of ICG dissolved in DMSO was added.
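
For a concrete sense of the proportions, the short calculation below converts the recipe ratios into ingredient masses for one batch; the 500 g batch size is arbitrary, and interpreting the percentages as weight fractions of the water base is an assumption rather than a statement from the recipe:

```python
# Ingredient masses for one tissue-phantom batch, from the recipe ratios above.
# Batch size is an arbitrary example; percentages are interpreted as weight
# fractions of the water base (an assumption).
water_g = 500.0

agarose_g    = water_g / 50.0    # 50:1 distilled water : agarose powder
intralipid_g = 0.02 * water_g    # 2% of a 20% intralipid emulsion
ink_sol_g    = 0.01 * water_g    # 1% of a 10% India ink solution in DI water
sio2_g       = 0.002 * water_g   # 0.2% silicon dioxide powder

print("agarose %.1f g, intralipid %.1f g, ink solution %.1f g, SiO2 %.1f g"
      % (agarose_g, intralipid_g, ink_sol_g, sio2_g))
```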

An additional tissue phantom was created to contain the smaller, individually aliquoted phantoms with varying ICG concentration, in a manner similar to the aforementioned cell culture well plates, Figure 3.2A. Following the steps of heating the phantom solution and cooling it to 80 °C, a 50 nM concentration of ICG was added to the container phantom to simulate the average NIR autofluorescence from muscle tissue [34]. The still-hot contents of the substrate phantom were then poured over a 3D printed mold placed within a box to set, Figure 3.2B. The mold patterned a series of differently sized wells onto the surface of the substrate, similar to [190], allowing a range of tissue phantom volumes to be tested. Volumes used for this study included 2, 1.5, 1, 0.5, 0.25 and 0.125 mL. A complete set of well volumes was used for each tested dye concentration, and wells were spaced 1.5 cm apart to prevent cross contamination of emitted fluorescent light between samples. The volume of each well was set by adjusting the diameter while keeping the depth constant, so that when the dilutions were aliquoted into the wells the surface of each volume was about level with the tissue phantom surface. A volume of ICG was added to each of the previously aliquoted tissue phantoms to achieve total dye concentrations of 62.5, 125, 250, 500, 1000, 2000 and 4000 nM. The fluorescent phantom solutions were then pipetted into the wells of the tissue phantom substrate.
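
Since well volume was set by diameter at a fixed depth, the corresponding well diameters follow directly from the cylinder volume; the sketch below assumes a 1 cm well depth, which is an illustrative value not stated in the text:

```python
import math

# Cylindrical well diameters for the listed fill volumes at a fixed well depth.
# The 1 cm depth is an assumed value for illustration only.
depth_cm = 1.0
for vol_ml in (2.0, 1.5, 1.0, 0.5, 0.25, 0.125):     # 1 mL = 1 cm^3
    diameter_cm = 2.0 * math.sqrt(vol_ml / (math.pi * depth_cm))
    print("%.3f mL well -> %.2f cm diameter" % (vol_ml, diameter_cm))
```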

Figure 3.2 Well-patterned tissue phantom (A) and 3D printed mold (B). The wells were made to hold 6 different volumes of phantom solution, and 5 different concentrations of dye were contained in one molded phantom. The box used to contain the mold (B, in white) was also used to create additional solid rectangular layers of tissue phantom (not patterned) to be placed on top of the filled, patterned phantom to simulate fluorescence under a depth of tissue.

Detection of the fluorescent tissue phantoms was also tested under various tissue depths. Additional uniform, rectangular tissue phantoms were constructed with spatial dimensions (xy plane) matching the area of the container tissue phantom by using the same mold box used to contain the well-patterned phantom, Figure 3.2B. The rectangular phantoms were also made with a 50 nM concentration of ICG to simulate tissue autofluorescence, and were constructed at varying thicknesses of 1, 3, 5 and 10 mm.

Imaging of the fluorescent tissue phantom arrays was conducted first with the molded phantom wells filled with the varying fluorescent test phantoms open to the air.


Subsequent imaging tests were conducted with the layers of the rectangular phantoms placed over top of the filled wells at varying thicknesses. Fluorescent intensity readings of each fluorescent phantom were recorded and averaged over 5 separately prepared series, and SBRs were calculated, using the 50 nM concentration of the well-patterned phantom and the thickness layers as the background reference. Multiple factor ANOVA was conducted (α = 0.05) to determine significance between results. The calculated SBR values were analyzed as responses, while the dye concentration, excitation intensity, working distance, tissue thickness and volume were factors. Post-hoc Tukey tests were conducted on the factors to determine intra-group significance.

3.2.4. Fluorescence Guided Surgical Simulation

Resections of simulated fluorescent lesions created inside tissue mimicking phantoms were conducted. In this study, tissue phantoms were created as solid 10x5x5 cm blocks. The blocks were made using the same recipe as in the Detection Sensitivity section above [183], and included a 50 nM concentration of ICG to simulate autofluorescence.

The blocks were created by pouring the hot tissue phantom gel (< 80 °C) into a rectangular 3D printed mold, Figure 3.3A. The mold was first only half filled and allowed to partially set before the top half was poured. Fluorescent occlusions were created in the top half of each phantom by pipetting a 200 µL volume of a 2000 nM dilution of ICG in DMSO into the block only when the gel began to visibly thicken in the mold. The dye was applied in this fashion so that it would not diffuse throughout the block, but rather remain as an irregular shape concentrated towards the center of the top half of the phantom, Figure 3.3B.


Figure 3.3 Mold design for creating surgical tissue phantoms (A) and a tissue phantom with fluorescent inclusion (B).

Surgical simulations were conducted with room lights on at an ambient intensity of 80 µW/cm2 for each trial. The excitation illumination was set at a constant intensity of 1 mW/cm2, measured at the phantom surface. Additionally, the surgical scene was illuminated by the 850 nm LED lamp in order to make the structural data (i.e. the phantom block and the surgeon’s hands and tools) visible through the cameras’ bandpass filters. The reflectance mode LED intensity was modulated to 50 µW/cm2 at the phantom surface, in order to prevent wash-out of the fluorescent data. The fluorescent region was excised using a surgical scalpel and forceps, and both the excised tissue and the excavated margin were saved for analysis.

Two sets of simulations were conducted, wherein image guidance was provided either by wearing the fluorescence imaging system described in this chapter or by mounting the same cameras on a stand and viewing the captured imagery on a 2D, stand-alone monitor. The stand-mounted system was affixed on a translating arm mount, so that the working distance could be adjusted between 20 and 60 cm and the imaging angle could be varied at the user’s discretion. Each set of simulations (wearable and stand-mounted) was repeated 3 times by 11 different users to gain statistical significance. Surgical resections (3 each) were performed by 5 volunteer doctors (MD), 3 medical students and 3 engineers, all of whom were not previously familiar with the system. Our “surgeons” were trained on best practices for system usage prior to simulations, and each volunteer performed a practice surgical simulation.

Parameters selected for surgical performance evaluation included weight of fluorescent tissue removed, time to surgical completion and fluorescent intensity of both the excised tissue and the remaining margin. Surgical completion time was determined as the difference between the procedure start time and when the administrator decided that all the fluorescence had been removed.

Fluorescent intensity of the excised tissue and remaining margin was determined by fluorometric analysis of the tissue phantom pieces, using the following procedure. The tissue phantom block containing the excavated margin was placed into a 250 mL glass beaker. The beaker was then placed into a microwave oven and heated until the phantom had completely melted. The melted phantom was stirred using a glass rod and allowed to cool to below 80 °C before a 2 mL sample was aliquoted into a spectroscopic cuvette.

Fluorescent emission intensity readings were taken at the peak emission wavelength in the ICG band, 832 nm. Next, the resected tissue phantom pieces were weighed and then they too were melted, stirred and analyzed for ICG emissions. Transmission mode spectroscopic readings were taken using a desktop spectrometer (ThermoFisher, MA, USA). The spectrometer’s white light source was filtered using an 800 nm low-pass filter and the collection slit was filtered with an 832 nm band-pass filter to prevent stray light contamination of the fluorescence signal (47-586 & 84-107, Edmund Optics, NJ, USA). The detected pass-band was plotted for analysis. Additionally, tissue phantom blocks containing no ICG were analyzed as negative control samples, and blocks containing a uniform 2000 nM ICG concentration were analyzed as a positive control.

Tissue phantom blocks containing only the 50 nM autofluorescence level of ICG were also analyzed as a comparative background. The averaged intensity of the negative control readings were subtracted from all surgical tissue phantom readings. Corrected margin and resected tissue intensities were reported as a percentage of the average intensity measured from the positive control samples. Each resected phantom was measured in triplicate.
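
The background correction and normalization described above reduce to a short calculation, sketched below with illustrative numbers; whether the positive control is itself background-corrected is an assumption in this sketch:

```python
import numpy as np

# Triplicate emission readings (arbitrary units); the numbers are illustrative only.
negative_ctrl = np.array([102.0, 99.0, 101.0])      # phantom with no ICG
positive_ctrl = np.array([5150.0, 5230.0, 5190.0])  # uniform 2000 nM ICG phantom
resected      = np.array([4210.0, 4175.0, 4240.0])  # readings from a resected piece

corrected_resected = resected.mean() - negative_ctrl.mean()
corrected_positive = positive_ctrl.mean() - negative_ctrl.mean()

# Reported as a percentage of the (background-corrected) positive control intensity.
percent_of_positive = 100.0 * corrected_resected / corrected_positive
print("resected intensity: %.1f%% of positive control" % percent_of_positive)
```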

Significant differences between systems (Goggle vs Stand) as well as between users (Doctor, Student and Engineer) were determined using multiple factor ANOVA (α = 0.05). The measured fluorescence intensity, surgical time and resected tissue weight were analyzed as responses, while the system and user were set as factors. Post-hoc Tukey tests (CI = 0.95) were conducted to determine significance within factors based on system and user.

3.2.5. Resolution Testing

Spatial resolution of the system was determined using a 1951 USAF Resolution Test Target (R3L3S1P, ThorLabs, NJ, USA). The target was placed on top of a plain white sheet of printer paper and imaged from directly overhead, Figure 3.4A. Reflection mode illumination was provided by our 850 nm LED light source, mounted overhead at a 15° angle from the vertical camera mount, with a measured intensity of 100 µW/cm2 at the target. Imaging was conducted at working distances of 20, 40 and 60 cm. The raw images were analyzed using ImageJ software for minimum detectable resolution, in two ways. The first method was to select the smallest bar pattern to contain three distinct dark bars separated by white bars, observable by plotting the average intensity across the pattern, Figure 3.4B. The second method was to select the smallest pattern having a contrast function value of greater than 20%. Contrast was determined using the equation below, where I_Dark is the average intensity of the dark bars in a single pattern and I_Light is the average intensity of the light bars in between the dark bars.

$$ CF = \frac{I_{Light} - I_{Dark}}{I_{Light} + I_{Dark}} \times 100\% $$

The system spatial resolution was determined to be the width of the bars in the selected patterns. The results from the two determination methods were compared.
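
Applied to an ImageJ line profile across one bar pattern, the contrast criterion can be evaluated as sketched below; grouping the lowest and highest thirds of the profile as the dark and light bars is a simplification assumed here, not the exact procedure used:

```python
import numpy as np

def contrast_function(profile, threshold=0.20):
    # profile: 1-D intensity plot across one 3-bar pattern (e.g., an ImageJ line profile).
    # The lowest third of samples is treated as the dark bars and the highest third
    # as the light bars -- a simplified reading of the troughs and peaks.
    vals = np.sort(np.asarray(profile, dtype=float))
    third = max(len(vals) // 3, 1)
    i_dark = vals[:third].mean()
    i_light = vals[-third:].mean()
    cf = (i_light - i_dark) / (i_light + i_dark)
    return cf, cf >= threshold

# Example: a synthetic profile with three troughs (dark bars) between light bars.
profile = [200, 60, 200, 55, 205, 58, 198]
cf, resolvable = contrast_function(profile)
print("CF = %.0f%%, pattern resolvable: %s" % (100 * cf, resolvable))
```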

Figure 3.4 Resolution target (A) imaged from directly overhead. The first method of determining minimum discernable resolution of the system sensors was conducted by plotting across an imaged bar pattern to see whether three distinct troughs were present (B).


The stereoscopic imaging capability of the system was evaluated for depth perception. The imaging system was first calibrated using known methods [199, 200]. More detail about this procedure is provided in Chapters 4 & 5. Next, side-by-side stereoscopic imaging frames were processed to obtain a composite frame known as a disparity map. The disparity map was translated into a depth map, which returns the distances from the optical plane to the various object planes located within the cameras’ joint field-of-view [201]. The depth map was then used to determine the limits of the system’s stereo vision capabilities by imaging adjacent targets of varying height difference. The averaged depth map values for the targets were compared for significant difference (1-tailed t-test; α = 0.05) to determine the minimum difference in imaging depths that the system was able to distinguish. For these measurements, 1 mm thick microscope slides were used as imaging targets. Slides were made opaque by applying masking tape to the top surface, and imaging was conducted on a dark backdrop. Stereoscopic depth mapping was conducted at working distances of 20, 40 and 60 cm.
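
A minimal OpenCV version of this disparity-to-depth pipeline is sketched below; the block-matching parameters, calibration values and file names are placeholders rather than those of the actual system:

```python
import cv2
import numpy as np

# Rectified stereo pair from the calibrated cameras (placeholder file names).
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching to obtain the disparity map (illustrative parameters).
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=9)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point

# Convert disparity to metric depth with Z = f * B / d, where the focal length f
# (in pixels) and baseline B come from the stereo calibration (values assumed here).
focal_px = 1400.0
baseline_m = 0.065
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]

print("median depth in view: %.3f m" % np.median(depth_m[valid]))
```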

3.2.6. Display Testing

Evaluation of the stereoscopic display was also required for the wearable imaging system. The most efficient, and perhaps most appropriate, way to evaluate display quality was through user feedback. Metrics for defining display quality include visible resolution (i.e. the smallest single dimension that a user can distinguish, looking through the display), fluorescence sensitivity (the minimum fluorescence concentration a user can see, looking through the display), and depth perception. For these tests, 11 volunteers evaluated display performance. Each volunteer would don the goggle system with one randomly selected display connected. The system cameras were removed from the head mount and stand-mounted, so that the evaluations could be compared more accurately to the sensor characterization data taken at three set working distances. For these tests, volunteers viewed a target through the AR displays that was being imaged in real-time by the mounted cameras at 20, 40 and 60 cm working distances. The volunteers first evaluated a tissue phantom containing multiple fluorescent inclusions of varying size and concentration, as described in the Fluorescence Detection Sensitivity section of this chapter. Next, the volunteers examined a USAF 1951 resolution target, as described in the Resolution Testing section. Last, the volunteers examined stacks of opaque microscope slides, see again the Resolution Testing section.

For each test, volunteers indicated the smallest and dimmest fluorescent inclusion they could see clearly, as well as the smallest set of bar patterns and the smallest height difference between adjacent microscope slide stacks that could still be distinguished.

Tests were repeated until each volunteer had used each display to analyze the test objects.

3.3. Results

3.3.1. Fluorescence Detection Sensitivity

3.3.1.1. Dark Room Tests

Tests were conducted on our imaging sensors to determine fluorescence detection limits under optimal conditions. Additionally, these tests were used to determine whether the volume and shape of the fluorescent target made a significant impact on detected fluorescent intensity. Lastly, the results provide a look-up table relating the required excitation intensity and dye concentration to achieve a desired SBR.

Results were formulated as the minimum dye concentration in DMSO required to achieve an SBR of 1.2, 1.5 or 2, over a range of working distances and excitation intensities, Table 3.1. Additional plots of the data were arranged and reported in Appendix A. As expected, the dye concentration required to achieve each SBR decreased with increasing excitation intensity. Decreasing working distance did not always correlate to a decrease in the required dye concentration, however, and this statistic appeared to have a greater dependency on the container type. The largest SBR values, and hence lowest required dye concentrations to achieve each SBR target, were calculated from the centrifuge tube volumes, Figure 3.5. The lowest SBR values, corresponding to the largest minimum dye concentrations, were found from the smallest volumes, using the 48 well plates.

Figure 3.5 Minimum dye concentrations required to achieve SBR values of 1.2, 1.5 and 2 using the centrifuge tube volumes of ICG in DMSO. Tests were conducted over a range of working distances and excitation light intensities.


Table 3.1

Required ICG concentrations to achieve a Signal-to-Background Ratio of 1.2, 1.5 or 2. Testing was conducted using a range of excitation intensities and imaging working distances. The values listed were interpolated from averaged fluorescent intensity readings (n=5) taken on a series of dilutions. Different volumes of each dilution were placed into either 24-well plates at 2 mL, 48-well plates at 1 mL or centrifuge tubes at 2 mL for testing. Significant differences in concentrations were found between container type, SBR and intensity. Differences in concentrations between working distances were varied.

Intensity    WD    |       24 Well            |        48 Well            |         Tube
(µW/cm2)    (cm)   | SBR 1.2    1.5     2     | SBR 1.2    1.5      2     | SBR 1.2   1.5    2
250          20    |  21.67    55.12  114.29  |  35.31    103.86  207.95  |  11.90   29.51  58.33
             40    |  24.10    60.52  120.59  |  40.30    109.24  205.93  |  11.45   27.41  54.08
             60    |  24.15    62.93  122.96  |  38.94    105.80  205.29  |  11.73   28.08  55.74
500          20    |  12.36    25.86   54.78  |  14.73     52.65  112.61  |   6.72   16.43  32.92
             40    |  13.69    26.39   55.03  |  19.60     52.01  109.65  |   5.95   14.24  28.41
             60    |  14.07    26.69   55.62  |  20.33     49.93  108.35  |   6.13   14.97  29.36
1000         20    |   4.50    16.30   28.51  |   7.80     29.54   70.66  |   3.47    8.57  17.37
             40    |   6.57    16.81   30.62  |  10.43     25.87   58.55  |   3.52    8.07  15.60
             60    |   6.62    16.94   30.63  |  13.04     28.72   62.58  |   3.01    8.58  15.91
2000         20    |   3.88    11.12   19.48  |   6.70     16.93   48.35  |   2.35    5.50  10.79
             40    |   4.02    11.19   20.40  |   7.07     18.52   36.03  |   1.83    4.50   8.77
             60    |   4.37    11.29   20.59  |   5.67     16.29   26.97  |   1.89    4.78   9.67
4000         20    |   3.10     9.16   17.10  |   6.14     13.67   36.19  |   1.33    3.41   7.59
             40    |   4.03     9.70   17.15  |   6.32     13.91   30.97  |   1.36    3.39   6.69
             60    |   4.49     9.98   17.21  |   5.33     10.93   19.64  |   1.45    3.68   7.33

Multiple factor ANOVA indicated significant differences between the SBR values calculated for each of the volumetric categories. The centrifuge tubes required significantly lower dye concentrations than either of the well plates. Additionally, significantly lower concentrations of dye were required to achieve any of the SBR values when using a 2 mL volume in the 24 well plates than when imaging a 1 mL volume in the 48 well plates.

Results from ANOVA studies were tabulated and listed in Appendix D.1.1.


Varying the excitation intensities was also found to produce significantly different SBR readings within each volumetric category (i.e. 24 or 48-well plates or tubes). Post-hoc Tukey tests were conducted on the results from the excitation intensities, within each volumetric category, and the analyses are listed in Appendix D.1.1. Results indicated significant differences in calculated SBR values when varying excitation intensity. Exceptions to this were found between adjacent intensities. For instance, the SBRs calculated using the 2000 and 4000 µW/cm2 intensities did not produce significantly different results, however the results calculated between the 1000 and 4000 µW/cm2 intensities were significantly different.

Unlike with the excitation intensities, SBR values calculated over different working distances were not typically significantly different from each other. One exception to this was found for the 48 well plate volumes, where only the 20 cm working distance was found to provide significantly different results from the 40 and 60 cm distances. Varying the dye concentration in each volume, however, often resulted in different SBR values, as expected. Significance was not found between every concentration, however a regular trend in significance was determined. A larger difference between dye concentrations (e.g. 200 nM versus 300 nM) tended to prove more significant than smaller differences (e.g. 3 nM versus 6 nM), Appendix D.1.1.

3.3.1.2. Tissue Phantom Tests

The minimum dye concentration required to achieve an SBR of 2 in the tissue phantom was tested over multiple tissue depths, camera working distances and fluorescence excitation intensities. Results are summarized in Table 1A, in Appendix A.


The 0.125 mL volumes were excluded from the results due to a very large amount of variance between measurements. The minimum dye requirements to achieve an SBR of 2 at a 20 cm working distance are plotted below, Figure 3.6. Plots for the 40 and 60 cm cases progressed similarly to the 20 cm plot in Figure 3.6, however at different relative amplitudes, and were included in Appendix A.

Figure 3.6 Minimum dye concentrations required to achieve an SBR of 2 in tissue phantom at a working distance of 20 cm. Readings were taken at a range of fluorescent inclusion volumes, inclusion depths in the tissue phantom and excitation intensities. Results indicate that the 1 mW/cm2 excitation intensity provided the lowest minimum dye concentrations to achieve the desired SBR in all cases.

Unlike in the previous study on volumes of fluorescent solutions in a dark room, working distance had a more significant effect on fluorescence detection in tissue phantom. The mean SBR values recorded at 20 cm were significantly less than those found at the 40 or 60 cm working distances. Fluorescent intensity readings tended to decrease with increasing working distance, however the background readings decreased at a faster rate. Results determined at 40 and 60 cm working distances were not found to be significantly different from each other, as determined via Tukey post-hoc test, Appendix D.1.2.

Varying the excitation intensity also created differences in detection sensitivity, though not always significant ones. Unlike in the dark room study, increasing excitation did not always translate to improved fluorescence detection. Optimal detection results were typically attained using an excitation intensity of 1000 µW/cm2 for all working distances and tissue depths. The 500 and 2000 µW/cm2 intensities provided the next best results, while the 250 and 4000 µW/cm2 intensities provided the poorest results. Analyzing all working distances, volumes and tissue depths, no statistical significance was determined between SBR values when using the 500 or 1000 µW/cm2 excitation intensities. Additionally, no significant difference was determined between the results obtained from the 250, 2000 or 4000 µW/cm2 excitation intensities, either. Significance was determined, however, between these two groupings, Appendix D.1.2.

The fluorescent tissue phantoms were imaged successfully, achieving an SBR of at least 2, under each reported tissue thickness, between 0-10 mm, for a well volume of at least 0.25 mL. Well volumes below 0.25 mL provided unreliable measurements when imaged at depths of 5 mm or more, as well as large amounts of variance at all depths. Due to the large amount of error, these volumes were excluded from further analysis. Significant differences between SBR values calculated from fluorescence readings were found between each imaged tissue phantom depth, with the exception of between the 3 and 5 mm depths. Different well volumes resulted in significantly different fluorescence readings in every case.

3.3.2. Fluorescence Guided Surgical Simulation

Surgical simulations were conducted by a total of 11 volunteers, each operating on three separate phantoms. Results from the simulated surgeries are summarized in Table 3.2. No significant differences were determined in any of the measured metrics of evaluation based on whether the surgery was performed by a doctor, student or engineer. Engineers did, however, have the fastest completion times. Evaluation of the metrics attained from surgeries conducted using the head mounted goggle system versus surgeries conducted using the stand mounted system revealed two significant differences via ANOVA analysis. The measured fluorescent intensities of the resected fluorescent inclusions were significantly greater when using the goggle versus the stand mounted system. Fluorescent intensities measured from the excavated margins were found to be significantly less when using the goggles, as well. The measured time of surgical completion, while not statistically significant, was on average less when using the goggle, particularly with the engineer group. Resected tissue weight was also lighter on average when using the goggle versus the stand, however the difference was not found to be significant. Fluorescent intensities measured from both tissue phantom portions were also significantly greater than the intensity of the negative control phantom, and also significantly less than the intensities from the positive control.


Table 3.2

Results from simulated surgeries. The fluorescent intensity of the resected tissue was found to be significantly greater when using the goggle versus the stand mounted system, and the margin intensity was less. Time to completion and resected tissue weight values were not significantly different between the two systems, however the goggle time and weight were less on average.

            Time (s)           Weight (g)         Margin Intensity (%)   Resected Intensity (%)
Surgeon     Goggle   Stand     Goggle   Stand     Goggle   Stand         Goggle   Stand
Doctor      16.36    16.05     7.32     7.56      22.50    28.70         81.19    69.19
Student     15.15    15.95     7.66     7.77      23.97    33.42         79.10    74.88
Engineer    11.88    12.78     7.25     7.44      27.30    27.63         80.44    69.76
Average     14.46    14.93     7.41     7.59      24.59    29.92         80.24    71.28
STD          1.89     1.52     0.18     0.13       2.01     2.51          0.86     2.56
Sig Diff         No                 No                 Yes                    Yes

3.3.3. Resolution Testing

Resolution testing of the imaging system was conducted, with results summarized in Table 3.3. Determining resolution based on visual confirmation, i.e. three distinct black bars, produced finer resolution results than were found when setting a 20% contrast limit. In all cases, and at all distances, the minimum resolution was less than 0.5 mm. When using a 20 cm working distance, the resolution was 0.125 mm or less.


Table 3.3

Resolution limit of the goggle cameras. Measurements were taken using a USAF 1951 Resolution Target at three working distances. The smallest bar pattern on the target was considered discernable if the calculated contrast between the adjacent dark and light bars was at least 20%, or if the 3 adjacent bars appeared as troughs on a cross-sectional plot of the pattern (see Figure 3.4). The line width of the smallest detectable pattern was then set as the system resolution limit.

Criterion         Working Distance (cm)   Group/Element   Resolution (µm)
20% Contrast      20                      G2E1            125
                  40                      G0E6            280.62
                  60                      G0E2            445.45
Visible on Plot   20                      G2E3            99.21
                  40                      G1E2            222.72
                  60                      G0E4            353.55
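
The Group/Element entries in Tables 3.3 and 3.5 map to line widths through the standard USAF 1951 target relation (a property of the target itself, not a result of this work):

$$ \text{resolution (lp/mm)} = 2^{\,G + (E-1)/6}, \qquad \text{line width (µm)} = \frac{500}{2^{\,G + (E-1)/6}} $$

For example, G2E1 corresponds to 2^2 = 4 lp/mm, i.e. a 125 µm line width, in agreement with the table.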

Evaluation of system depth perception resolution produced results similar to those found in Chapter 1. The depth map frames constructed from stereo image pairs showed a significant difference in distances when the measured adjacent stacks had a 2 mm difference in height, as measured from the table top. Attempts made to characterize system depth resolution at height differences of less than 2 mm using 3D printed test patterns proved unsuccessful.

3.3.4. Display Testing

Our three AR displays were tested for their ability to accurately deliver fluorescence and spatial information to the user. The first set of tests asked 11 users to observe and identify minimum visible fluorescence, spatial resolution and depth perception as seen through each of the three tested displays. Results have been summarized in Table 3.4. No volunteers could clearly identify any of the fluorescent targets when excited using a 250 µW/cm2 intensity. Conversely, all volunteers could see all of the fluorescent targets at all working distances when excited using a 2 or 4 mW/cm2 intensity. Minimum fluorescent detection limits were therefore identified between the 500 and 1000 µW/cm2 intensities at the various working distances, Table 3.4. The detection limits were set by majority vote, and a majority was found in each case.

Table 3.4

Minimum observable fluorescent emissions as seen through the 3 tested wearable displays (M: MyBud, V: Vuzix and L: LCD). The minimum dye concentration and excitation intensity is indicated for each working distance and phantom volume. All marks were made based on a majority vote by a panel of 11 users. Overall, the Accupix MyBud display achieved the best results.

Excitation (µW/cm2) Working 500 1000 Volume (mL) Distance (cm) Dye Concentration (nM) 62.5 125 62.5 125 0.25 MLV 0.5 ML V 20 1 M LV 2 M LV 0.25 ML V 0.5 MLV 40 1 MLV 2 M LV 0.25 MLV 0.5 MLV 60 1 MLV 2 MLV

Resolution testing was conducted by having each volunteer identify the smallest bar pattern in which they could clearly discern three black lines, Table 3.5. The Accupix MyBud again achieved the best results, with a clear improvement in resolution, while our home-made LCD display provided the worst results, which was expected since it also had the lowest pixel resolution.

Table 3.5

Optimal system resolution as seen through each of the AR displays. Volunteers selected the smallest bar pattern in the USAF 1951 Resolution Target in which they could distinctly discern three adjacent black lines. The Accupix MyBud achieved the best results.

Display   Distance (cm)   Group/Element   Resolution (µm)
MyBud     20              G1E5            157.49
          40              G0E3            396.85
          60              G0E1            500
Vuzix     20              G1E2            227.72
          40              G0E2            445.45
          60              G-1E4           707.11
LCD       20              G1E1            250
          40              G0E1            500
          60              G-1E2           890.9

Lastly, depth perception resolution testing was conducted in the same manner as described in Chapter 1. The results from this test were the same as those reported previously, as the volunteers unanimously reported the ability to distinguish between adjacent slide stacks with a 1 mm difference in stack height. Additional comments concerning advantages and disadvantages of each display were listed in Appendix A.


3.4. Discussion

3.4.1. Dark Room Study

Multiple assumptions based on findings in the literature, as well as on the early work conducted in Chapter 2, were challenged in this chapter. One expectation going into this study, based on results from Chapter 2, was that the dye concentration required to achieve the tested SBR values would increase with working distance, however this was not always the case. In fact, this was only typically the case with the well plate dilutions. In most cases the required dye concentrations over the three working distances were not significantly different, suggesting a degree of consistency in the cameras’ detection sensitivity within our range of working distances. Significance was only found between the 20 and 60 cm working distances using the 48 well plate volumes, suggesting that the effect of working distance has some dependence on the size and volume of the dilutions. Comparing these results to previous studies also indicated a dependence on the imaging sensor and lens configuration [14, 94, 105, 202]. Lens size and imaging sensor pixel resolution, as well as pixel depth and quantum efficiency, all appear to contribute to the relationship between working distance and fluorescence detection.

The centrifuge tubes revealed a more consistent, though somewhat unexpected, trend. For most excitation intensities, the 40 cm working distance was associated with the lowest required dye concentration to achieve the target SBR values. The reason for this may be that the background measured at 20 cm was higher than at further distances, due to increased detected random scattering. At further distances, the scattering will have dispersed more in different directions. Additionally, the amount of fluorescence detected at 40 cm would still be slightly greater than that detected at 60 cm, due to optical path length, decreasing the amount of dye required to achieve the requisite SBR values. These observations suggest that the well plates exhibited less scatter than the tubes.

The tube volumes also demonstrated a greater degree of fluorescence emissions for each dye concentration, as compared to the well plates. Internal optical scattering of the excitation light along the length of the centrifuge tubes could have also produced a greater amount of fluorescence from the contained dye by effectively increasing the optical path length of the excitation light, thus increasing the likelihood of fluorophore excitation. Also, the added scatter may have broadened the reflected excitation light, allowing it to be detected by the goggle cameras through the NIR bandpass filters. Whether these effects were due in greater part to the tube material or to its orientation with respect to the cameras and the incidence of the excitation light is as yet unknown, however both properties may well have had an effect. Whatever the reasons, the findings indicate that systems characterized using dye dilutions held within different containers, such as test tubes, vials and well plates [14, 67, 92, 94, 100, 105, 168, 195, 202], cannot be compared directly in terms of detected fluorescent intensity or SBR.

3.4.2. Tissue Phantom Study

Conducting fluorescence detection studies on tissue simulating phantoms and under lighting conditions typical of the intended use of the system can provide a more useful characterization [34, 184, 185, 188-192, 203, 204]. In this study, we attempted to further elucidate system performance by assessing multiple fluorescent dilutions at varying volumes, tissue depths and working distances as well as excitation intensities. Varying multiple parameters during characterization provides a greater set of reference information than only testing under select fixed conditions [34, 188-190].

Similar to the dark room experiments using the 48 well plates, only the 20 and 60 cm working distances proved to be significantly different. In fact, the calculated SBRs from the 20 cm measurements were the lowest. Likely, this is due to the increased background resulting from the increased level of scatter detectable when the imaging sensors were placed nearer to the phantoms. As with the dark room volumes, this scatter dispersed more with distance than the more concentrated signal fluorescence. The remaining variables provided results as expected, with the exception of the excitation intensity. Due to the large variability in measured fluorescence as a function of excitation intensity, only the 500 and 1000 µW/cm2 intensities provided significantly different, as well as improved, results. The expectation was that the larger excitation intensities would yield the best results, however it was the 1000 µW/cm2 intensity that provided optimal fluorescence detection. We believe this is because the higher intensities increased background emissions as well as bulk scattering of the excitation light, resulting in increased fluorescence excitation and also low-level spectral broadening. When imaging greater dye concentrations, the greater excitation intensity would cause an increase in fluorescence emissions, leading to sensor saturation. As a result, the background intensity grew faster than the fluorescence signal, reducing SBR.

The use of tissue phantoms for surgical simulations has become a more common method for evaluating prototype fluorescence imaging systems [34, 96, 117, 184, 185, 188, 197, 205]. Phantoms are an affordable, convenient and quick method for feasibility studies, however quantitative information is often not collected. The trends observed in this study will guide future expectations for animal and clinical trials.

3.4.3. Display Testing

Display testing resulted in the Accupix MyBud achieving the best results, which was largely explained by user feedback (see Appendix B). Compliments made of the MyBud display included that it had the brightest screens and best contrast. The Vuzix display, conversely, was voted as having the dimmest displays and worst contrast, making it more difficult to discern low levels of fluorescence. The in-house display with the LCD screens was voted as having the largest viewing area, but poorest resolution, limiting the device’s ability to display as much detail as the other tested units. Depth perception sensitivity results were the same on all displays, however user feedback indicated a preference for the Vuzix display in terms of stereo perception. Preference for this display for stereo viewing was due to the adjustable baseline distance between the view screens, allowing users to more accurately position each display in front of their eyes, resulting in more even viewing of the stereo frames. One criticism of the MyBud display was that it was difficult to position for even viewing between the left and right eye, reducing stereo perception.

Display testing was ultimately more qualitative than quantitative. Other techniques for display evaluation have been developed to provide a more quantitative assessment [117, 206]. Such methods rely on imaging the display with a single camera, set to mimic human eye function, and analyzing the recorded images for parameters including fluorescence detection, spatial resolution and frame rate. Using a single common camera allows various displays to be analyzed from a common reference point. Assumptions are made, however, regarding how well the instrument mimics human eyesight. While not as quantitative, another study tested display brightness to optimize contrast for an optical see-through display [97]. The feedback obtained from the testing here was valuable in that it provided direction for future work in constructing an AR display for the viewing of stereo medical scenes. Combining the advantages of each display has afforded us the opportunity to create an optimal alternative moving forward.

3.5. Conclusions

In this chapter, we updated the imaging goggle system with more sensitive CCD imaging sensors and fully characterized it for NIR fluorescence sensitivity and resolution.

In the process, we found that different volumes as well as different shapes and containers can result in different detected fluorescent intensities during characterization. Even in tissue phantom, the optical properties of the phantom and the volume of fluorophore can cause variations in the detected intensities. Therefore, greater standardization, or at least more thorough reporting, is required when publishing results.

Simulated fluorescent tissue resections were conducted using the goggle system and using a stand-mounted system with equivalent detection capabilities. Use of the goggle system resulted in less non-fluorescent tissue being removed during excision, and a greater portion of the fluorescent tissue being removed. Additionally, three different AR displays were compared for their ability to deliver the imaged fluorescence intensity and resolution to the user.


CHAPTER IV

APPLICATION OF A DENSE FLOW OPTICAL TRACKING ALGORITHM WITH

PULSED LIGHT IMAGING FOR ENHANCED FLUORESCENCE DETECTION

Chapter 4 directly addresses Aim 1, in particular with respect to fluorescence detection sensitivity. In this chapter, methods for improving detection sensitivity, demonstrated with protoporphyrin IX (PpIX), are discussed in relation to instrumentation and software analysis of imaging frames.

4.1. Introduction

4.1.1. Enhancing Fluorescence Imaging for Clinical Application

Many methods have been used to improve fluorescence detection for tumor or suspect lesion identification. The most basic of these include the use of software based thresholding and hardware based optical filters to separate the fluorescent emissions from background reflections [25, 59, 70, 207, 208]. The drawback of using these methods alone is a significant reduction in the ability to accurately localize the fluorescent source on the anatomical landscape. Remedies to this pitfall have come in the form of co-registration of the fluorescent information to background data which is simultaneously captured on a

separate unfiltered imaging sensor. Such techniques have been implemented through a variety of means, including beam splitter hardware [35, 65, 100, 169] and software based registration [94, 117, 208]. A simpler, though less discrete method, involves the use of a broader imaging filter which allows some near-emission wavelength light into the excitation light source [14]. While these methods help to resolve the localization issue, they do little to improve detection sensitivity in a realistic environment such as a surgical suite.

Additional works have complemented single-band fluorescence detection by analyzing multiple wavelength bands which are unique, or prevalent, to either the exogenous fluorophore or the endogenous background autofluorescence [92, 122, 209].

Comparing the intensity of each fluorescent band can help differentiate healthy and pathological tissue. Multispectral imaging techniques have also been implemented to differentiate between pathological and healthy tissues based on the inherent interaction of the multispectral light with the tissue [122]. Other studies augment the fluorescent detection with spectroscopic scans of a suspect lesion. Fluorescence spectroscopy is used to confirm the presence of wavelengths unique to the exogenous fluorophore [112, 210,

211]. Chemical spectrographic analysis, such as the Raman technique [212, 213], or reflectance spectroscopy [161, 210] may also be conducted to confirm the fluorescent region’s pathology. Optical coherence tomography (OCT) has also been used with fluorescence to gain greater textural and depth information of the suspect lesion [214, 215].

Lastly, surgical microscopy has been implemented for cellular level analysis of a suspect lesion and to gain in vivo fluorescence histology of the surgical margin following mass resection [25, 57, 58, 122-124].


Pulsed light imaging has emerged as a simple and efficient way to increase fluorescence detection, separating signal from background based on the temporal frequency of fluorescent emissions [40, 95, 103, 104, 216, 217]. Techniques for pulsed light imaging involve synchronizing camera captures with excitation ON and OFF states incurred during light pulsation [92, 95, 105]. In this way, sequential images can be obtained both with and without fluorescence excitation. The images taken during the OFF state of the excitation light source, without fluorescence, serve as a reference background frame to be subtracted from the ON state images, thus effectively removing any non-fluorescence light detected. Other techniques include applying phase modulation to recreate a pulsed style excitation [95], and synchronization of the camera captures with the frequency of the room lights rather than the excitation light [103, 104]. Pulsed light is often used in conjunction with other fluorescence imaging enhancement tools. In particular, the pulsed light method has been implemented successfully for co-registration of fluorescence with color reflection mode imagery [92, 95, 105].

A potential issue arising when subtracting fluorescent and background frames for pulsed light imaging is the appearance of motion artifacts. The imaging systems using pulsed light are typically fixed, stand-mounted, or handheld units used in contact mode. In all cases, target or camera motion is not a typical issue as both the camera and the tissue are stationary. For a mobile intraoperative imaging platform, such as the system described herein, the cameras are head mounted and subject to movement. Even when using relatively high framerates (60-120 fps) head movements and hand motions conducted within the surgical imaging field can result in a visible motion artifact, Figure 4.1. The reason is that the positions of objects in sequentially subtracted frames are no

longer at the same location. Increasing the framerate can decrease inter-image object disparity, but will not remove it. Additionally, the use of narrow band filters on the fluorescence imaging cameras, as well as limiting the camera gain, can reduce the amount of detected background information which may lead to an artifact. However, this will also reduce the amount of fluorescent light detected.

In this chapter, we present a dense optical flow point tracking regime for enhanced fluorescence detection in a highly autofluorescent background using pulsed light. The system was characterized with respect to fluorescent detection sensitivity, using a clinically relevant dye, and the results were compared to conventional steady-state (DC) fluorescence imaging. We then characterized the system with respect to fluorescence detection accuracy and detection error.

Figure 4.1 Example of a motion artifact in pulsed light imaging, resulting from a camera translation

during frame capture. (A) Image of a pig ear with green false-colored fluorescent lesion indicated by the red arrow. The fluorescence has been differentiated through pulsed light imaging. (B) The same pig ear captured immediately following a camera translation. Blue arrows indicate the motion artifacts incurred by

subtracting sequential frames acquired during the movement. The disparity in ear location between

sequential frames caused the motion artifact.


4.1.2. Pulsed Light Imaging for Skin Cancer Therapy

One future objective of this study is to improve Aminolevulinic Acid (ALA) dependent photodynamic therapy of non-melanoma skin cancers (NMSCs) and actinic keratosis. Non-melanoma skin cancers have become the most common form of neoplastic growth among Caucasians in the United States, accounting for more than one third of all adult cancers [218-220]. NMSCs include multiple histologic subtypes and generally have a favorable prognosis when treated early. Current treatments include Mohs micrographic surgery, cryosurgery, electrodesiccation and curettage, topical chemotherapy, and topical immunomodulation [221]. Photodynamic therapy (PDT) is an attractive alternative to current treatment options of NMSC because it is less invasive and destructive than many of the other therapeutic options, resulting in less scarring [40, 207,

222]. The administration of ALA for NMSC therapy results in cellular uptake of the ALA and the downstream production of the natural fluorophore protoporphyrin IX

(PpIX), which serves as both a fluorophore and a PDT agent when exposed to blue-violet light [25, 39, 59].

NMSCs have been the target of PDT due to the accessibility of the skin to the excitation light. Typically, topical 5-ALA is applied for 1 to 24 hours to allow PpIX production before exposure to light. The bulk tumor is often surgically excised prior to

PDT, except in the case of actinic keratosis. A variety of light sources may be used to cause the photodynamic reaction, exciting PpIX to a higher energy level, resulting in phototoxicity due to the formation of free radicals and reactive oxygen species [39, 216].

Unfortunately, the current efficacy of 5-ALA in PDT in treating NMSCs varies widely.


Reasons include varying clearance levels in different tumor types, poor light penetration for larger tumor thicknesses and poor visibility during excision due to high background levels resulting from endogenous porphyrins in the skin [25, 39, 216]. Additionally, exposure to the intense blue-violet light used in PDT can cause cellular damage to both healthy and pathologic tissue, because of the ubiquitous nature of PpIX in skin, resulting in significant pain and discomfort for the patient [39, 222].

Pulsed excitation light may prove useful in NMSC imaging for fluorescent guided excisions and PDT treatments [25, 40]. While the native PpIX in the skin will be excited at the same frequency as the tumor, studies have shown that the overall background is reduced, both because the concentration of PpIX outside the tumor is lower and because much of the background results from reflected room light and other excited fluorophores such as collagen and other porphyrins. Pulsing the excitation light can also limit tissue exposure during PDT, reducing phototoxicity [216, 222]. The importance of this is twofold. First, the amount of damage to healthy tissue is limited, and corresponding pain reduced [40].

Healthy tissue damage can also be reduced through light collimation, focusing or the use of coherent light sources [222]. The second benefit is the reported increase in optical depth penetration, resulting in deeper tissue treatment and greater PDT reactions [40].


4.2. Materials and Methods

4.2.1. Computation

Camera control, display output and coordination of light pulsation were integrated on a mini PC (NUC6i7KYK Intel WA, USA). Computer specifications were as follows:

32 GB 2133 MHz DDR4 RAM, a 500 GB SSD, and an Intel® Core™ i7-6770HQ processor at 2.6 GHz per core with an Intel® Iris™ Pro 580 integrated graphics card. To achieve frame rates in excess of 15 fps and movement latency of less than 0.1 s, GPU computing was implemented using the integrated graphics card.

A software package was designed for camera control and capture, pulsed light integration, point tracking, fluorescence identification and display output. All software was written using Python 2.7 and associated programming libraries, including the Open Source Computer Vision (OpenCV) library. The point tracking and fluorescence identification algorithms, located within the main camera capture and display loop, were sent to the GPU for processing using the Just-In-Time (JIT) compiler within the Numba libraries (Anaconda, TX, USA).

4.2.2. Instrumentation

A redesigned version of the stereoscopic fluorescence imaging camera module was used here. Previously, twin CCD sensors were implemented (Chameleon 3 FLiR, BC,

CAN) for fluorescence imaging. However, the frame rate was effectively halved in this application, due to the combination of fluorescence and non-fluorescence images using the pulsed light regime. To preserve real-time imaging (> 15 fps) it behooved us to integrate

either a faster sensor, or to reduce the utilized imaging area, and therefore bandwidth, of the current sensors. Since high resolution was important as well as high speed, we selected new CMOS imaging sensors (MC023MG-SY Ximea, DEU). The sensors operated at a maximum of 170 fps at a full resolution of 2.3 MP. In this study, however, the cameras were set to capture images upon receiving a trigger signal from the computer. The cameras were connected to the computer via USB3.1 connections. Camera parameters were set accordingly: 10 dB gain, 1.2 MP resolution and 10 ms exposure. Fluorescence detection was optimized using high-efficiency, low distortion machine vision lenses (M118FM08

Tamron, JPN). Narrow band filters centered at 632 nm were applied to minimize out of band noise (65-166 Edmund Optics, NJ, USA).

4.2.3. Illumination

A 405 nm LED emitter (M405L3 & SM2F32-A ThorLabs, NJ, USA) was used for fluorescence excitation. The excitation bandwidth was limited between 400 – 450 nm using a bandpass filter (84-781 Edmund Optics, NJ, USA). Additionally, white light for background/anatomical imaging was provided using a white LED emitter, collimated and focused through an adjustable aspheric lens with 400 – 700 nm anti-reflection filter

(MNWHL4 & SM2F32-A ThorLabs, NJ, USA). To prevent contamination of fluorescent emissions with similar reflected wavelengths from the white light source, a 632 ±5 nm bandpass filter was applied to the LED. Both LEDs were operated using adjustable output driver circuits (LEDD1B ThorLabs, NJ, USA).

Light pulsation was triggered using a simple microcontroller timing circuit, implemented on a consumer grade microprocessor developer board (Uno Arduino, ITA).


The circuit was programmed to output a square waveform with a 50% duty cycle, a pulse width of 20 ms and an amplitude of 1 or 0. Circuit wiring between the microcontroller board, the LED excitation light source and the computer followed the diagram below,

Figure 4.2. When the output pulse train from the microcontroller reached a high state

(value = 1), the excitation light was connected to power. During the waveform’s low pulse state (value = 0), the light remained off. The microcontroller was also connected to the computer via USB for power and data communication. Using the PySerial library, we were able to read in the output waveform’s current pulse state, during each loop of the cameras’ operational code. Whenever a pulse state was read in, a camera capture was triggered after a brief (10 µs) delay to allow the LED to reach a full ON or OFF state. The returned image was then classified as High or Low, depending on whether the returned state was 1 or 0, respectively.
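To make the triggering sequence concrete, the following is a minimal sketch of how the pulse state could be read over USB serial (via PySerial) and used to trigger a synchronized capture. The serial message format, port name, baud rate, and the trigger_capture callable standing in for the actual Ximea camera grab are illustrative assumptions rather than the exact implementation used here.

import time
import serial

SETTLE_DELAY_S = 10e-6    # allow the LED to reach a full ON or OFF state

def read_pulse_state(link):
    # The microcontroller is assumed to send '1' or '0' once per pulse edge.
    raw = link.readline().strip()
    return 1 if raw == b"1" else 0

def capture_synchronized(link, trigger_capture):
    """Grab one frame in sync with the current pulse state.

    trigger_capture is a caller-supplied callable wrapping the actual
    (software-triggered) camera grab; it is not specified here.
    """
    state = read_pulse_state(link)        # current excitation state (1 = ON)
    time.sleep(SETTLE_DELAY_S)            # wait out the LED transient
    frame = trigger_capture()             # synchronized frame grab
    return frame, ("High" if state else "Low")

# Example usage (port name and baud rate are assumptions):
# with serial.Serial("/dev/ttyACM0", 115200, timeout=0.05) as link:
#     frame, label = capture_synchronized(link, my_camera_grab)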

Figure 4.2 Diagram depicting system connections and basic usage.


Sequential High and Low state images were subtracted, Low from High, to return a difference image. Pixels containing fluorescence information appeared bright in the result from the subtraction. Speckle noise also appeared due to frame-to-frame variation in pixel values, and was treated with an erosion filter. Additional background was treated using a low-threshold binary filter on the difference image, and the binary output was used for point tracking. The algorithm is described graphically in Figure 4.3.
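The subtraction and cleanup steps above could be expressed with OpenCV roughly as follows; the erosion kernel size and threshold value are illustrative, not the tuned values used in this work.

import cv2
import numpy as np

def difference_mask(high_frame, low_frame, noise_thresh=2, kernel_size=3):
    # Subtract the Low (excitation OFF) frame from the High (excitation ON) frame;
    # cv2.subtract saturates at zero, so non-fluorescent pixels go dark.
    diff = cv2.subtract(high_frame, low_frame)

    # Erode to suppress single-pixel speckle noise from frame-to-frame variation.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    diff = cv2.erode(diff, kernel, iterations=1)

    # Low binary threshold: surviving bright pixels are candidate fluorescence.
    _, binary = cv2.threshold(diff, noise_thresh, 255, cv2.THRESH_BINARY)
    return diff, binary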

Figure 4.3 Signal pulse trains from the microcontroller to the PC and the excitation LED, and the trigger signal from the PC to the camera. The pulses from the microcontroller to the PC and LED were of

the same width, but the LED waveform was delayed by 10 µs from the PC bound waveform to allow for

data transfer time. Camera captures occurred after each high and low pulse was received from the

microcontroller following a 20 µs delay to account for LED rise and fall times, in order to avoid frame capture during a transient state. Camera exposure was set to one half the pulse width for the same reason.

Tests were conducted to count the number of missed states. A missed state is here defined as a high capture taken by the camera during a low or transient pulse state, or vice versa. The presence or absence of fluorescence in each sequential frame was indicated by the


LED pulse state, and was confirmed using an independent light meter operated in synchronous alignment with the imaging system (i.e. they were both operated at the same time). At each camera capture, the intensity of the LED pulse was read. A high pulse corresponded to a light meter reading of at least 1 mW/cm², and a low pulse or transient state corresponded to any lower readings. The output from the point tracking and fluorescence labeling regime at each capture time point was compared with the determined pulse state. The imaging system was worn for this test, and the number of missed states was counted over a 2-minute period, beginning after the first several seconds of operation.

Two different operating modes were tested in this way, including a low motion test in which the wearer simply observes the scene and a high motion test during which the user moves about. Each test was averaged over 5 trials.

4.2.4. Fluorescent Point Tracking

Point tracking was also implemented using the OpenCV libraries. In order to properly track which points were pulsing in sync with the excitation light source, all points in the frame, or at least all points within a select region, must be tracked. A sparse flow optical tracking algorithm, such as the Lucas Kanade method [223, 224], proved to be an invalid option in this case, as it did not track all fluorescent pixels. Therefore, we implemented the Farneback dense flow optical tracking method to track pixel flow between imaging frames [224, 225]. The pixel motion, described as positive or negative changes in the x and y coordinate plane, designated Δx and Δy, was used to determine each pixel’s position in the current frame, given that each pixel location was known in the previous frame. All pixel positions were known during the first frame of camera capture,

corresponding to their original XY coordinates. In each of the following sequential frames, each pixel's coordinate changed according to the equations:

$$x_c = x_p + \Delta x_{c-p}$$

$$y_c = y_p + \Delta y_{c-p}$$

Where c was the current pixel coordinate and p was the previous coordinate in either the x or y axis. The current position of each pixel was monitored, and its corresponding binary value (1 or 0, dependent on the presence or absence of fluorescence, respectively) was recorded. Frames were captured and binary values recorded for each pixel over several loop cycles (N) before the fluorescence identification algorithm began. Pixels which alternated between high and low states in sequence with the signal from the microcontroller over the last N recorded loops were determined to be fluorescent. The locations of the fluorescent pixels in the current frame were overlain with a false color to highlight them for viewing.
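A simplified sketch of this dense-flow pulse check is shown below, using OpenCV's Farneback implementation to carry each pixel's binary history forward to the current frame. The Farneback parameters, the history length N, and the backward-warping approximation are illustrative choices, not the exact code used in this work.

import cv2
import numpy as np
from collections import deque

class PulseTracker(object):
    """Track per-pixel binary pulse histories across frames using dense optical flow."""

    def __init__(self, n_loops=6):
        self.n = n_loops
        self.history = deque(maxlen=n_loops)   # binary maps aligned to the current frame
        self.prev_gray = None

    def _warp_to_current(self, prev_map, flow):
        # For each current pixel (x, y), sample the previous map at (x - dx, y - dy),
        # the backward-mapping form of x_c = x_p + delta_x.
        h, w = flow.shape[:2]
        gx, gy = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (gx - flow[..., 0]).astype(np.float32)
        map_y = (gy - flow[..., 1]).astype(np.float32)
        return cv2.remap(prev_map, map_x, map_y, cv2.INTER_NEAREST)

    def update(self, gray, binary, pulse_state):
        """Return a mask of pixels whose history alternates in step with the pulse."""
        if self.prev_gray is not None:
            flow = cv2.calcOpticalFlowFarneback(self.prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            self.history = deque((self._warp_to_current(m, flow) for m in self.history),
                                 maxlen=self.n)
        self.history.append((binary > 0).astype(np.uint8))
        self.prev_gray = gray
        if len(self.history) < self.n:
            return np.zeros(gray.shape, np.uint8)
        expected = np.arange(self.n) % 2      # ideal 0,1,0,1,... alternation
        if expected[-1] != pulse_state:       # align the pattern to the newest state
            expected = 1 - expected
        stack = np.stack(list(self.history), axis=0)               # (N, H, W)
        fluorescent = np.all(stack == expected[:, None, None], axis=0)
        return fluorescent.astype(np.uint8) * 255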

4.2.5. Fluorescence Sensitivity

Fluorescence sensitivity measurements were conducted in both pulsatile and non-pulsatile mode on stationary targets, with the goggle cameras fixed at a 40 cm working distance. The purpose of these fluorescence tests was to compare the sensitivity of the system in both modes of operation, to determine whether pulsed imaging improves fluorescence detection. Tests were conducted using 2 mL tissue phantoms prepared as described in Chapter 3, using intralipid and India ink as the scattering and absorbing agents, respectively. Varying concentrations of PpIX were dissolved in DMSO and added to the

phantoms, in the following concentrations: 0, 3, 6.25, 12.5, 25, 50, 100, 200 and 300 nM.

Phantoms were dispensed into 24-Well plates (Eppendorf, DEU) for analysis, spacing phantoms to prevent cross contamination of fluorescent light emissions. Fluorescent emissions were induced using excitation intensities of 4000, 2000, 1000, 500 and 250

µW/cm². The required dye concentrations to achieve signal-to-background ratios (SBRs) of 1.2, 1.5 and 2 were calculated for each excitation intensity. Tests were repeated over n

= 5 trials, and multiple factor ANOVA was used to compare pulsed and non-pulsed results for significant differences (CI = 95%, α = 0.05). Additionally, post-hoc analyses were conducted using a Tukey test to elucidate pulsed versus steady state grouping.
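For illustration, a comparable analysis could be scripted with pandas and statsmodels as sketched below; the file name, column names, and long-format data layout (one row per trial) are assumptions, not the actual analysis script used in this work.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format table: columns 'mode' (pulsed/steady),
# 'intensity' (µW/cm²), and 'min_conc' (minimum detectable concentration, nM).
df = pd.read_csv("sensitivity_trials.csv")

# Two-factor ANOVA: detection mode and excitation intensity, with interaction.
model = ols("min_conc ~ C(mode) + C(intensity) + C(mode):C(intensity)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Post-hoc Tukey HSD to resolve which mode/intensity groups differ (alpha = 0.05).
df["group"] = df["mode"] + "_" + df["intensity"].astype(str)
print(pairwise_tukeyhsd(df["min_conc"], df["group"], alpha=0.05))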

Evaluation of fluorescence sensitivity limits on mammalian skin was conducted using pig skin (Dave's Market, OH, USA). Solutions of PpIX in DMSO, at varying dye concentrations, were applied topically to the pig skin in 50 µL volumes via cotton swab.

Due to the high level of naturally occurring porphyrins in skin tissue, and therefore a higher autofluorescent background, we tested higher concentrations for this application than for the raw fluorescence detection tests using the well plates: 30, 60, 100, 200, 300, 500, 1000,

2000 and 4000 nM. Imaging was again conducted using both pulsatile and non-pulsatile modes at the same excitation levels reported above, and the results were compared for difference via ANOVA and Tukey test (n = 5, CI = 95%, α = 0.05).

4.2.6. Fluorescent Point Tracking Accuracy

The accuracy of the pulsed regime in detecting real fluorescence pixels, versus noise or motion artifacts, was tested. A cell culture dish (Eppendorf, DEU) was filled with a 300 nM PpIX/DMSO solution. The plate was placed on top of a tissue phantom block,

which was placed on a black, non-reflective surface, Figure 4.4. The cameras were mounted perpendicular to, and directly overhead of, the plate at a 40 cm working distance.

The PpIX solution was illuminated using the pulsed excitation light source. Illumination intensity was set to a level which allowed all visible fluorescence to be uniformly detected at pixel values between 175 and 200 on the gray scale (0-255). In this way, we prevented sensor saturation, and allowed bright pixels to be easily counted within each image. A binary threshold was applied to the image and a simple edge detector was used to determine the outer border of the imaged plate. The number of white pixels within and without the plate borders was averaged over multiple frames (n = 55) to serve as a baseline. Pulsatile imaging was then initiated, using our described method. The number of fluorescent pixels detected within and without the circle of the culture plate was counted at multiple random time points (n = 11) as the goggle cameras were translated in a pattern above the plate. A five-pointed star pattern was used to test fluorescent detection following motion in multiple directions. After every move, counts were taken of the number of fluorescent labeled pixels within the area of the plate. Additional counts were taken within the tissue phantom, but outside of the fluorescent plate, as well as on the dark background of each image. The counts were then divided by the total number of pixels in each region. Counts were taken as a percentage, with the optimal values being 100% for the culture plate and 0% for the phantom and background. The entire motion test was repeated over 5 trials, resulting in n

= 55 frames.
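A sketch of this region-based counting is given below, assuming binary frames in which fluorescence-labeled pixels are white; the plate border is located once on a baseline frame and reused as a mask, and the threshold value is illustrative.

import cv2
import numpy as np

def plate_mask(baseline_gray):
    """Build a filled mask of the culture plate from the largest detected contour."""
    _, bw = cv2.threshold(baseline_gray, 128, 255, cv2.THRESH_BINARY)
    # [-2] keeps the contour list on both OpenCV 3.x and 4.x return conventions.
    contours = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    plate = max(contours, key=cv2.contourArea)
    mask = np.zeros_like(baseline_gray)
    cv2.drawContours(mask, [plate], -1, 255, thickness=-1)
    return mask

def region_percentages(labeled_binary, mask):
    """Percent of labeled pixels inside and outside the plate region."""
    inside = cv2.countNonZero(cv2.bitwise_and(labeled_binary, mask))
    outside = cv2.countNonZero(cv2.bitwise_and(labeled_binary, cv2.bitwise_not(mask)))
    return (100.0 * inside / cv2.countNonZero(mask),
            100.0 * outside / cv2.countNonZero(cv2.bitwise_not(mask)))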


Figure 4.4 Test for the accuracy of fluorescent pixel detection and non-fluorescent pixel rejection.

A fluorescent dye filled cell culture dish was placed on top of a non-fluorescent tissue phantom block. The camera position was translated overhead of the block during imaging to determine whether motion artifacts

were detected.

Using the described methods, the number of fluorescent labeled pixels was also counted at the previous location of the tissue phantom, in the current image, following each move. In this way, motion artifacts could be quantified, since such artifacts appear as fluorescent objects in the subtraction images, see Figure 4.1. Using the processed subtraction frame acquired immediately following each movement, but before the system latency could catch up to the current frame, the artifact could be visualized, if present. The number of fluorescent labeled pixels within the phantom's previous location was divided by the total number of pixels contained in the phantom's area. The process was repeated varying the number of loops over which binary pixel values were recorded to determine

fluorescence (N = 4, 6, 8). Adjustments were made to the binary threshold, used to remove noise, to optimize the percentages. In addition, the average operational framerate of the system and latency were determined during this test.

Following this procedure, the process was repeated using only the synchronized pulse light scheme without the dense optical flow pixel tracking algorithm. A low level threshold was still applied to minimize background noise, and frame rate was set to match the equivalent frame rate determined during the point tracking tests. The test results obtained using the described point tracking method were compared to the results obtained without point tracking, including background noise, false positives on the phantom, motion artifacts and false negatives in the fluorescent region. Tabulated results for each test were compared for significant difference between the point tracking and non-point tracking method, using a single tailed student's paired t-test (n = 55, α = 0.05).
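The paired comparison could be computed as sketched below. SciPy's ttest_rel is two-sided, so the one-tailed p-value is derived from it; the hypothesized direction (point tracking yielding the smaller values) is an assumption of this sketch.

from scipy import stats

def one_tailed_paired_t(a, b):
    """Paired t-test of H1: mean(a) < mean(b); returns (t statistic, one-tailed p)."""
    t_stat, p_two = stats.ttest_rel(a, b)
    # Halve the two-sided p-value when the effect is in the hypothesized direction.
    p_one = p_two / 2 if t_stat < 0 else 1 - p_two / 2
    return t_stat, p_one

# Example usage with two equal-length per-frame result arrays (n = 55):
# t, p = one_tailed_paired_t(tracking_results, pulse_only_results)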

4.3. Results

4.3.1. Pulsed Light Imaging

The system demonstrated the ability to track and capture fluorescent and non-fluorescent images in coordination with the high and low states, respectively, of the LED excitation source, Figure 4.5. Tests were conducted to count missed states in an effort to quantify the efficiency of pulsation coordination throughout the system. A greater number of missed states were counted during the high motion test, which was not unexpected. On average, 21 missed states were counted per minute during low motion observation tests

and 75 missed states per minute were counted during the high motion tests. During these tests, the system averaged a total input frame rate of about 91 fps, half High state and half

Low state, and a combined framerate of 45.5 fps, for the output fluorescent labeled image frame. Therefore, the percentage of missed frames was 0.77% and 2.75%, during low and high motion states, respectively.
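For clarity, these percentages follow directly from the combined output frame rate of 45.5 fps, i.e. 2730 output frames per minute:

$$\frac{21}{45.5 \times 60} \approx 0.77\%, \qquad \frac{75}{45.5 \times 60} \approx 2.75\%$$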

Figure 4.5 Pig ear with topically applied PpIX solution (green), imaged under pulsed excitation.

Clockwise from top left: Low pulse state without fluorescence excitation; High pulse state with excitation; the subtraction result from the High and Low frames, following a threshold and intensity based false color

application; the subtraction result added back to the Low state image.


4.3.2. Fluorescent Point Tracking Accuracy

Accuracy results of the fluorescence tracking regime were summarized in Table

4.1. Results indicate that point tracking incurred a greater degree of false positives in the tissue phantom than pulsed light without tracking. Additionally, the point tracking regime had more false negatives in the target fluorescence as well. Optimal results were found using a low noise threshold with a pixel value of 2, and binary pixel values were tracked and stored over N = 6 loops. The number of loops used to determine whether a pixel was pulsing with the excitation light source was set at 6, due to a significant improvement over

4 loops, while no significant improvement was observed by increasing to 8 loops. The number of false positives and false negatives were tabulated during each test, with and without point tracking. Significance was determined on the means from each test, between the point tracking and non-tracking methods via student's t-test (n = 55, α = 0.05).

Table 4.1

Results of the fluorescence point tracking tests, including mean and standard deviation of the percentage of

false positives in the image background and on the tissue phantom, the percent of false negatives in the

target fluorescence, and the amount of motion artifacts as a percentage of the area of the object that has moved between sequential frames. Results were analyzed for significant difference (Sig Diff) between the

point tracking and non-tracking pulsed light methods.

                                 Point Tracking          Pulse Only
                                 Mean %      STD         Mean %      STD        P-Value
False Positives    Phantom        5.530      0.040        3.390      0.300      2.20E-03
                   Background     0.023      0.005        0.011      0.010      5.08E-02
False Negatives                   0.200      0.040        0.020      0.004      1.17E-05
Positive Identification          99.630      0.370       99.920      0.050      9.79E-02
Motion Artifacts   Target         5.450      0.830      100.000      0.000      6.41E-12
                   Phantom        3.790      0.540      100.000      0.000      3.65E-12


4.3.3. Fluorescence Sensitivity

Tests were conducted to investigate the system’s sensitivity towards the detection of PpIX fluorescent emissions, as well as to determine whether the pulsed light regime provided any significant advantage over steady illumination. Fluorescent intensity readings for each dye concentration and at each excitation intensity were tabulated to determine the minimum dye concentration required for the system to achieve a SBR of 1.2,

1.5 and 2, Table 4.2. Results indicated that pulsed light imaging provided significantly improved detection capabilities over steady state at all SBR values and excitation intensities, Appendix D.2. Based on these results, the minimum concentration of PpIX that the system was able to detect in solution with a SBR of 2 was 13.2 nM using steady state excitation and 5.9 nM using the pulsed light regime at an excitation intensity of 4 mW/cm².

Additionally, the minimum detectable dye concentration on pig skin at the same parameters was 227.8 nM in DC mode and 152.8 nM with pulsed light. On average, the pulsed light regime displayed an improvement factor of approximately 2.25 over the conventional DC, steady state excitation.
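As an example of how the improvement factor relates to the tabulated values, taking the 24-well results at an SBR of 2 under 4 mW/cm² excitation (Table 4.2) gives the ratio of minimum detectable concentrations; the reported value of approximately 2.25 presumably reflects this ratio averaged over the tested conditions:

$$\text{improvement factor} = \frac{C_{\text{steady}}}{C_{\text{pulsed}}} = \frac{13.2\ \text{nM}}{5.9\ \text{nM}} \approx 2.2$$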


Table 4.2

Minimum dye concentrations required (standard deviation) to achieve Signal-to-Background ratios (SBRs)

of 1.2, 1.5 and 2.0 under various excitation intensities. Experiments were conducted using various concentrations of a 2 mL fluorescent solution of PpIX deposited into the wells of a 24-Well plate, and using

50 µL solutions applied topically to pig skin. Exact dye concentrations required to achieve the listed SBRs

were interpolated. Readings were taken using both steady state (DC) excitation and pulsed light. Pulsed

light imaging resulted in lower minimum dye concentrations for all categories listed below.

                                          Concentration (nM)
                               Pig Skin                          24-Well
SBR   Intensity (µW/cm²)   Steady          Pulsed          Steady         Pulsed
1.2   4000                 82.4 (2.6)      42.9 (2.2)      2.5 (0.02)     1.0 (0.03)
      2000                 94.2 (2.9)      59.1 (2.7)      5.1 (0.05)     1.9 (0.03)
      1000                 110.3 (3.1)     70.6 (2.8)      9.3 (0.09)     4.8 (0.08)
      500                  164.8 (2.8)     85.7 (1.8)      18.8 (0.11)    13.7 (0.13)
      250                  315.9 (3.2)     140.9 (3.1)     41.2 (0.4)     27.3 (0.32)
1.5   4000                 139.4 (2.8)     81.5 (2.5)      7.0 (0.05)     2.4 (0.02)
      2000                 152.6 (2.5)     105.1 (3.1)     12.3 (0.11)    4.9 (0.03)
      1000                 209.8 (2.1)     143.6 (3.2)     22.5 (0.25)    9.7 (0.09)
      500                  296.8 (2.9)     221.9 (2.8)     53.3 (0.68)    24.5 (0.25)
      250                  543.1 (4.8)     347.2 (3.3)     111.7 (1.1)    48.7 (0.44)
2.0   4000                 227.8 (2.4)     152.8 (2.9)     13.2 (0.09)    5.9 (0.1)
      2000                 234.0 (2.8)     160.1 (3.0)     25.9 (0.21)    10.6 (0.16)
      1000                 385.6 (3.1)     222.5 (4.0)     48.5 (0.49)    22.1 (0.26)
      500                  829.9 (5.4)     351.4 (3.5)     116.5 (1.3)    46.9 (0.48)
      250                  2477.2 (9.1)    658.8 (6.1)     234.1 (2.3)    105.9 (1.4)

Graphs were also made, tracking the SBR as a function of dye concentration for each of the excitation states, Appendix B. The plots depicted a linear relationship between

SBR and concentration for the 24-well plate tests, with the exception of the 4 mW/cm² pulsed light test, which reached pixel saturation at high concentrations of dye. The pig skin plots also depicted a linear trend at dye concentrations greater than 500 nM.


4.4. Discussion

4.4.1. Pulsed Light Imaging

The presence of missed or mismatched states (e.g. the camera identifies a captured frame as High when it has actually captured a Low state) likely corresponded to a temporary increase in memory usage on the PC or an error in the signal transmission from the microcontroller. Illustrating this effect, the increased number of missed states during periods of increased motion was likely due to the increased processing time required when there was a greater amount of change in pixel values between sequential frames, corresponding to increased memory usage and bandwidth. The change in processing time could have desynchronized the camera trigger from the true pulse state. Additionally, if the PC begins running other processes during imaging, the increase in memory usage could result in decreased framerates and missed states. Also, low USB or other data transmission bandwidth from the imaging sensors could result in delays and low framerates.

Improved performance and system reliability (i.e. decreased missed states) could be realized through improved PC bus speed as well as more robust GPU encoding. Parallel processing has been shown to improve system performance elsewhere, when an increased processing load has proven detrimental to efficiency and speed [226-228]. Integration of

JIT, rather than the popular and powerful CUDA library, was conducted for ease of GPU implementation on any integrated or external GPU, not just NVIDIA boards. Also, the system used for this experiment did not have an NVIDIA GPU. Future work could incorporate an external video card and more extensive kernel coding for more efficient

GPU data handling.


4.4.2. Fluorescent Point Tracking Accuracy

Fluorescence tracking accuracy as well as sensitivity may have been reduced due to the presence of porphyrins and other endogenous fluorophores in the skin tissue, which were excited to fluoresce at similar wavelengths as the applied PpIX [56, 122, 209, 211].

The point tracking regime detected background fluorophores in addition to the target fluorescence administered onto the tissue, since both pulsed in time with the excitation light; additional intensity based thresholding was therefore used to limit background detection.

Dye concentration at the target site must therefore be higher than the naturally occurring fluorophore levels found in the surrounding tissue to achieve noticeable contrast. It should be noted that imaging against a high fluorescent background is the primary application for this method of pulsed excitation.

Background autofluorescence was mostly eliminated from the imaging frames by application of a pixel intensity based threshold, applied to the difference image prior to binarization. It was important to conduct thresholding only on the difference image since we wanted to avoid background elimination from the combined image. Thresholding on a binary image would not have been practical. Speckle, or salt and pepper, noise appearing in the background was likely due to random variations in the background pixel intensity above the threshold level, and also aberrant motion artifacts which escaped the point tracking algorithm through processing error or by randomly mimicking the pulse rate. A simple closing function or despeckle operation was sufficient to remove most of these noise pixels.


Noise was also reduced by increasing the number of loops over which pixel intensities were tracked. Using too many loops, with strict adherence to the pulsed regime for segmenting fluorescent pixels from background, resulted in lower frame rates and lower positive identification rates. During these experiments, it was found that random pixel variations, noise or object motion could cause a pixel's recorded values to vary from the ground truth. When implementing a greater number of loops, it was therefore beneficial to allow for an error in the tracked pixel values. For example, when tracking a pixel's binary sequence over 8 loops, the ground truth of high and low excitation states is 01010101, where 0s correspond to Low states and 1s to High states. However, allowing a tracked pixel with one binary value error, such as 01010001, to be accepted as fluorescence may improve positive detection rates. We have found that allowing for an error is more likely to prove beneficial with an increased number of tracked loops (N > 6).

In fact, it was possible that tracking 8 loops, in the Fluorescence Detection Accuracy section of this chapter, did not provide a significant advantage over 6 loops due to the lack of error allowance. Also, the decrease in frame rate resulting from processing a greater number of loops may have contributed to the poorer performance. Errors like this could be remedied through advanced GPU computing in the future. Also, updated dense optical flow point tracking regimes can be tested and compared to the Farneback method for accuracy and speed of fluorescent pixel tracking [229, 230].
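The error-tolerant matching suggested above amounts to a Hamming-distance test against the ideal alternating sequence; a minimal sketch follows, where the history length and the allowed error count are illustrative.

import numpy as np

def matches_pulse(history, last_state, max_errors=1):
    """history: 1-D array of N binary values, newest last; last_state: 0 or 1."""
    n = len(history)
    ideal = np.arange(n) % 2          # ideal 0,1,0,1,... pattern
    if ideal[-1] != last_state:       # align the pattern's final state to the pulse
        ideal = 1 - ideal
    # Accept the pixel if it deviates from the ideal sequence in at most max_errors positions.
    return np.count_nonzero(history != ideal) <= max_errors

# Example: one flipped bit in an 8-loop history (01010001) is still accepted.
print(matches_pulse(np.array([0, 1, 0, 1, 0, 0, 0, 1]), last_state=1))  # True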

4.4.3. Fluorescence Sensitivity

Pulsed excitation fluorescence imaging has demonstrated improved signal detection [40, 103, 104, 217]. In this study, we have similarly demonstrated enhanced SBR

for the detected fluorescence emissions over a range of dye concentrations. The SBR enhancements were due to the separation of the fluorescence signal from the bright background, and subsequent addition of that signal to the low excitation state image.

Further SBR enhancements could potentially be obtained by digitally increasing the brightness of the segmented fluorescence prior to the addition step, or by neglecting the addition entirely, which would have the unfortunate side effect of removing spatial information. Imaging under dark room conditions, or at least very low background conditions, will again improve SBR while also reducing the motion artifact effect by reducing the intensity of the subtracted mismatched images. However, such conditions are not always readily achieved in various practical settings (e.g. OR, clinic, in-transit/expedition medicine) [13, 15, 104]. Applying pulsed light with motion tracking can help improve fluorescence imaging utility under any lighting conditions; however, the effect may be most useful under non-ideal settings with high background. The pulsed light regime demonstrated in this chapter may also be combined with the fluorescence to color image registration discussed in Chapter 5 to further improve medical scene visualization by enhancing the target fluorescence while retaining the background.

4.5. Conclusions

In this chapter we demonstrate improved fluorescence detection using a pulsed light, temporally gated regime of fluorescence excitation and detection. On average, the SBR of detected fluorophores using pulsed light was over 2 times larger than that found using conventional DC excitation. Additionally, we test the system for its accuracy in

discriminating between fluorescence and background during light pulsation as well as the ability of the system to remove the motion artifacts encountered during sequential image subtraction. Results indicate that temporal gating can improve fluorescence imaging systems and that dense flow optical tracking can be a useful tool in target tracking during real-time imaging.


CHAPTER V

MULTIMODAL IMAGING GOGGLE WITH AUGMENTED REALITY COMBINING

FLUORESCENCE, ULTRASOUND AND TOMOGRAPHICAL IMAGING

Aim 3 was the primary focus of this chapter on integrated multimodal imaging.

Microscopy, ultrasound and tomographical (MRI/CT) imaging data were incorporated onto the fluorescence images via fiducial marker registration. The marker and target registration errors were determined, as well as the returned frame rate and processing latencies.

Additionally, the Aim 1 specific goal of fluorescence to color registration was addressed in this chapter, updating the method from Chapter 2.

5.1. Introduction

5.1.1. Single and Multimode Imaging

Limitations on the operational functionality of single-mode medical imaging technologies in guided interventions necessitate the inclusion of multiple regimes to attain a fuller measure of the field. Many intraoperative medical imaging modalities have gained more prominent use in the surgical suite in the past twenty years. Fluorescence imaging

methods have seen a particular increase in research and clinical support ever since the release of the Novadaq SPY in 2006 [106], and later the highly impactful publication of the FLARE imaging system, released in 2009 [72]. The ability to actively or passively target, label and enhance the visibility of blood vessels, lymph nodes and cancerous or pre-cancerous lesions beneath the tissue surface has found use in many operative procedures and surgical planning [11, 27, 28]. A limitation of fluorescence imaging is the inability to provide significant contrast support at tissue depths beyond a few millimeters from the surface. The development and expansion of near infrared (NIR) fluorescent dyes have extended this range to as much as 1-2 cm [41, 44], however alternate modalities are required for comprehensive volumetric imaging.

Tomographical techniques such as MRI and CT provide useful structural and volumetric data at any depth through the living tissue, although they lack the fine resolution of fluorescence and the convenience of ultrasound for real-time imaging [6, 11, 147].

Ultrasound has proven to be a ubiquitous tool in many medical settings, providing quick anatomical readings of soft tissue at depths of several centimeters [16, 17, 139]. Attaining fine resolution in ultrasound images can be difficult, however, and it may prove challenging for the uninitiated to correctly orient the transducer. Multiple studies have been conducted to merge two or more of the aforementioned techniques, leveraging the strengths of one to offset the weaknesses of the other.

5.1.2. Multimodal Image Registration

Each imaging method has its limits. Fluorescence imaging provides excellent fine resolution and selective targeting, however suffers from low tissue depth penetration [13,


15, 33, 34]. Tomographical techniques, such as MRI and CT, have good volumetric resolution but lack the same level of accessibility as ultrasound and the sensitivity of fluorescence imaging [11, 12]. Radiological methods such as PET can aid in selectively distinguishing a pathological target within a tissue volume; however, like fluorescence, PET lacks spatial localization when used independently. Various methods including hardware and software algorithms have been implemented to facilitate co-registration of intraoperative imaging modalities.

5.1.2.1. Fiducial Marker Based Image Registration

Recognition of a unique physical marker through its optically distinct pattern, infrared emission profile or electromagnetic signature allows researchers to designate an image registration site. Typically, there are two methods of marker based registration: optical based and virtual based. Virtual registration tracks and monitors the real-world locations of the registration markers and translates those locations into virtual space, where the registered imaging data can be displayed. Optical based methods may operate similar to the virtual methods, however they incorporate optical imaging, registering the tracked operative imaging data onto the scene recorded by video camera.

A variety of fiducial markers have been implemented for surgical planning or intraoperative guidance. Infrared markers which attach to a target, such as patient anatomy or a medical instrument, have become a popular option, not requiring optical imaging methods for implementation. Two main categories of IR markers have found regular use: those that emit IR light and those that reflect IR light. Markers are tracked by a separate IR sensitive instrument, such as the NDI Optotrak [231], and must remain within

the device's field-of-view (FOV) for continuous tracking. Both marker types have found use with ultrasound probe tracking [140, 232-236] as well as with the registration of tomographical imagery [132, 237, 238].

Optically tracked markers use a distinctive shape or pattern for camera vision based recognition. These markers are typically inexpensive and uncomplicated; however, they must remain in the camera's field-of-view. Additionally, optically tracked markers do not typically rival IR probes or EM trackers for accuracy. Printed patterned markers, such as those incorporated into the Aruco system, provide a robust and very inexpensive solution [239]. Similar methods use other distinctive patterns such as chessboards [240-243], dot arrays [127, 235, 244] or matrix barcodes [245]. Patterned markers have been used for ultrasound tracking for co-registration with video navigation

[236, 246, 247] as well as for tomographical [243, 248, 249] data co-registration. Certain optically tracked registration regimes require the use of fiducials that are visible to multiple imaging modalities. Methods include non-magnetic metal or metallic-filled markers for

MRI or CT registration [250-253] with optical modalities or between pre-operative and intraoperative images, as well as ultrasound sensitive fiducials.

While typically not as accurate as IR tracking methods, EM markers still provide a high degree of acuity without the requirement of FOV imaging. However, EM marker locations must still be oriented with respect to the imaging FOV during optical imaging, which is not a trivial task, particularly when the marker is not located within the imaging frame. Nonetheless, EM has proven to be an effective option for medical imaging co-

registration, both with ultrasound [254-258] as well as tomographic image [259, 260] representation.

5.1.2.2. Feature Based Registration

A wide variety of feature based registration methods for multimodal imaging have been developed, but only a few general selections will be touched on here, as it is a rather broad field, and not the direct focus of this research. For 3D data, registration is aided by acquiring 3D information of the target surface. Different methods of data acquisition include stereoscopic 3D surface mapping [261, 262], structured light projection [145, 243, 263] and 3D surface scanning [264]. The acquired data can be used to map multiple feature points on the target surface for 3D imaging data registration.

Additional methods of feature based recognition involve the use of software algorithms to learn and recognize feature points on an anatomy or instrument for use as registration points [254, 262, 265]. Popular feature recognition algorithms used for medical image registration include SIFT and SURF, sometimes coupled with SLAM for pose estimation, when registering 3D objects [145, 265].

5.1.2.3. Optical See-Through Registration

In the registration methods discussed, images were co-registered in software following marker point determination, and then displayed on a monitor, augmented reality

(AR) head-mounted display (HMD), or projected onto the target anatomy. An additional method of hardware based registration is available, reflecting the medical imagery towards the user by means of a semi-transparent half-silvered mirror, which allows the anatomy behind the mirror to remain partially visible. Particular use for the method has been found

in ultrasonography using the Sonic Flashlight [261, 266], and for intraoperative imaging of

MRI data [259, 267].

5.1.3. Augmented Reality Fluorescence Imaging System with Multimode Registration

In this chapter, we introduced a head-mounted fluorescence imaging system with multimodal incorporation. The imaging system was an update on the model presented in

Chapter 2, implementing new imaging sensors with higher resolution and greater quantum efficiency. Preliminary multimodal data, using the system from Chapter 2, can also be found in Appendix C. A new fluorescence to color imaging algorithm was applied here to aid in fluorescence localization via anatomical integration. Multimodal capacity was integrated via incorporation of an ultrasound imaging system and fiber optic microscope.

Tomographical data was also incorporated using optically tracked fiducial markers for data registration. Herein, we describe and characterize system performance with regards to operational frame rate and lag, accuracy of fiducial tracking and image co-registration as well as microscopic resolution and detection sensitivity.

Additionally, procedural guidance for ultrasound imaging was conducted using a

Convolutional Neural Network (CNN) based image classification scheme. Neural networks have been a hot research topic in recent years, particularly with the recent advancements made on the basic LeNet CNN architecture

[268]. These methods, including VGGNet [269], ResNet [270] and GoogLeNet [271], among others [272, 273], use mathematical operations loosely modeled on the animal visual system to identify images. We implemented a VGGNet CNN architecture to classify ultrasound images.
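As a rough sketch of what such a classifier might look like, a small VGG-style network is outlined below using Keras; the framework choice, input size, layer widths, and number of output classes are assumptions for illustration and do not reflect the exact configuration trained in this work.

from tensorflow.keras import layers, models

def vgg_like(input_shape=(128, 128, 1), num_classes=4):
    # VGG-style stacks of 3x3 convolutions, each followed by max pooling.
    model = models.Sequential([layers.InputLayer(input_shape=input_shape)])
    for filters in (32, 64):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(2))
    model.add(layers.Flatten())
    model.add(layers.Dense(256, activation="relu"))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model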


5.2. Materials and Methods

5.2.1. Optical Imaging and Display

Stereoscopic fluorescence imaging was conducted using twin monochrome CCD imaging sensors (CM3-U3-13S2M-CS FLiR, BC, CAN). Color reflectance mode imaging was conducted using two CMOS cameras (USBFDH01M ELP, SZ, CHN). The CCD fluorescence imaging sensors were fitted with 832 nm bandpass filters to optimize the system for Indocyanine Green (ICG) detection, while the CMOS sensors were equipped with NIR cutoff filters, to prevent IR contamination of the color imagery (84-091 & 84-

107 Edmund Optics, NJ, USA). Lensing included 8 mm focal length C-mount lenses

(M118FM08 Tamron, JPN) for the CCD cameras and glass 8 mm M12 lenses for the

CMOS sensors. Sensors were mounted onto a custom 3D printed housing, Figure 5.1, aligning each like pair of sensors together horizontally, and aligning the CCD sensors vertically over the CMOS. A stereoscopic VR display (Wrap 1200DX Vuzix, NY, USA) was attached to the back of the camera housing via a secured clip. The assembled imaging module was affixed to a dental loupe-style head mount using articulating arm connectors

(Loc-Line, OR, USA).


Figure 5.1 Stereoscopic imaging sensors mounted onto a 3D printed housing. The imaging module

was connected to a loupe-style head mount via articulating modular hose connectors (blue).

Fluorescence excitation of the NIR dye ICG (I2633 Sigma Aldrich MO, USA) was induced using an adjustable focus, heatsink-mounted 780 nm LED emitter with an 800 nm short pass filter (M780LP1, SM1F & FES0800 ThorLabs, NJ, USA). White light reflectance mode illumination was provided using a white light LED emitter with independently adjustable optics and 800 nm short pass filter (MNWHL4 & FES0780

ThorLabs, NJ, USA). Both white light and NIR emitters were connected to separate, independently adjustable LED driver circuits (LEDD1B ThorLabs, NJ, USA).


5.2.2. Ultrasound Imaging

The ultrasound imaging system (Ultrasonix SZ, CHN) was integrated into the computational unit via Data Acquisition Card (DAC) (USB-Live2 Hauppauge, NY, USA) and USB connection. The system utilized a 3.5 MHz convex probe and ultrasound transmission gel (Aquasonic 100 Parker, NJ, USA) applied topically to the target surface to enhance sonic transduction and resultant image clarity.

5.2.3. Computation

The imaging and display modules were connected to a mini PC (NUC6i7KYK

Intel, CA, USA) which controlled the cameras, captured and processed input imaging frames from both the ultrasound and camera modules, integrated multimodal registration via optically tracked fiducial markers and output display frames to the head mounted AR unit as well as to a stand-alone monitor. The system was outfitted with 32GB 2133 MHz

DDR4 RAM and a 500 GB SSD, while the CPU was an Intel® Core™ i7-6770HQ processor operating at 2.6 GHz per core with an Intel® Iris™ Pro 580 integrated graphics card. Camera and ultrasound DAC connections were made using USB 3.0 ports, while the AR display was connected via HDMI.

5.2.4. Camera Calibration

Camera calibration was conducted on all imaging sensors prior to all imaging studies. Calibration served to determine the intrinsic camera properties, eliminate minor distortions and gain information required to later create a disparity map from the stereoscopic images, relating pixel values to real-world working distances. Calibration was

conducted by imaging a black and white chessboard pattern at varying orientations and distances, as described by Zhang [199]. Processing of the captured chessboard frames and calculation of the camera properties leveraged the Open Source Computer Vision library (OpenCV 3.3.1) and two excellent guides published on the internet [200, 201, 274].

Briefly, the distinct corners found between the light and dark chessboard squares were identified, each corner point becoming a node on a virtual grid across the face of the pattern.

Camera properties were calculated from a collection of imaged patterns by comparing each virtual grid to how an optimal grid should appear on an imaging frame at the recorded orientation and working distance. Variations in grid node placement were then attributed to distortion, and may be accounted for when calculating the camera intrinsic properties and conducting camera calibration. The camera matrix and distortion parameters returned from the calibration were used in multimodal data registration to ensure accurate fiducial marker localization and 3D object placement.
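The calibration step can be sketched with OpenCV roughly as follows, assuming a 9 x 6 inner-corner chessboard and a directory of captured calibration images; the board geometry and file paths are illustrative, not the values used in this work.

import glob
import cv2
import numpy as np

PATTERN = (9, 6)                       # inner corners per row/column (assumed)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)   # board-plane grid

obj_points, img_points, size = [], [], None
for path in glob.glob("calib/*.png"):          # hypothetical capture directory
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    # Refine corner locations to sub-pixel accuracy before calibration.
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                               (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    size = gray.shape[::-1]

# Camera matrix and distortion coefficients used later for undistortion and registration.
rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, size, None, None)
print("RMS reprojection error:", rms)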

Additionally, when capturing the same features from both the left and right stereo camera frames the points can be correlated, and used to calculate a disparity map [199,

201, 274-276]. The disparity found when imaging the same real-world point on two stereo cameras, separated by some baseline distance, can be used to estimate the depth of field for that point. The functions implemented using Zhang’s method and the OpenCV libraries use the points captured from the imaged chessboard to develop a correlation matrix between shared points in the left and right camera images [275, 276]. Correlation between the left and right camera points remains valid so long as the camera positions remain fixed with respect to each other, and the optical components are not altered. A depth map of the overlapping regions between the stereo pair was then created. In this study, both CCD and


CMOS sensors were calibrated using the chessboard method, however a depth map was only calculated for the CMOS sensors, whose images contain more trackable features.
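Once the stereo pair is calibrated and rectified, the center-of-view depth later used for registration lookup can be estimated from the disparity map, for example as sketched below; the matcher parameters and the assumption of pre-rectified inputs are illustrative.

import cv2
import numpy as np

def center_depth(rect_left, rect_right, focal_px, baseline_m):
    """Estimate depth (m) at the image center from the stereo disparity map."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    # compute() returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(rect_left, rect_right).astype(np.float32) / 16.0
    h, w = disparity.shape
    d = disparity[h // 2, w // 2]
    if d <= 0:
        return None                      # no valid match at the center pixel
    return focal_px * baseline_m / d     # classic Z = f * B / d relation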

5.2.5. Fluorescence to Color Registration

The described camera calibration and depth map creation methods, above, were used for determining registration metrics to align the fluorescence imaging data from the

CCD cameras to the color reflectance mode data from the CMOS sensors.

Following calibration procedures, the mounted imaging module was fixed perpendicular to a level bench top. A chessboard pattern was placed in front of, and parallel to, the sensor array. The distance between the imaging sensor plane and the face of the chessboard pattern was continuously monitored using a NIR distance meter (VL6180

Adafruit NY, USA). The chessboard was positioned at distances of 20-60 cm from the imaging plane in 1 cm increments. Image capture was initiated at each incremental distance for all four of the sensors, and the locations of the same four outer corners in each imaged chessboard pattern were recorded for each sensor. These recorded points were then used to calculate a perspective transform matrix between the two sets of stereo images, effectively translating each of the left and right CCD images onto the corresponding CMOS image pair. In this way, the fluorescent and color imaging frames were simultaneously co-registered both on the left and right of the stereo pair. During this process, the monochrome

CCD images were false colored in order to make their location on the co-registered output frames more apparent, Figure 5.2. Co-registered images were analyzed for registration accuracy by inspecting the overlap between the color and fluorescent chessboard images.

In short, this was accomplished by reading the coordinate locations of the co-registered

fluorescence and color image chessboard corners taken at each calibrated distance, and averaging the x and y-plane pixel coordinate differences. If the co-registered frames were found to be accurate to within 1 mm, the perspective transform was saved into a library of transformation matrices, and indexed by the working distance.
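A minimal sketch of the distance-indexed transform library is shown below; the variable names, the rounding to the nearest centimeter, and the overlay blend are illustrative stand-ins for the actual implementation.

import cv2
import numpy as np

transform_library = {}   # working distance (cm) -> 3x3 perspective transform matrix

def add_calibration(distance_cm, ccd_corners, cmos_corners):
    """ccd_corners/cmos_corners: 4x2 arrays of matching chessboard corner coordinates."""
    M = cv2.getPerspectiveTransform(np.float32(ccd_corners), np.float32(cmos_corners))
    transform_library[int(round(distance_cm))] = M

def register_fluorescence(ccd_frame, cmos_frame, distance_cm):
    """Warp the (already false-colored) fluorescence frame onto the color frame."""
    M = transform_library[int(round(distance_cm))]
    h, w = cmos_frame.shape[:2]
    warped = cv2.warpPerspective(ccd_frame, M, (w, h))
    return cv2.addWeighted(cmos_frame, 1.0, warped, 0.7, 0)   # simple overlay blend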

Following calibration, the registrations were tested for accuracy by wearing the goggle system and looking at a different chessboard target. Light from an 850 nm LED array, described in Chapter 2, was reflected off of the chessboard pattern. The fluorescent imaging sensors on the goggle system detected the in-band light when reflected off of the white chessboard squares, but not the black, effectively creating a fluorescent chessboard pattern. The NIR chessboard pattern was false colored to differentiate it from the black and white pattern imaged by the color sensors. The chessboard patterns seen by both sets of imaging sensors were then co-registered and analyzed for mismatch, Figure 5.2. Tests were repeated 25 times at a variety of randomly selected working distances. Registration error was estimated by comparing color and NIR chessboard corner locations.


Figure 5.2 Chessboard pattern with co-registered fluorescent and color images. A NIR LED was

reflected off of the chessboard to make the white squares visible through the NIR filtered fluorescent imaging sensors. The detected NIR squares were false colored blue and registered to the color image frame

using the previously calculated transformation matrix. The correct transformation matrix for accurate

registration was selected using the working distance (lower left) estimated by the concurrently calculated

stereo depth map of the object plane. Registration error was estimated by comparing color and NIR

chessboard corner locations.

During operation, the depth calculated at the center of stereo focus from the disparity map created for the CMOS pair was used as a lookup value for the perspective transform library. When operated within the calibration range, the system would implement a color to fluorescence mode registration corresponding to the calculated

distance to the imaging target, to the nearest centimeter. Registration was only activated when a fluorescent target was detected by the CCD cameras.

Dual fluorophore imaging was also realized by applying two separate bandpass filters onto the two CCD fluorescence imaging sensors. Upon fluorescence detection, the image returned from each sensor was assigned a different false color to differentiate them.

Since each fluorophore, corresponding to one of the stereo fluorescence cameras, is only registered to one of the color imaging frames using the prescribed regime, an additional step was taken to co-register each fluorophore to both sensors. Briefly, the registration points taken during calibration also allowed for the calculation of horizontal registration metrics, between like imaging sensors (e.g. color-to-color). In this way, the two fluorescent images were co-registered, and then registered to each color frame. For dual fluorescence imaging, ICG and protoporphyrin IX (PpIX) were applied to a pig ear. Fluorescence excitation was provided for PpIX using a 405 nm LED emitter chip (M405L3 ThorLabs, NJ,

USA), as described in Chapter 4, and the corresponding fluorescence imaging lens was filtered with a 632 nm bandpass filter (65-166 Edmund Optics, NJ, USA).

5.2.6. 3D Object Registration

Projection of 3D data (i.e. volumetric renderings of MRI/CT) onto the 2D stereoscopic images was conducted in real-time, using Aruco markers as fiducial targets

[239]. Markers were placed in a quadrilateral array around the real-world target of the registration operation. Real-time detection of the markers was realized through the

OpenCV Aruco library, which worked to identify the markers in each frame and returned their corner locations [239, 277]. The inner corner of each marker was selected as a fiducial

point, Figure 5.3A. These points were used to anchor the projection matrix, a cubic 3D volume in virtual space containing the volumetric rendering to be projected onto the 2D image, Figure 5.3B. Registration of the projection matrix was conducted using the

OpenCV libraries to solve the Perspective-n-Point (PnP) problem and find the correct 3D object pose [278, 279], aligning the detected fiducial marker points with corresponding points on the projection matrix, which served as virtual fiducials for the 3D object. In order to achieve correct pose, the registration matrices, including translation, scale and rotation, between the virtual and fiducial points had to be calculated. The correct 3D registration matrix was found using OpenCV functions to estimate the appropriate affine transform between the two 3D data sets [280]. Once the registration matrix was known, the projection matrix could be registered to the stereo scene. The 3D object was then loaded into the projection matrix using the Open Graphics Libraries (OpenGL) [281], which was then used to construct a view matrix, as demonstrated by Mulligan [279]. Viewing of the co-registered 2D camera images and 3D object was also enabled via OpenGL. The 2D camera images were drawn on a frame-by-frame basis as an OpenGL background texture.

Meanwhile, the 3D object was drawn onto the scene within the registered projection matrix.
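A condensed sketch of the marker detection and pose estimation step is shown below, using the OpenCV Aruco module and solvePnP. The camera intrinsics, the marker dictionary, the virtual fiducial coordinates and the choice of which marker corner serves as the inner fiducial are all placeholders; the subsequent affine registration and OpenGL drawing steps described above are omitted.

    import cv2
    import cv2.aruco as aruco
    import numpy as np

    # Placeholder intrinsics; the real values come from the chessboard calibration.
    camera_matrix = np.array([[800.0, 0.0, 320.0],
                              [0.0, 800.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(5)

    # Virtual fiducials: 3D coordinates (mm) of the projection-matrix anchor points.
    object_points = np.array([[-50, -50, 0], [50, -50, 0],
                              [50, 50, 0], [-50, 50, 0]], dtype=np.float32)

    def estimate_object_pose(frame):
        # Detect the four Aruco markers and solve the PnP problem for the object pose.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        dictionary = aruco.Dictionary_get(aruco.DICT_4X4_50)
        corners, ids, _ = aruco.detectMarkers(gray, dictionary)
        if ids is None or len(ids) < 4:
            return None
        order = np.argsort(ids.flatten())
        # One corner per marker is taken as the fiducial point (index chosen for illustration).
        image_points = np.array([corners[i][0][2] for i in order[:4]], dtype=np.float32)
        ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                      camera_matrix, dist_coeffs)
        return (rvec, tvec) if ok else None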

Figure 5.3 Registration accuracy assessment test for registering a 3D object with the goggle images. (A) The 3D printed cube (blue) with fiducial markers placed for optimal alignment. The markers have been detected by the system, and green dots have been placed at the selected registration corners. (B) The virtual cube (red) in 3D space. The cube is contained within the projection matrix (gray box) whose virtual fiducial points are indicated in red. (C) Co-registered image of the blue cube with the virtual red cube (purple). The red arrow indicates a small registration misalignment where the blue of the real-world cube can be seen. (D) Using the stylus with affixed fifth Aruco marker to assign an interior fiducial point for registration correction.

Registration accuracy was assessed by co-registering rigid bodies with distinct edges which were fully visible in both the virtual and physical world. To this end, a virtual 3D cube was created using Blender and registered to a real 3D printed cube which was imaged through the goggle's color cameras, Figure 5.3A-C. The virtual cube was constructed to the same dimensions as the real cube, 50 mm³. Virtual registration points were set at 50 mm in a diagonal line from each of the virtual cube's bottom corner points, and Aruco markers were placed at similar intervals from the real cube, Figure 5.3A-B. The goggle's imaging module was fixed at a 40 cm working distance, at a 45° angle from the horizontal plane, and directed at a corner of the 3D cube to best visualize misalignments in multiple planes. Registration was activated and observed from the initial 45° perspective view, as well as from an elevated horizontal view of the cube's sides and a vertical view of the registration on the top of the cube. The co-registered 3D projection and 2D camera frames were analyzed for target registration error (TRE) in terms of translation, rotation and scale, Figure 5.3C. Misalignment was calculated by comparing the locations of the co-registered cube corners. Additionally, fiducial registration error (FRE) was calculated by drawing both the locations of the virtual fiducials and their co-registered Aruco corners onto the scene, as in Figure 5.3A-B, and comparing the coordinates' locations. The physical distance between registration points was estimated using the Aruco markers, whose dimensions are known.

In order to correct for placement error, an additional internal fiducial point was incorporated. Like the previous fiducials, this point was indicated using an Aruco marker.

The fifth Aruco marker was affixed to a digital stylus pen, Figure 5.3D. The vector from the tip of the pen to the Aruco marker was rigid; therefore, the tip's location could be determined and tracked when the marker was detected. The stylus communicated with the

PC via wireless IR link. When a button on the stylus was pressed, the current location of

the marker tip, as seen through the goggle cameras, was set as a fiducial location. New fiducial registration points were created by first pointing to, then clicking on the location of the misaligned virtual anatomy. Then, the user clicked on the physical location where the previously selected anatomy should be located. The distance between the first and second selected points was calculated in xy pixel coordinates, and this displacement was applied to the location of the originally detected fiducial marker coordinates. Effectively, this allowed the user to shift the registration in the xy plane, Figure 5.4. Additional corrections could be applied by repeating the process. Poorly placed or unwanted fiducial selections could be removed via keystroke. Object registration in the Z-plane (i.e. height/depth) was also adjustable using OpenGL commands, by pressing the up or down arrows on the keyboard to effectively translate the background imaging plane to a closer or further imaging depth, respectively.
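The correction logic can be summarized by a small helper of the following form (a sketch with assumed names; in the actual system these operations were bound to the stylus button and keyboard events).

    import numpy as np

    class FiducialCorrector(object):
        # Accumulates xy corrections selected with the stylus-mounted marker.
        def __init__(self):
            self.offset = np.zeros(2, dtype=np.float32)
            self.pending = None                   # first click of a correction pair

        def click(self, xy):
            xy = np.asarray(xy, dtype=np.float32)
            if self.pending is None:
                self.pending = xy                 # misaligned virtual anatomy location
            else:
                self.offset += xy - self.pending  # shift toward the physical target
                self.pending = None

        def reset(self):
            # Remove unwanted corrections (bound to a keystroke in the real system).
            self.offset[:] = 0.0
            self.pending = None

        def apply(self, fiducials):
            # Shift an (N, 2) array of detected fiducial pixel coordinates.
            return np.asarray(fiducials, dtype=np.float32) + self.offset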

The accuracy of the detailed registration method is dependent on correct user placement of the fiducial markers, and subsequent corrections using the fifth marker.

Determining correct marker placement for a particular procedure required calibration.

Calibration of fiducial marker placement involved placing the markers at key reference points for the anatomy being imaged. The system was tested on a non-geometric target, using a 3D heart model as an example [282], Figure 5.4. Fiducial markers were placed at strategic locations around a tissue phantom simulating a heart. Marker locations corresponded to the approximate locations of the edge of the patient's aortic arch, apex, right and left atria. Prior to imaging, the 3D heart model was uploaded into the virtual space. The 3D heart was oriented within the projection matrix so that the virtual aortic arch, apex, right and left atria would align with the virtual fiducial markers. Following

registration, FRE was assessed in a similar manner as described for the two cubes.

Additionally, TRE was checked by placing plastic centrifuge tubes filled with a 100 nM solution of ICG in DMSO at physical locations corresponding to the simulated patient’s left common carotid artery and left ventricle, Figure 5.4. The diameter of the tube was measured in the images, and the center line located. The center of the artery was similarly located and used as a reference point for assessing registration alignment. The described stylus was implemented here to correct for misalignments.

Figure 5.4 The fluorescent tissue phantom (A) used for 3D object registration with fiducial markers as detected by goggle cameras. (B) The 3D virtual heart model (Credit: Jana [282]).

Temporal and spatial accuracy of the registration techniques were evaluated.

Frame rates were calculated as the number of completed while loops per second during the operation of each object registration mode. Time latency between when all four fiducial markers were detected and the subsequent registration was also calculated.

The Aruco markers were detected using the color CMOS imaging sensors for testing purposes, and the fluorescence images, when present, were co-registered. Object registration began only when both CMOS sensors detected all four of the placed fiducial markers. Following initial registration, only 2 Aruco markers needed to be visible in each of the stereo camera’s captured frames for registration to continue. When one or more of the markers was no longer visible, the last known coordinate for that marker was used for registration.

5.2.7. Registration Lite

An additional, computationally lighter method of registration was also implemented.

Rather than projecting an entire 3D object onto the imaging plane, a pre-projected 2D image of the object was registered to the Aruco markers. For this example, the 3D heart was again used. The heart was virtually projected prior to imaging onto a blank virtual canvas in 3D space and visualized using an OpenGL window. The visible 2D projection was saved as a 2D image. Multiple viewing angles, orientations and top-down cut-away views of the 3D object were projected in this way, and saved for 2D image registration.

During system function, anatomical points of interest on the projected images were registered to strategically placed Aruco markers as previously described for 3D registration. In this case, however, the co-registration was between two 2D images, rather than a 3D object. Images could therefore be co-registered more simply by using the

OpenCV perspective transform function. A library of pre-projected images of the 3D anatomical models was constructed and saved beforehand, each orientation or cut-away view corresponding to a separate image. The image being registered was switched with a

keystroke during system function. The same tests used for 3D object registration accuracy were repeated for the lite registration method, and the results compared for significant difference. Additionally, the frame rate and latency were measured using the lite method.
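For the lite method, the co-registration reduces to a single 2D perspective warp, roughly as sketched below (illustrative Python/OpenCV; the corner ordering convention and blending weight are assumptions).

    import cv2
    import numpy as np

    def register_lite(camera_frame, projection_img, marker_points, alpha=0.5):
        # Warp a pre-projected 2D view of the 3D model onto the camera frame.
        # marker_points: four fiducial corners ordered top-left, top-right,
        # bottom-right, bottom-left to match the projection image corners.
        h, w = projection_img.shape[:2]
        src = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype=np.float32)
        dst = np.asarray(marker_points, dtype=np.float32)
        M = cv2.getPerspectiveTransform(src, dst)
        fh, fw = camera_frame.shape[:2]
        warped = cv2.warpPerspective(projection_img, M, (fw, fh))
        # Blend only where the warped projection has content.
        blended = cv2.addWeighted(camera_frame, 1.0 - alpha, warped, alpha, 0)
        mask = warped.sum(axis=2) > 0
        out = camera_frame.copy()
        out[mask] = blended[mask]
        return out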

5.2.8. Ultrasound Registration

Ultrasound images were registered directly to the transducer using Aruco markers as fiducial points, Figure 5.5A, similar to [247]. The Aruco markers were mounted onto a

3D printed housing before being affixed onto the transducer. The 3D printed housing was topped with a diffuse plastic pane on which the markers sat. Inside the box were placed a set of white and 830 nm LEDs, along with batteries. When one set of LEDs was switched on, the markers were illuminated from below, improving marker visibility by either the color or fluorescence imaging cameras. The LEDs were driven by an RC circuit with a steady 3 V source, whose parameters were calibrated through trial and error to optimize marker contrast as seen by both sets of cameras under ambient room lighting conditions.

Figure 5.5 Ultrasound transducer with affixed fiducial marker array (A). The registration corners in the array were marked in green. The cropped and color-coded ultrasound image (B) was labeled with virtual fiducial markers in red. The two sets of markers were co-registered using a perspective transform and the fluorescence was false colored purple for visibility.

The four outer corners of the mounted Aruco markers were used as fiducial locations, Figure 5.5A. These points were registered to four virtual fiducials placed onto a transparent virtual tab which was attached to the bottom of each ultrasound image read into the system, Figure 5.5B. The registration alignment was calibrated by altering the location of the virtual fiducials on the virtual tab. Calibration was conducted to determine optimal virtual fiducial marker placements for minimal TRE using a tissue phantom embedded with geometrical liquid fluorescent markers which were visible to both the cameras and the

ultrasound. The calibration phantoms were constructed using the same formula as in the fluorescence sensitivity tests discussed in Chapter 2; however, each of these phantoms had a square well molded into the surface to a depth of 3 cm, Figure 5.5A. Each well was filled with a 300 nM solution of ICG in DMSO.

The ultrasound transducer was then fixed parallel to the horizontal plane (0°) with the Aruco markers facing up. The goggle imaging module was fixed directly overhead at a 40 cm working distance, ensuring both transducer and tissue phantom were in view. The fluorescent dye in the phantom was excited using a 1 mW/cm2 excitation light, and the fluorescence and ultrasound images were co-registered. Alignment was achieved by monitoring the co-registration and adjusting the virtual fiducial locations until the fluorescent wells overlaid precisely with the well locations as seen by the ultrasound,

Figure 5.5B.

Multiple additional tissue phantoms (n=6) were created to test the fidelity of the registration. Each of these phantoms was created with a single square well molded into the surface at a depth of 3 cm, and filled with a 300 nM solution of ICG in DMSO, and the fluorescence was excited at 1 mW/cm2. The goggle imaging module was mounted directly overhead and the ultrasound transducer was observed scanning the phantom, by hand, at imaging distances between 20 and 60 cm to assess registration scalability following calibration. The TRE was evaluated based on the differences in edge location of the square hollow seen in the ultrasound images versus the square fluorescence imaged on the tissue phantom, Figure 5.5. Error was evaluated based on differences in translation (mm), rotation (°) and scale (% size difference) between the co-registered ultrasound and

fluorescence images. To obtain more precise measurement locations, the square in each image was outlined with a best fit square contour [283]. The FRE was evaluated by drawing the locations of the virtual fiducials onto the optical image and comparing their locations with the outer registration points on the Aruco markers.
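A sketch of the best-fit-square measurement is shown below (illustrative Python/OpenCV). It assumes binary masks of the square inclusion have already been segmented from the ultrasound and fluorescence images, and that the two corner lists end up in a consistent order; rotation and scale differences could be derived from the fitted rectangles in the same way.

    import cv2
    import numpy as np

    def best_fit_square(binary_mask):
        # Fit a minimum-area rectangle to the largest contour in a binary mask
        # (e.g. the segmented fluorescent well or the hollow in the ultrasound image).
        found = cv2.findContours(binary_mask.astype(np.uint8),
                                 cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        contours = found[-2]                  # works for OpenCV 3.x and 4.x return styles
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        rect = cv2.minAreaRect(largest)       # ((cx, cy), (w, h), angle)
        return cv2.boxPoints(rect)            # four corner points

    def corner_displacement(corners_a, corners_b):
        # Mean per-corner displacement (pixels) between two co-registered squares;
        # assumes the corner lists are ordered consistently.
        a = np.asarray(corners_a, dtype=np.float32)
        b = np.asarray(corners_b, dtype=np.float32)
        return float(np.mean(np.linalg.norm(a - b, axis=1)))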

5.2.9. Ultrasound Image Classification

Preliminary studies on procedural guidance for ultrasound imaging were conducted using a Convolutional Neural Network (CNN) based image classification scheme. A

VGGNet style classification algorithm was implemented in Python using the Keras neural network library, leveraging TensorFlow. Ultrasound images of a forearm and upper leg were used for preliminary training data. The forearm classifier was trained to recognize 4 different regions within the forearm (distal, mid-distal, mid-proximal, proximal), using 2 transducer orientations (transverse and longitudinal). Several hundred images (700-800) were taken from each location and orientation for training. Images were preprocessed by cropping the ultrasound image and enhancing the contrast using a CLAHE regime.

Training and classification were both conducted on the mini PC running an Intel i7 chip without an external graphics card. Following training, the classifier was operated in real-time (delay <0.1 s) with the ultrasound system. Ultrasound images were classified and labeled, Figure 5.6. Labels were assigned when an image was classified with at least a

50% likelihood to a single category. Training and testing classification accuracy of the regime was quantified.
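The general shape of the preprocessing and the VGG-style network is sketched below with Keras. The crop region, input size, filter counts and layer depth are illustrative placeholders rather than the trained configuration; the 50% decision rule from the text is noted in a comment.

    import cv2
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

    def preprocess(us_frame, crop=(100, 80, 400, 300)):
        # Crop the ultrasound region of interest and enhance contrast with CLAHE.
        x, y, w, h = crop                      # placeholder crop rectangle
        roi = cv2.cvtColor(us_frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return clahe.apply(roi)

    def build_classifier(input_shape=(128, 128, 1), n_classes=8):
        # Small VGG-style CNN: stacked 3x3 convolution blocks followed by dense layers.
        # n_classes = 4 forearm regions x 2 transducer orientations.
        model = Sequential()
        model.add(Conv2D(32, (3, 3), activation='relu', padding='same',
                         input_shape=input_shape))
        model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
        model.add(MaxPooling2D((2, 2)))
        model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
        model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
        model.add(MaxPooling2D((2, 2)))
        model.add(Flatten())
        model.add(Dense(256, activation='relu'))
        model.add(Dropout(0.5))
        model.add(Dense(n_classes, activation='softmax'))
        model.compile(optimizer='adam', loss='categorical_crossentropy',
                      metrics=['accuracy'])
        return model

    # A label is assigned only when the top softmax probability is at least 0.5.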

Figure 5.6 Classified ultrasound image using VGGNet neural network classifier, left, and the transducer location from which the image was captured, right. Classification was conducted in real-time (<0.1 s latency) and at video rate (>15 fps).

5.2.10. Microscopic Imaging

The wide field stereoscopic fluorescence imaging system was supplemented with a microscopic fluorescence imaging module, Figure 5.7. The microscope was built in-house, incorporating a high resolution silicon fiber imaging bundle (IB10001000CTD Schott MA,

USA) as the imaging probe, to deliver excitation and collect emission light from a fluorescent target. The device utilized a 780 nm LED emitter chip connected to a variable output driver to provide fluorescence excitation to the target (M780LP1 & LEDD1B

ThorLabs, NJ, USA). A dichroic mirror (69-883 Edmund Optics, NJ, USA) was implemented to reflect the excitation light towards a focusing objective lens which directed the light onto the proximal end of the fiber imaging bundle, which then delivered the light to the target. The fiber bundle also acted to collect the subsequent fluorescence emissions, passing the collected light to the magnifying objective (MRH00041 Nikon, JPN).

Following the objective lens, emission wavelength light was passed through the dichroic mirror, while reflecting away any returned excitation wavelength light. Passed emission light was then focused onto a filtered CCD camera. The microscopic imagery was viewed in picture-in-picture mode on the goggle display.

Figure 5.7 Fiber optic fluorescence microscope (A, B) passed excitation wavelength light to and from the target via a flexible fiber optic imaging bundle. The beam splitter separated the excitation from emission wavelengths, preventing image contamination at the imaging sensor. Additional bandpass filtration at the light source and imager further enhanced sensitivity. The system achieved fine resolution down to approximately 20 µm (C) and returned intricate fluorescent tissue morphology (D).

Testing of the microscopic module included fluorescence sensitivity and resolution.

Fluorescence sensitivity was determined by imaging the same series of ICG labeled tissue phantoms used in Chapter 3, for characterization of the stereoscopic fluorescent imaging

sensors. For microscopic imaging, only the 2 mL phantoms were imaged, since the effective imaging area of the system was limited to a 1 mm diameter circle. Concentrations of ICG used for imaging included: 50, 62.5, 100, 200, 300, 500, 1000 and 2000 nM, where the 50 nM solution corresponded to the background level. Intensity readings from the microscope were used to calculate the Signal-to-Background Ratio (SBR) at each dye concentration, and the minimum dye concentration required to achieve an SBR of 2 was calculated. Due to light loss resulting from focusing and collimating the excitation light through the fiber bundle, a higher excitation intensity was required. The LED intensity had to also be limited, however, to prevent fluorescence wash out due to excess background scatter. Therefore, the excitation intensity for the microscope was set at 2 mW/cm2, measured at the microscopic objective, to achieve optimal SBRs of fluorescent data.
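The SBR computation and the interpolation of the minimum detectable concentration can be expressed compactly as below (a sketch with assumed ROI masks, and assuming SBR increases monotonically with dye concentration).

    import numpy as np

    def signal_to_background(image, signal_mask, background_mask):
        # SBR = mean intensity within the fluorescent well / mean background intensity.
        return float(np.mean(image[signal_mask])) / float(np.mean(image[background_mask]))

    def minimum_detectable_concentration(concentrations, sbrs, threshold=2.0):
        # Interpolate the dye concentration at which the SBR first reaches the threshold.
        order = np.argsort(concentrations)
        c = np.asarray(concentrations, dtype=float)[order]
        s = np.asarray(sbrs, dtype=float)[order]
        return float(np.interp(threshold, s, c))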

In addition, the resolution of the microscope was determined by imaging a USAF

1951 Resolution Target (R3L3S1P Thorlabs NJ, USA), as described in Chapter 3.

Resolution was determined by selecting the smallest pattern on the target in which three distinct black bars could be visualized with a contrast of 20%.

5.3. Results

5.3.1. Fluorescence to Color Registration

Fluorescence to color image registration accuracy was assessed by measuring and averaging registration error over a range of working distances. The averaged results were

tabulated in Table 5.1. Results indicate translation errors of less than 1 mm in either the x or y axis of the imaging plane. Rotational error averaged 1.36° and scaling error was 1.9%.

Table 5.1 Error measurements for the fluorescence to color registration algorithm. Measurements included translation error in the x-axis (Tx) and in the y-axis (Ty), as well as rotation error (R) and scaling error (S).

        Tx (mm)   Ty (mm)   R (°)   S (%)
Mean    0.73      0.96      1.66    1.65
STD     0.50      0.56      0.91    0.02
Max     2.0       1.9       4       5
Min     0.1       0.1       0.3     0

Registration accuracy was optimal when the cameras were relatively still, as when the user was looking at the target, and worse when the cameras were panning between targets. The reason for higher error during movement was registration latency. The time required for registration to occur following a camera movement was measured at approximately 0.2 seconds on average. Additionally, the frame rate during color registration was measured at 23 frames per second (fps) on average.

Dual fluorophore imaging was also conducted. Detected emissions from ICG and

PpIX were co-registered along with the color anatomical data, Figure 5.8.

Figure 5.8 Dual fluorophore fluorescence to color image registration conducted using the goggle system. The stereoscopic fluorescence imaging cameras have been filtered to simultaneously image PpIX using the right camera (A) and ICG using the left camera (B). The fluorescence was co-registered left-to-right and right-to-left, so that each fluorescence imaging frame would contain both fluorophores. Next, the stereo fluorescence images were registered with color imaging data (C).

5.3.2. 3D Object Registration

Registration of 3D virtual objects to real-world objects in the imaging plane was facilitated by Aruco fiducial marker registration. Object registration is frequently typified by two classes of error: Fiducial Registration Error (FRE) and Target Registration Error

(TRE). Virtual fiducial points, corresponding to 3D virtual object features, were registered to and drawn on the imaged Aruco marker corners, Figure 5.3A-B. The error between the target fiducial marker corners and their corresponding virtual fiducial points in the x and y-axis of the imaging plane was averaged over multiple trials and reported in Table 5.2.

Additionally, TRE was measured between the corners of a 3D printed box as seen in the

imaging plane of the color cameras and the corners of a virtual box registered to the fiducial markers, Figure 5.3D.

Table 5.2 Error measurements for 3D object registration using Aruco fiducial markers. Measurements included translational error in the x-axis (Tx) and in the y-axis (Ty), as well as rotational error (R) and scaling error (S). Both the Fiducial Registration Error (FRE) and Target Registration Error (TRE) were determined.

              FRE                       TRE
        Tx (mm)   Ty (mm)     Tx (mm)   Ty (mm)   R (°)   S (%)
Mean    1.41      1.11        1.74      1.37      0.95    1.01
STD     0.31      0.37        0.47      0.36      0.63    0.00
Max     1.9       1.8         2.4       2         2.5     1.8
Min     0.8       0.5         0.8       0.8       0.1     0.4

Object registration was also negatively affected by camera movement. During camera panning, the registration points were frequently lost temporarily, until after the movement slowed or stopped. The latency of the system between marker detection and object registration was about 0.1 seconds. Frame rate during object registration depended upon the mode of system operation. Operating only the CCD or only CMOS cameras, the frame rate averaged about 22 fps. When operating fluorescent to color registration and 3D object registration the frame rate dropped to approximately 15 fps.

Object registration was also tested using both the color and fluorescent sensors as well as a practical object. The 3D heart model was uploaded into the system and registered to a tissue phantom heart with incorporated fluorescent markers, Figure 5.9.

During the procedure the fifth marker affixed to the handheld stylus was implemented to correct for registration error. Latency between fifth fiducial marker assignment and

registration update was less than 0.1 seconds. The FRE measured during the procedure averaged 1.1 mm, consistent with the findings from Table 5.2. The mean TRE, measured between the center lines of the test tubes and the center lines of the virtual blood vessels to which they were registered, was 1.5 mm.

Figure 5.9 Registration of the 3D heart model to the fluorescent tissue phantom via fiducial markers as detected by the goggle cameras. The initial registration (A) experienced some misalignment (blue arrow) between the fluorescent tube (yellow line) and the left common carotid artery (green line). The misalignment was corrected using the stylus to select the misaligned points, causing the registration to shift (B), and the target anatomies to become more closely aligned.

5.3.3. Ultrasound Registration and Classification

The mean errors of registration were calculated for fiducial and target registration between the ultrasound and fluorescent images. Registration measurements were conducted on a tissue phantom with an ultrasound-sensitive fluorescent inclusion, Figure 5.10.

The ultrasound images were read into the system via DAC connection and registered to the

goggle imaging frames. The mean FRE and TRE values were tabulated in Table 5.3.

Target registration error was calculated between the corners of the best fit boxes applied to the square fluorescent inclusions as they appeared in each of the co-registered images,

Figure 5.10.

Table 5.3 Error measurements for ultrasound to fluorescence image registration using Aruco fiducial markers. Measurements included translational error in the x-axis (Tx) and in the y-axis (Ty), as well as rotational error (R) and scaling error (S). Both the Fiducial Registration Error (FRE) and Target Registration Error (TRE) were determined.

              FRE                       TRE
        Tx (mm)   Ty (mm)     Tx (mm)   Ty (mm)   R (°)   S (%)
Mean    1.22      1.15        1.86      1.81      2.19    3.89
STD     0.35      0.30        0.53      0.55      1.42    2.07
Max     1.8       1.8         2.50      2.55      6.00    7.50
Min     0.7       0.8         0.80      0.80      0.50    1.10

Image classification accuracy was assessed both on training images and a separate set of test images. Testing images were recorded from the forearm opposite to that used for the training image set. Results were presented as a percent accuracy, Table 5.4.

Table 5.4 CNN forearm classifier accuracy based on category. Categories correspond to forearm location (Distal, Mid Distal, Mid Proximal, Proximal and Middle) and transducer orientation (Transverse and Longitudinal).

Orientation               Transverse                                        Longitudinal
Position                  Distal   Mid Distal   Mid Proximal   Proximal     Distal   Mid    Proximal
Training Accuracy (%)     98.6     98.5         98.6           98.8         98.8     99.0   98.7
Test Accuracy (%)         95.6     96.5         96.6           96.8         97.0     97.1   97.1

Figure 5.10 Registration of the color-coded ultrasound image (purple) to the transducer. The fluid-filled hollow in the transducer image aligned with the fluorescence observed by the goggle cameras in the tissue phantom (green).

5.3.4. Microscopic Imaging

The fiber optic microscope was tested for fluorescence sensitivity using the same type of well-patterned tissue phantoms containing varying concentrations of ICG as described in Chapter 2. The minimum concentration of ICG required to achieve an SBR of at least 2 was found to be about 370 nM. A plot of SBR versus dye concentration was

graphed, Figure 5.11. The microscope was also tested for resolution limits using a USAF

1951 resolution target, imaged on top of the fluorescent tissue phantom so as to make the pattern visible through the microscope filters. The minimum resolvable pattern on the target was found to occur at Group 4 Element 6, which had a smallest dimension of 17.54 micrometers.

Figure 5.11 Plot of Signal-to-Background Ratio versus Dye Concentration for the fiber optic microscope, when detecting ICG in a tissue phantom. The background of the phantom was set at a 50 nM ICG concentration, and an SBR of 2 was achieved at a signal concentration of about 370 nM.

The microscope was operated simultaneously with the imaging goggle, displaying the microscopic image in picture-in-picture mode, Figure 5.12. No latency was observed between microscopic image update and goggle camera capture and subsequent display.

The framerates observed during system operation were not significantly different from

those observed during the operation of just the goggle cameras (26 fps for fluorescence imaging alone and 23 fps for fluorescence to color registration).

Figure 5.12 Frame from the fluorescence imaging cameras with the image from the fiber microscope displayed in the top left corner. A microscopic fluorescent boundary not resolved by the goggle cameras is indicated by the green arrow. The red arrow indicates ICG fluorescence in the near infrared spectrum, and the fiber optic probe tip can be visualized in use (orange arrow).

5.4. Discussion

5.4.1. Fluorescence to Color Registration

The fluorescence-to-color registration algorithm proved to be accurate when properly calibrated. Although the stereo cameras could visualize a 1-2 mm difference in depth, see Chapter 3, calibrating the registration at 1 cm increments proved adequate for

accurate registration without much jitter when varying working distances within the calibrated range. Nonetheless, a finer calibration could further reduce co-registration error.

Registration error may be due to system operation at a working distance in between the calibrated values. Additional registration error may also result from miscalculation of the working distance by the disparity map. Lower registration errors were found in some studies utilizing filtered imaging sensors, dichroic mirrors or beam splitter technology [45,

284]. However, this improvement in accuracy comes at the cost of increased size, weight, hardware complexity and expense. Many studies also fail to provide a quantification of registration accuracy [67], while others neglect to discuss methods of accuracy assessment [45, 92].

Alternate software-based registration algorithms have reported errors similar to those of our system, or slightly worse [94, 96, 117, 118]. Lastly, optical see-through methods rely on the alignment between the displayed fluorescence and the user's line-of-sight to the medical landscape. When aligned, these methods often claim perfect registration or make no evaluation of registration accuracy, but do not present a means of quantification [116, 117,

131]. Additionally, optical see-through displays are typically tinted to enhance the contrast of the projected display. The side effect, however, is to mute the colors seen directly through the semi-transparent shield.

The use of dual stereo cameras may not be necessary. Fluorescence imaging could be conducted using only one sensor, and then registered to each of the left and right color frames. However, dual fluorescence sensors provide additional imaging options. The incorporation of two fluorescence sensors widens the FOV, allows for independent stereoscopic fluorescence imaging without color integration and also enables dual fluorophore imaging.

5.4.2. 3D Object and Ultrasound Registration

The use of printed fiducial markers as registration points for ultrasound and MRI imaging proved convenient, cost-effective and accurate. Many systems utilizing some type of fiducial marker based registration have reported both FRE and TRE in the 1-2 mm range in the literature [133, 233, 243, 249, 251, 285, 286]. Systems achieving better than 1 mm registration accuracy often use a more accurate fiducial tracking system, such as the NDI

Optotrak or Aurora, incorporating high resolution/high sensitivity sensors [132, 258, 287,

288]. Disadvantages of these systems include high monetary costs and bulky equipment, most suited for use in an operating room or clinical suite. Additionally, higher registration accuracies may be achieved on fixed, stationary targets, as is the case typically with head and neck image guided procedures [250, 260, 289]. Likewise, registration during freehand or laparoscopic ultrasound was sometimes less accurate, perhaps due to the increase in motion, although the reasons were not thoroughly evaluated [235, 241, 242, 290].

Consistent with the literature, best results were achieved in this study when the imaging system was stationary, as quick panning and rapid movements could cause a temporary loss of registration. Additionally, when more than two markers left the frame, the registration was lost. Wider FOV imaging for the color sensors may have helped to keep the markers in the imaging frame, and higher resolution would aid in more accurate marker identification and localization. Ultrasound registration could also be improved through the incorporation of multiple markers on all sides of the transducer, rather than only on one side.

Further improvements could come in the form of motion compensation algorithms, used to maintain registration accuracy during regular patient motions, such as breathing

[234, 241, 291]. Alternate methods could involve digital video stabilization for jitter reduction, which should also improve marker registration on the user's end. Furthermore, if the target anatomy will change during the procedure, the markers may need to be mounted on a framework, which can rest on a stable surface around or over the patient, rather than directly on top of the patient.

5.4.3. Microscopic Imaging

The fiber optic microscope proved effective in identifying fine fluorescent detail; however, the utility of the device may be further improved through the addition of multiple magnification lenses. Similar devices have reported cellular-level imaging using greater magnification [125]. Adding lensing to the distal tip of the fiber imaging bundle could also improve function by increasing the FOV.

5.5. Conclusions

Multimodal imaging leverages the strengths of each integrated method to offset their weaknesses. In this study, we introduced a head-mounted AR fluorescence imaging system incorporating ultrasound, 3D tomographic imaging and microscopy. The system was tested for image registration accuracy, temporal performance and fiber optic sensitivity. Future work will improve microscopic functionality through additional lensing and analyze methods of improving fiducial marker detection.

CHAPTER VI

REAL TIME 3D IMAGING AND AUGMENTED REALITY FOR FORENSIC APPLICATIONS

In this chapter, the system was applied to forensic analysis.

Aim 1 was herein evaluated with respect to the fluorescent dye fluorescein. The system performance was also evaluated with regard to traditional methods for forensic imaging, partially addressing Aim 2. Lastly, a third version of the microscopy module was integrated and evaluated, partially addressing Aim 3.

6.1. Introduction

Forensic crime scene analysis relies on scientific advances to provide fast and effective means of obtaining evidence. In this chapter, a system for crime scene mapping and the imaging of blood traces is presented. Imaging modalities utilizing chemical reagents, Alternative Light Sources (ALS), infrared imaging (IR) and spectral imaging have been implemented to assist in forensic imaging studies [292-309]. Such techniques have been utilized to identify, enhance or qualify forensic targets including blood stains

[292-301, 303-305, 307, 309], urine [301, 305], semen [301, 302, 305], saliva [301, 302,

305], fingerprints [304, 306, 308], shoeprints [306], gunshot residue [303, 304], obscured or non-visible writing [303], paint [304] and other objects of interest.

The visibility of bloodstains is commonly enhanced through the application of chemical reagents such as hydrogen peroxide or various fluorophores including fluorescein, Hemascein®, Luminol and Bluestar® [292-300]. The sensitivity, advantages and disadvantages of each reagent have been analyzed and compared. Spectroscopy [306-

309], IR [294, 303-305] and ALS [301, 302] have also been used to detect bloodstains without the use of reagents. Additionally, spectroscopy has been utilized in the identification of hand- or shoeprints [306, 308]. Infrared imaging has been assessed in the detection of multiple forensic traces including writing, gunshot residue, liquids and biological fluids [303-305], and ALS have been utilized in the detection of other biological fluids in addition to blood [301, 302].

Augmented reality (AR) and 3D imaging techniques can help investigators in the visualization of crime scenes and forensic data [305, 310-320]. Types of forensic data include crime scene photographs, drawings, witness testimony and descriptions from investigators, all of which have been utilized in creating 3D models of crime scenes [310,

311, 315, 320]. Virtual reality devices have been used to view and even interact with these models. Three dimensional imaging technologies including laser scanners [312, 313], stereo imaging cameras [305, 316-319], and time-of-flight sensors [314] have also been leveraged to create interactive virtual crime scene models. Additionally, studies have

integrated multiple types of imaging data with 3D models, including chemical [320], radiological [315, 320] and IR data [305, 315, 316, 320].

In this chapter, we conducted real-time stereoscopic imaging of simulated crime scenes, using the chemical reagent Hemascein along with its activating reagent, hydrogen peroxide

(H2O2), to identify diluted artificial blood samples. The imaging system used for this study is based on a previous model developed for stereoscopic medical imaging applications (see the previous chapters of this dissertation for details) [321, 322]. The system allowed an investigator to conduct 3D, point-of-view fluorescence imaging. Recorded stereoscopic images were later reconstructed into 3D models of the crime scene. In addition, we incorporated a hand-held fluorescence microscopy unit into the system for analyzing small targets of interest [323]. The system was tested for detection sensitivity of low-concentration blood stains identified by Hemascein applied to multiple surfaces including cotton fabric, carpet, glass and ceramic tile, and vinyl, wood and laminate flooring.

Multiple simulated crime scenes were imaged and translated into 3D models post hoc.

The work in this chapter contributed to the integrated medical imaging field by deploying a fluorescence imaging system with nanomolar detection sensitivity onto a laptop PC as well as on a compact microprocessor unit, and deploying the system outside of a hospital setting. Demonstrated by this deployment was a fully portable, self-contained imaging system operating at high performance standards in a field-based setting.

Performance was demonstrated by detecting low concentrations of a target analyte using an airborne fluorescent tag, a capability with wide-ranging applications. Additionally, we incorporated

high resolution 3D scene reconstruction, and also improved upon the fluorescence microscopy module from Chapter 2.

6.2. Materials and Methods

6.2.1. Forensic Imaging System

The system implemented a pair of stereo cameras (USBFHD01M ELP SZ, CHN) mounted in parallel to an augmented reality (AR) display (MyBud Accupix, SZ, CHN).

The assembly was housed in a 3D printed enclosure and mounted onto a medical loupe style headmount, which allowed the user to position the system over the eyes, enabling line-of-sight imaging, Figure 6.1. The cameras utilized were highly sensitive to green light

(the emission color of Hemascein), and provided high resolution through Sony IMX322

CMOS imaging sensors. Small form-factor M12 lenses (54-854 Edmund Optics, NJ, USA) were used to keep the system lightweight while providing low distortion imaging. The lenses were filtered for Hemascein emissions using 12.5 mm diameter bandpass filters (86-

343 Edmund Optics, NJ, USA) with a center wavelength of 525 nm and a 45 nm bandwidth.

Figure 6.1 Forensic imaging goggle. Red Arrow indicates a 3D printed housing containing stereoscopic imaging sensors and M12 lenses. Blue Arrow points out the stereoscopic display, mounted on the back of the 3D printed sensor housing, in-line with the imaging sensors to provide direct line-of-sight imaging. Green Arrow targets the adjustable medical-loupe style headmount.

Hemascein fluorescence was excited using blue light at 450 nm.

Blue light was generated with a Cree XLamp XP-E2 LED emitter (LED Supply, VT, USA) fitted into an aluminum handheld flashlight body with an adjustable focusing lens. The

LED was filtered to narrow the excitation band using a 50 mm bandpass filter with a 450 nm central wavelength and 50 nm bandwidth (84-794 Edmund Optics, NJ). Adjusting the focusing lens allowed the diameter of the excitation light to be varied along with the intensity, affording the user the ability to optimize lighting conditions. A variable intensity

white light source using a neutral white LED emitter bulb (Ace Hardware, IL, USA) was also built to provide controlled background lighting for when the ambient room lighting was too intense or insufficient. Using both light sources allowed for simultaneous fluorescence and background imaging control.

A hand-held microscope, ProScope HR™ (Bodelin, OR, USA), was integrated into our imaging system via USB connection. The ProScope was used to analyze very small suspect targets which were difficult to identify using the stereoscopic cameras. The resultant imagery was displayed in real-time along with the stereo camera images. For this study, the microscope was used with a ProScope 100X magnification lens (Bodelin, OR,

USA). The microscope was modified to conduct fluorescence imaging on Hemascein-treated blood dilutions. We replaced the white LEDs inside one of the 100X lenses with 5 mm diameter 420 nm LEDs (LEDSupply, VT, USA), so as to provide a direct excitation source for the Hemascein. The power to the LEDs was also rerouted from the

USB port in order to add in a potentiometer, so that the output light intensity could be controlled by the user. Additionally, we fixed another 12.5 mm 525 nm bandpass filter to the back of the lens, in front of the imaging sensor to improve fluorescence contrast.

All components were integrated on one of two computational systems. A laptop computer (Q551L ASUS, TWN) with the following specifications was used for most testing in this report: Intel® Core™ i7-5500U CPU @ 2.40 GHz, with 8 GB 1666 MHz

DDR3 RAM, utilizing an NVIDIA 940M GPU and running 64-bit Windows 10 operating system. Alternatively, a compact, self-contained model was constructed, integrating all components onto an Up Board (AAEON, TWN), which is a small, portable board-level

computer. The Up Board was housed in a 3D printed casing which could be clipped to the belt or headmount, and was powered by a 10,000 mAh USB battery pack. The Up Board specifications were: Intel Atom x5-Z8350 CPU @ 1.92 GHz, 4 GB DDR3L RAM, utilizing Intel HD 400 graphics and running 64-bit Ubuntu 16.04 operating system. The advantage of this version was that it provided the user with greater mobility, as no external cables were required.

The software for the system was written in-house using the Python 2.7 coding language and leveraging OpenCV libraries [170]. The software program operates the cameras, conducts real-time image processing, outputs a display to both the laptop and VR display screens, and records stereoscopic data. Rendering the captured stereo frames into a 3D scene was conducted separately after video capture was completed.

6.2.2. Fluorescence Detection Sensitivity

Our imaging sensors were tested for blood detection sensitivity using the

Hemascein® fluorescent dye kit (Abacus Diagnostics, CA, USA). Serial dilutions were made using artificial blood (CrimeScene, AZ, USA) and deionized (DI) water at the following concentrations of blood to water: 1:1, 1:3, 1:5, 1:10, 1:30, 1:50, 1:100, 1:300,

1:500, 1:1000, 1:3000, 1:5000, 1:10000, 1:30000, 1:50000, 1:100000. All dilutions were applied by either a topical smear using a cotton swab or by droplet applicator onto a variety of surfaces, including: vinyl flooring, vinyl wood flooring, wood flooring, carpet, glass tile, ceramic tile, plastic table top, cotton fabric and ash tree branch. Each dilution was applied

in a 100 µL volume and allowed to dry prior to the application of Hemascein® and H2O2.

A full serial dilution series was applied to each surface and imaged using identical working distances and illumination conditions. Dilutions were imaged using a consistent working distance, defined here as the distance from the imaging sensors to the fluorescence target, of 60 cm. The distance between the excitation light source and dilutions was also set to 60 cm, to match the working distance. The focusing lens of the light source was set to provide an illumination intensity of 250 µW/cm2, as detected by a sensitive silicon photodiode light meter (PM16-120 ThorLabs, NJ, USA). To simulate realistic conditions of use, the target was also illuminated by the white light source set at an average ambient intensity of 60 µW/cm2, equivalent to medium-intensity room lighting as determined using light meter measurements. A positive detection of the fluorescence (i.e. positive detection of a latent blood stain) was determined to occur when the average detected fluorescence was at least twice the intensity of the background (non-fluorescent) surface, also known as a

Signal-to-Background Ratio (SBR) of 2.

The modified ProScope was tested in a similar fashion. The transparent plastic nose cone on the end of the microscope lens was placed onto the dilution site for maximum collection efficiency, as per manufacturer usage guidelines. The LEDs housed inside the microscope lens assembly were set to maximum intensity (500 µW/cm2) for all quantification measurements. Multiple dilution series were tested (n = 5) and fluorescence detection results were averaged for both goggle and microscopic imaging studies.

Goggle camera and microscope fluorescence and reflectance resolution were determined using an Air Force 1951 Resolution Target (R3L3S1P ThorLabs, NJ, USA).

Imaging was conducted under the same lighting conditions as with the dilution series tests,

using 20, 40 and 60 cm working distances for the goggle. The microscope was only used in contact with the substrate, and so was only tested at that working distance. Prior to imaging, the resolution target was sprayed with a 500 nM solution of

Fluorescein dissolved in DI water. The spray was aimed just over the target, so the fine mist would settle onto the surface as micro droplets. The minimum diameter of the detectably fluorescent drops was estimated using the nearby bar patterns as a reference.

Results were compared to the width of the smallest visible set of bar patterns observed by the cameras in reflectance mode. In this way, both fluorescence and reflectance resolutions were determined.

6.2.3. Simulated Crime Scene

Imaging of simulated crime scenes was conducted using a variety of settings including: an office or study with wooden flooring and furniture, a bathroom with ceramic, porcelain and tile surfaces, a dusty basement with cement, mold, dirt and dry leaves, and a set of carpet samples. Diluted artificial blood was applied at various locations within each simulated crime scene by an assistant, using the 1:30 or 1:50 concentration for optimal fluorescence. Excitation light was provided using the described LED source, at a constant setting of 250 µW/cm2 for all samples. Controlled ambient lighting was used for the imaging of each scene to allow for background visualization, which helped with user orientation and blood localization within the environment. The imaging system was worn for each of these studies by a separate user, who did not apply the samples. The user then applied the Hemascein according to the standards prescribed by the manufacturer, and

identified the fluorescent regions using the system. Still images were selected from the saved videos for display.

6.2.4. Image Processing

Minimal processing was conducted on the frames taken from the stereo cameras during quantification aside from resizing the images for display on the laptop screen and

AR headset device. Processing steps during the simulated crime scene imaging included gamma correction to optimize light balance and contrast enhancement using the CLAHE regime. Additionally, a smoothing algorithm followed by an adaptive thresholding regime, implemented using OpenCV libraries, was applied to segment the bright fluorescent blood samples from the background of the image. The fluorescent segments were then brightened, false colored and added back to the original image, further improving fluorescence contrast. Segmentation was not conducted for every trial, to demonstrate the effect that the application of computer vision had on fluorescence enhancement.
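The enhancement pipeline can be sketched as follows (illustrative Python/OpenCV; the gamma value, CLAHE settings and adaptive-threshold parameters are assumptions, since the deployed values were tuned per scene).

    import cv2
    import numpy as np

    def enhance_fluorescence(frame, gamma=1.5, block_size=51, offset=-10):
        # Gamma correction for overall light balance.
        lut = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)],
                       dtype=np.uint8)
        frame = cv2.LUT(frame, lut)

        # CLAHE contrast enhancement on the luminance channel.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        gray = clahe.apply(gray)

        # Smooth, then adaptively threshold to isolate locally bright fluorescent blobs.
        blurred = cv2.GaussianBlur(gray, (9, 9), 0)
        mask = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                     cv2.THRESH_BINARY, block_size, offset)

        # Brighten and false color the segmented regions, then add them back.
        out = frame.copy()
        out[mask > 0] = (0, 0, 255)            # red pseudo-color, as in Figure 6.3
        return out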

Microscopic images were processed with gamma correction and contrast enhancement as well, but also received an additional treatment. The images were recolored using a Jet colormap, provided through the OpenCV libraries, to further enhance fluorescence contrast. Background intensity was found to be minimal, so additional thresholding and segmentation were not required. Frames from the microscope were then resized and displayed in a picture-in-picture (PiP) format on each of the stereo camera frames. When worn, the PiP images would overlap and appear to the user as a single 2D image. The location of the stereo and microscope frames could also be switched, so that the ProScope image occupied the main portion of the screen and the goggle's stereo images were in PiP mode.
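A minimal sketch of the recoloring and PiP placement (illustrative Python/OpenCV; the PiP scale and corner are assumptions):

    import cv2

    def microscope_pip(stereo_frame, scope_frame, scale=0.25):
        # Recolor the microscope image with a Jet colormap and place it
        # picture-in-picture in the top-left corner of a stereo camera frame.
        gray = cv2.cvtColor(scope_frame, cv2.COLOR_BGR2GRAY)
        colored = cv2.applyColorMap(gray, cv2.COLORMAP_JET)
        h, w = stereo_frame.shape[:2]
        pip = cv2.resize(colored, (int(w * scale), int(h * scale)))
        out = stereo_frame.copy()
        out[:pip.shape[0], :pip.shape[1]] = pip
        return out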

6.2.5. 3D Scene Creation

Reconstruction of 3D scenes from stereoscopic imaging data was conducted using previously developed camera calibration and stereo mapping techniques [324, 325] implemented through the OpenCV libraries [326]. A single 3D scene was constructed from a still pair of stereoscopic frames taken using the forensic imaging goggles. For accurate stereo-to-3D reconstruction to occur, the cameras were carefully calibrated prior to use.

Calibration was conducted using the standard chessboard method [324-326].

Intrinsic camera parameters such as the perspective transform matrix (Q) and various undistortion parameters were calculated in this way. The undistortion parameters were used to minimize any optical distortions found during calibration. This step is required to optimize 3D reconstruction, as distortions could provide false depth information. A disparity map was then calculated from two side-by-side imaging frames using OpenCV’s

StereoBM regime [326]. The calculated map allowed the system to estimate the relative distance from the imaging plane to various object planes occurring within both cameras' fields-of-view.

Reconstruction of a 3D scene required the depth values from the disparity map together with a perspective transformation matrix, which provided linear transformation metrics between the left and right cameras. The XYZ spatial coordinates from each pixel in the disparity map (where Z is depth) were transformed by Q and normalized to create a 3D projection. Each projected voxel was associated with a BGR value, taken from the voxel's

corresponding pixel found in the common stereoscopic frame associated with the disparity map. Objects must appear in both imaging frames to have stereo information. Therefore, the unique outer edges of each frame were discarded for disparity map calculation and 3D reconstruction.
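The reconstruction step can be sketched as below (illustrative Python/OpenCV; the StereoBM parameters are assumptions, and the Q matrix is taken from the stereo calibration described above). The returned points and colors can then be written to a PLY file for viewing in MeshLab.

    import cv2
    import numpy as np

    def reconstruct_scene(left, right, Q, num_disparities=96, block_size=15):
        # Compute a disparity map with StereoBM from a rectified stereo pair,
        # then re-project each pixel to XYZ space with the calibration matrix Q.
        gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
        gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
        matcher = cv2.StereoBM_create(numDisparities=num_disparities,
                                      blockSize=block_size)
        disparity = matcher.compute(gray_l, gray_r).astype(np.float32) / 16.0

        points = cv2.reprojectImageTo3D(disparity, Q)   # per-pixel XYZ coordinates
        colors = cv2.cvtColor(left, cv2.COLOR_BGR2RGB)  # color values for each voxel
        valid = disparity > disparity.min()             # discard unmatched pixels
        return points[valid], colors[valid]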

6.3. Results

6.3.1. Fluorescence Detection Sensitivity

Imaging sensors were tested for their ability to detect Hemascein fluorescence emissions on a series of artificial blood/water dilutions. The results were summarized in

Table 6.1. Fluorescence emissions were detected on all surfaces for at least some of the dilutions. It was observed that the brightest emissions detected came from the 1:30 or 1:50 dilutions. Fabrics and carpeting showed the lowest fluorescence intensity, while hard surfaces such as wood flooring and ceramic tile showed the highest.

Similar sensitivity tests were conducted using the modified ProScope device to detect Hemascein emissions from the same applied dilution series. The ProScope performance was similar to that of the stereo cameras, though with somewhat lower detection sensitivity on certain substrates, Table 6.1. Additionally, the detected fluorescent emissions were of lower pixel intensity than the same emissions detected by the stereo imaging sensors.

Table 6.1 Dilution ratio of artificial blood in water versus the material on which they were applied. Boxes were marked 'X' where the goggle cameras were able to detect Hemascein fluorescence, and 'y' where the ProScope was able to detect fluorescence. Positive detection was marked when the fluorescent signal was recorded at an intensity of at least 2 times the background.

Substrate columns: Light Carpet, Dark Carpet, Light Vinyl, Dark Vinyl, Vinyl Wood, Light Wood, Dark Wood, Light Cotton, Dark Cotton, Maple Tree Branch, Plastic Table, Ceramic Tile, Glass Tile

Dilution Ratio
1:1        Xy Xy Xy Xy Xy Xy Xy Xy Xy Xy X  Xy Xy
1:3        Xy Xy Xy Xy Xy Xy Xy Xy Xy Xy X  Xy Xy
1:5        Xy Xy Xy Xy Xy Xy Xy Xy Xy Xy X  Xy Xy
1:10       Xy Xy Xy Xy Xy Xy Xy Xy Xy Xy X  Xy Xy
1:30       Xy X  Xy Xy Xy Xy Xy Xy Xy Xy X  Xy Xy
1:50       Xy X  Xy Xy Xy Xy Xy Xy Xy Xy X  Xy Xy
1:100      Xy X  Xy Xy Xy Xy Xy Xy Xy Xy X  Xy Xy
1:300      Xy X  Xy Xy Xy Xy Xy Xy Xy X  X  Xy Xy
1:500      Xy X  X  X  X  Xy Xy Xy X  X  Xy X
1:1000     Xy X  X  X  X  Xy Xy Xy Xy X
1:3000     Xy X  Xy Xy Xy
1:5000     Xy X  Xy Xy Xy
1:10000    Xy Xy Xy X
1:30000    Xy Xy Xy X
1:50000    Xy X  X  X
1:100000   Xy X  X  X

Additionally, both imaging sensors were tested for optical resolution using the Air Force 1951 Resolution Target, Figure 6.2. The widths of the minimum resolvable bar patterns were reported in Table 6.2 along with the average diameter of the smallest resolvable fluorescent droplets.

Figure 6.2 Air Force 1951 Resolution Target. (A) Reflectance mode imaging with illumination intensity set to optimize contrast in the center grouping. (B) Target with Fluorescein spray settled on the surface in microdroplets (pseudo-colored magenta). White light illumination has been dimmed to enhance fluorescent visibility.

Table 6.2 Optical and fluorescence resolution of the goggle cameras and microscope.

              Working Distance (cm)   Group Element   Resolution (µm)
Goggle        20                      G2E5            78.75
              40                      G1E4            176.78
              60                      G1E1            250
Microscope    0                       G5E5            9.84

6.3.2. Simulated Crime Scene

Multiple simulated crime scenes were imaged, Figure 6.3. Typically, the fluorescence emissions were visible with ambient light from the overhead room lighting.

The exception to this was observed in the bathroom, where room light reflected off of the

ceramic sink rendered the fluorescence emissions undetectable at that location. To remedy this issue, room lighting was turned off and the external white light source with variable illumination intensity was brought in to provide background lighting. The position and intensity of the external light were adjusted to minimize glare from reflective surfaces, balancing observable background intensity against the need to maintain fluorescence detection at an SBR of 2 or greater.

Additional studies were conducted applying blood dilutions to small carpet samples. The scenes were imaged, following Hemascein application, with both the stereo imaging sensors and the ProScope device, Figure 6.4. In this scene, the microscopic image has been false colored using a Jet colormap, and placed in PiP mode. The use of microscopy was of greater benefit when attempting to identify small targets, or targets in thicker carpeting or cloth, which the wide field-of-view imaging had difficulty resolving.

Figure 6.3 Still frames from the simulated crime scenes. (A) Blood dilutions (1:30 & 1:50) spattered onto a wood floor and nearby door frame. The detected fluorescent emissions were segmented from the background using adaptive thresholding, then normalized and false colored red before being added back onto the original image. (B) Blood dilutions (1:30 on sink and floor, 1:100 on tub) scattered around a tiled bathroom. The detected fluorescent emissions were again segmented from the background and enhanced, but not normalized to a uniform intensity. Red Arrow: The large blood stain on the sink was difficult to visualize due to the reflected glare off of the ceramic surface. Using a separate adjustable white light source, rather than room lighting, reduced this effect. (C) Dilutions (1:50) applied to wooden furniture. Fluorescent emissions were processed as in (A). Note the ability of the system to detect small droplets of blood on the furniture legs. (D) Blood dilutions (1:30) on the floor of a dirty basement, processed as in (B).

Figure 6.4 Image taken from stereo camera, depicting the ProScope device being used to analyze a fluorescent target (1:50 blood/water dilution) on a carpet sample. The stereo imaging sensors provided a wide field-of-view, while the ProScope gave a close-up, useful for detecting small or faint targets. The ProScope image has been recolored using a Jet colormap and displayed in picture-in-picture mode.

6.3.3. 3D Scene Creation

Reconstruction of 3D scenes was implemented using stereo imagery and the

OpenCV libraries. The 3D reconstruction was conducted offline on individual scenes of interest, Figure 6.5. The resulting 3D object can be manipulated to view the scene from various angles, as detected by the imaging sensor. The fluorescence information remains visible and properly spatially oriented, having been directly mapped onto the 3D projection from the stereoscopic fluorescence images. Depth resolution of the system, using the same methods outlined in Chapter 2, was determined to be about 2 mm.


Figure 6.5 3D Reconstruction of a simulated crime scene from captured stereo images, viewed using MeshLab software. The chair, ottoman and floor were all on different optical planes. Object depth varied continuously from front to back. The fluorescent emissions from a treated blood stain (1:30 dilution) were visible and accurately projected onto the floor (red pseudo color). The 3D scene could be rotated and viewed from different angles.

6.4. Discussion

6.4.1. Fluorescence Detection Sensitivity

Detection sensitivity varied depending on the texture of the substrate surface. The lower detection efficiency on fabric or carpet substrates was likely due to absorption of the blood dilution deeper into the woven material or carpet fibers. The fluorescent agent and reagent, as well as the excitation light, may have had a more difficult time penetrating to the entirety of the dilution in these cases. Additionally, fluorescence emissions from deeper within the fabric may have been absorbed or back scattered.

Conversely, hard, flat and non-absorbent opaque surfaces like the ceramic tiles proved ideal for fluorescence imaging. Similar trends were found in previous studies [292, 294, 295]. The glass tile substrate had similar advantages of being flat and non-absorbent; however, the transparency of the material allowed the fluorescent emissions to scatter in all directions, reducing the amount of fluorescent light received by the imaging sensors.

Previous studies indicated that the brightest fluorescence emissions came from between the 1:1000 and 1:100,000 dilution, depending on substrate and study [292, 299]. In this study we have reported a peak fluorescence emission between the 1:30 and 1:50 dilution. Since the dilution series ratios, fluorescence agents and application methods, as well as the types of substrates used, appear to be equivalent between these studies and our own, we could only conclude that the cause for this discrepancy is the use of artificial blood in our study, while the prior studies utilized real blood. Previous work has indicated that artificial blood could cause variations in peak detection location and intensity, an effect partially confirmed in this study [295].

Detection efficiency in some samples may have been reduced due to background fluorescence as well. Mildly activated Hemascein, when in contact with the H2O2 reagent sprayed over the scene, appeared as a speckle pattern on the background of several substrates, Figure 6.6. The fluorescent targets also tended to attain a speckle pattern, for the same reasons. Therefore, the spray method, while effective for identifying chemical targets over a wide suspect area, also resulted in the target sample having the same textural appearance as the speckled background, particularly on hard surfaces, which could make target identification somewhat more difficult.

Figure 6.6 Speckle pattern of detected fluorescence emissions. (A) Hemascein and Hydrogen Peroxide were sprayed over three blood stains (1:30, 1:50, 1:100 dilutions, left to right). In this case, the target fluorescence from the blood dilutions was much stronger than the background; however, the speckled background fluorescence was still visible. (B) Similar speckled pattern of treated blood stain (1:10 dilution) on black cotton fabric. The background was brighter here; however, the speckled appearance was distinctly less, likely the result of liquid absorption by the fabric.

Fluorescence resolution was found to be equivalent to the optical spatial resolution. It could be postulated that the fluorescence resolution measurements were limited by the optical reflectance mode resolution; however, it is more likely that both resolution measurements were limited by lens capabilities and imaging sensor pixel density, rather than by the quantum efficiency governing fluorescence detection sensitivity. The ProScope microscope reported a finer resolution, but a lower fluorescence detection sensitivity, than the stereo cameras. The reason for this was likely the lower collection efficiency encountered by the microscope, due to a smaller imaging area and narrowed optical light collection.


6.4.2. Simulated Crime Scene

Various scenes were imaged using multiple blood dilutions applied to realistic surfaces (i.e. wooden floor, painted dry wall, carpet, etc.). While the system demonstrated an ability to image the fluorescent emissions from the treated dilutions with the room lights on, fluorescence contrast was improved with the lights off. The downside to a lights-off imaging scenario is the loss of spatial information. In practice, Hemascein is often imaged with ambient light, relying on the brightness of the fluorescent emissions to provide sufficient contrast against the background [294, 295, 303]. Future work could further enhance contrast by utilizing a variable white light source with a notch filter to remove most of the optical contamination in the Hemascein fluorescence emission spectrum (520-540 nm). Additionally, incorporating an independent pulsed excitation light source with synchronized fluorescent emission detection, as in Chapter 4, could further improve detection contrast.

6.4.3. 3D Scene Creation

Two limitations to the current 3D scene must be noted. The projections were all made from the point-of-view of the stereo cameras during scene recording. Therefore, if the user had imaged only the front of a chair, as in Figure 6.5, the back of the chair would not be included in this 3D scene. When the 3D scene was rotated to look at the back of the chair, the user would only see the hollow shell formed by the 3D projection of the chair front. For the purpose of this study, the area of interest was in front and not behind the chair, making the limitation inconsequential. Future work, however, may include processing algorithms which can combine multiple viewpoints (e.g. the front and back of the chair) into a single 3D scene. Some work has already been conducted in this area, through the combination of adjacent scenes to create a larger, more complete 3D representation of an area [317, 320]. A second limitation is that the 3D scene reconstruction was only conducted on recorded images in post-processing. Real-time 3D imaging may be conducted in future works with the incorporation of parallel processing through GPU computing.

6.5. Conclusions

We have introduced a new wearable forensic imaging system. The device utilized line-of-sight stereo imaging to provide the user with a realistic point-of-view of the crime scene incorporating analytical data. Recorded stereo images were later reconstructed into a 3-dimensional scene. The system was configured to detect Hemascein fluorescent emissions. A series of artificial blood/DI water dilutions were created and applied to a variety of surfaces, then treated with the Hemascein reagent regimen, to test the system's fluorescence detection sensitivity. To aid in fluorescence detection and forensic analysis, a modified version of the ProScope hand-held microscope was incorporated into the system as well. Computer vision algorithms were applied to segment and enhance detected fluorescence in simulated crime scenes. The imaging system succeeded in detecting a wide range of blood dilutions, consistent with the prior works, in a variety of settings, while proving to be compact, lightweight and easy to use.


CHAPTER VII

REAL-TIME DUAL-MODAL VEIN IMAGING SYSTEM

Chapter 7 is largely reprinted from the manuscript: "Mela CA, Lemmer DP, Bao FS, Papay F, Hicks T and Liu Y, (2018) Real-time Dual-Modal Vein Imaging System, Int J Cars, in press". In this chapter, an infrared imaging system was developed for vein imaging. Aims 2 & 3 were in part addressed through the integration of this additional imaging modality into the system, and its subsequent comparison with traditional methods of visual vein identification.

7.1. Introduction

Vein imaging devices have been developed to enhance the efficiency of intravenous (IV) placement, vein diagnostics and biometrics. Many of these devices detect veins close to the skin surface using NIR light imaging techniques [327-343]. Advantages of using NIR light are multifold. NIR is considered a low-energy, less reactive wavelength of light, the opposite of high-energy UV light, and as such is widely considered safe for clinical use [334, 344]. Additionally, this property allows NIR light to penetrate further through tissue before becoming uniformly diffused. Another advantage is the "NIR window", or the wavelength band between 700 and 1000 nm where water absorption is low [33, 41]. Hemoglobin, on the other hand, is highly absorbent of NIR light in this range [345]. Therefore, when imaging NIR light through water-laden tissue, a stark contrast between blood vessels and tissue can be observed.

The use of NIR light for vein identification has been previously demonstrated for clinical imaging as well as for biometric analysis [330, 331, 335, 336, 343, 346, 347]. Typically, NIR vein imaging takes one of two forms, where the NIR light is either reflected off the surface or transmitted through the target tissue. Key reflectance mode studies include the VeinViewer (Luminex, Memphis, TN) and AccuVein (AccuVein LLC, Cold Spring Harbor, NY) devices [327, 328, 348, 349]. One such study, utilizing the VeinViewer device, indicated a 3-fold improvement in vein detection versus visual inspection [328]. Meanwhile, the similar AccuVein has been reported to show the best contrast for imaging superficial vessels, but poorer results for deeper veins [350]. Additional reflectance mode publications include the biometric work of Michael et al on palm print and palm vein identification [336] using NIR wavelengths from 850-920 nm, the palm vein analysis study by Crisan et al [331] using the 740-760 nm range, and the 850 nm comparative analysis of hand veins conducted by Wang et al [346]. On the clinical side, Wang et al developed a NIR multispectral vein detection system [351], while Paquit et al used a multi-wavelength NIR approach with structured light 3D modelling to enhance vein localization [352]. A study by Chen et al combined stereoscopic NIR reflectance mode vein imaging with ultrasound to create a 3D rendering of the target vessels and surface tissue while guiding needle placement [353]. Ai et al also implemented stereo cameras for NIR reflectance mode vein imaging, using the depth information for accurate back projection of false-colored veins onto the tissue [339]. Ahmed et al developed an NIR video-based automated needle placement regime, also leveraging the stereo effect for depth estimation [354].

While the majority of vein detection studies focus on reflectance mode imaging, several transmission or transillumination mode studies have also been published. Zharov et al utilized two methods of reflectance mode illumination, and one method which combined reflectance with a fiber-coupled trans-illumination scheme [329]. Fuksis et al found transmission mode illumination to provide enhanced palm vein imaging contrast, which is particularly useful for biometric vein pattern matching [330]. Additionally, Lee et al successfully implemented transmission mode for finger vein biometric imaging [335]. In a clinical imaging setting, Kim et al corroborated the benefits of transmission mode imaging contrast, and advanced the field by implementing an angled transillumination approach for better depth penetration [342]. The Pen Torch [337], Veinlite [355] as well as the VascuLuminator [338] also utilize a transillumination approach.

Comparative studies report a wide variety of vein detection abilities for various commercially available and academic devices. The VeinViewer device was reported to be able to detect clinically relevant veins at depths of 7-10 mm [328, 339, 342]. The VascuLuminator (de Konigh Medical Systems, Arnhem, the Netherlands) reports vein visibility at a maximum depth of 5.5 mm for 3.6 mm diameter simulated veins in a tissue phantom, and a max depth of 2.6 mm for 1 mm diameter veins [338]. The excellent study by Zharov et al reports a 3 mm depth penetration using both reflectance and transmission modes [329]. Seker and Engin reported on a polarized reflectance mode system which could discern a 1 mm diameter tube under a tissue phantom depth of 3 mm [341]. Lastly, Kim et al reports vein visibility at a depth of 5 mm for the VeinLite EMS (TransLite LLC, Sugarland, TX, USA) device as well as a stunning 15 mm max visualization depth for a new prototype system developed in his recent preliminary study [342].

The resultant vein map from NIR imaging is often displayed in one of two ways: as an image on a monitor or as a false-colored projection onto the skin surface [327, 328, 339, 348-350, 356-358]. Projection devices have the advantage of being lightweight, mobile and potentially useful on any part of the body which is non-invasively accessible. A potential disadvantage is that the vein image is limited in size, and may obscure the surface topology when projected onto the skin. Also, slightly darkened lighting conditions and proper device orientation may be required to properly see the projection. Systems which only display the NIR vein image on a monitor entirely neglect surface anatomy and suffer from ineffectual vein localization.

Image processing algorithms are frequently implemented to further enhance vein visibility. Perhaps the most common techniques include histogram equalization and contrast enhancement, particularly the CLAHE regime [329, 332, 336, 340-342, 354, 359-361], various forms of thresholding [330-332, 335, 336, 340-342, 346, 354, 360, 361] and skeletonization [331, 341, 342, 346]. Additional steps include Region-of-Interest (ROI) identification [340, 342, 346, 361], noise removal via high frequency filtration and blurring [336, 340, 346, 354], sharpening [361], erosion or closing operations [331, 342, 354], highpass or morphological windowed filtering [335, 336, 339, 340], edge detection [331, 340, 361], line detection and template matching [330, 335, 339, 346] and pseudo-coloration [332].

In this chapter, we introduced a direct transmission mode vein imaging system that combined NIR vein images with visible light (VIS) structural images using beam splitter technology and computer vision to create a natural-looking enhanced vein image. Our aims and contributions from this study were as follows:

• Evaluate direct transmission mode NIR imaging combined with VIS reflectance mode imaging of the human hand by obtaining hand/vein images from multiple volunteers.

• Design and implement a test for determining proper beam splitter alignment while evaluating the resolution of both imaging sensors.

• Determine the tissue depths at which direct transmission mode NIR imaging can visualize veins, using animal tissue and simulated vessels.

• Conduct statistical analysis between the vein counts taken from the hands of our test subjects by our imaging system and by visual examination.

• Conduct statistical analysis between the vein counts taken from Dominant versus Non-Dominant hands as imaged by our system.


7.2. Materials and Methods

7.2.1. Dual-Mode Imaging

The beam splitter, Figure 7.1, was constructed using an 805 nm shortpass dichroic mirror housed within a post-mounted cage cube (DMSP805 & C6W Thorlabs, NJ, USA). A C-mount, 8 mm fixed focal length imaging lens was used as the objective (M111FM08 Tamron, JPN). Achromatic doublets housed within slotted lens tubes were used as focusing lenses (SM1L20C & ACN254-040-B Thorlabs, NJ, USA). Each doublet had an anti-reflection coating applied to its imaging surface; one doublet was coated for visible spectrum imaging and one for NIR. Two Sony ICX445 charge-coupled device (CCD) imaging sensors (Chameleon 3 FLiR, BC, CAN), one monochrome and the other color, were respectively used for NIR and VIS light detection. Final collimation of light onto the imaging sensors was achieved by placement of an adjustable iris style aperture in between each focusing lens and its corresponding imaging sensor. The shortpass dichroic mirror passed the VIS light portion of the image directly to the color CCD sensor while reflecting the narrow band NIR light image to the monochrome CCD sensor.

Images were displayed sequentially in real-time (30 fps) on a monitor as well as on an augmented reality (AR) display (MyBud Accupix, SZ, CHN). The display was worn by the administrator of the vein imaging tests.


Figure 7.1 NIR/VIS vein imaging system. (A) Schematic of the beam splitter device. (B) Beam splitter system in use for hand vein imaging. The subject's hand was placed over a NIR LED array to conduct transmission mode vein imaging. Room lighting provided illumination for reflectance mode VIS imaging.

7.2.2. Illumination

Illumination was provided by a circular array of forty-eight 850 nm, 5 mm diameter epoxy encapsulated LEDs (LED Supply, VT USA). In the center of the array, a photodiode switch was placed so that the LEDs would only be activated when an object (i.e. hand, forearm) was placed over the array, shielding the diode from the room light. A 220 grit ground glass diffusor (DG20-220 ThorLabs, NJ, USA) was placed over the LED array to provide a more even distribution of light. Broad spectrum room lighting was used in reflectance mode for VIS imaging.


7.2.3. Image Processing

Our software program was written in-house using the Python coding language. VIS and NIR imaging frames were read in simultaneously using multi-core encoding and processed in real-time using the Open Source Computer Vision (OpenCV) as well as Numerical Python (NumPy) libraries. Processing conducted on the NIR images included gamma correction, histogram equalization for image normalization and contrast enhancement, resolution enhancement, segmentation and false coloring.

NIR frames were pre-processed to normalize brightness levels using a gamma correction algorithm. Contrast enhancement of the adjusted vein image was conducted using the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm in OpenCV [362, 363]. The CLAHE method can result in a grainy output in low light settings, so a mild smoothing operation was conducted after this step using a 3 x 3 Gaussian window with sigma value of 1. Next, resolution enhancement was conducted using an unsharp mask, implemented through the subtraction of a Gaussian blurred image from the original smoothed and contrast enhanced image. Additional processing steps conducted for specific imaging tasks, including segmentation and false coloring, will be discussed in the following sections. Gamma correction was conducted for the VIS images as well.
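A compact sketch of this pre-processing chain is shown below, using the same OpenCV/NumPy calls named above. The specific gamma value, CLAHE clip limit, and unsharp mask weights are illustrative assumptions rather than the exact settings used by the system.

    import cv2
    import numpy as np

    nir = cv2.imread("nir_frame.png", cv2.IMREAD_GRAYSCALE)

    # Gamma correction via a lookup table to normalize brightness (gamma value assumed).
    gamma = 1.5
    lut = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)], dtype=np.uint8)
    nir = cv2.LUT(nir, lut)

    # Contrast Limited Adaptive Histogram Equalization (CLAHE).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    nir = clahe.apply(nir)

    # Mild smoothing: 3 x 3 Gaussian window, sigma = 1, to suppress CLAHE graininess.
    smoothed = cv2.GaussianBlur(nir, (3, 3), 1)

    # Unsharp mask: subtract a blurred copy from the smoothed image to sharpen vein edges.
    blurred = cv2.GaussianBlur(smoothed, (9, 9), 3)
    sharpened = cv2.addWeighted(smoothed, 1.5, blurred, -0.5, 0)

    cv2.imwrite("nir_processed.png", sharpened)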

7.2.4. Alignment and Resolution

Accurate initial alignment of the VIS and NIR imaging frames was achieved solely using the beam splitter hardware. Any differences between the doublet lenses' magnifications, pixel resolution between the CCD cameras or the relative alignment of like optical components (i.e. sensor placement, focal lengths) were minimized during component selection and calibration to avoid minor image misalignments.

Alignment accuracy was assessed through the imaging of a transparent Air Force 1951 Resolution Target, Figure 7.2A. Following the described pre-processing steps, the transmission mode NIR bar pattern image was thresholded and binarized then inverted to remove background while making the bars appear bright, Figure 7.2B. Salt and pepper noise was removed through a closing operation and the image was false-colored green to improve contrast. The processed NIR image was combined with the reflectance mode color VIS image using an even 50/50 weighted addition. A precise alignment exactly overlaid the green bars from the NIR image onto the black bars from the VIS image, Figure 7.2C. A shift between the locations of the two sets of bars indicated a hardware misalignment, Figure 7.2D.
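A simplified version of this alignment check could look as follows; the threshold level, kernel size, and 50/50 weighting follow the description above, but the exact values and file names are assumptions.

    import cv2
    import numpy as np

    vis = cv2.imread("vis_bars.png")                          # reflectance mode color image
    nir = cv2.imread("nir_bars.png", cv2.IMREAD_GRAYSCALE)    # transmission mode NIR image

    # Threshold, binarize, and invert so the bars appear bright on a dark background.
    _, binary = cv2.threshold(nir, 128, 255, cv2.THRESH_BINARY_INV)

    # Closing operation to remove salt and pepper noise.
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # False color the NIR bars green (BGR order), then combine 50/50 with the VIS image.
    green = np.zeros_like(vis)
    green[:, :, 1] = binary
    combined = cv2.addWeighted(vis, 0.5, green, 0.5, 0)

    # With correct hardware alignment, the green bars overlay the black VIS bars exactly.
    cv2.imwrite("alignment_check.png", combined)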

The spatial resolution of the device was also tested using the Air Force Resolution Target, for both the NIR and VIS imaging sensors. Resolution was assessed at working distances of 40, 30, and 20 cm from the distal end of the objective lens to the target surface.


Figure 7.2 Bar patterns captured via the imaging system for beam splitter alignment. (A) Reflection mode VIS image. (B) Transmission mode NIR image. (C) Properly aligned combination of the inverted and pseudo colored NIR (green bars) and VIS images. (D) Improperly aligned combined images. Note that the green NIR bar pattern is offset from the black VIS bar pattern.

7.2.5. Depth Penetration

Studies were conducted to estimate the optical depth penetration of the system, or the maximum tissue thickness through which an opaque or an NIR absorbing object could be distinguished. The use of silicone or plastic tubing embedded in a tissue phantom for vein visibility depth penetration testing has been previously demonstrated [338, 341, 342]. Simulated blood vessels were used for this study, made of transparent silicone tubing with outer diameters of 0.5, 1, and 2 mm. The tubes were filled with a solution of artificial blood (CrimeScene, AZ, USA) and India ink. India ink was used as an absorber, since the artificial blood was found to be transparent to our 850 nm LED light. The India ink was added to the artificial blood until the solution's measured absorbance was 0.5 mm⁻¹ at 850 nm using a desktop spectrophotometer (ThermoFisher, MA, USA). An absorbance of 0.5 mm⁻¹ approximated that of venous blood [338].
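For reference, one way a per-millimeter absorbance of this kind can be obtained is by dividing the decadic absorbance reading by the optical path length of the measurement cell; assuming a standard 10 mm cuvette (an assumption, not stated in the text), the relation is

    A_{\text{mm}} = \frac{A_{\text{cuvette}}}{L\,[\text{mm}]} = \frac{-\log_{10}(I/I_0)}{L\,[\text{mm}]}

so a value of 0.5 mm⁻¹ corresponds to roughly a ten-fold reduction in transmitted intensity for every 2 mm of path length through the ink-doped solution.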

The vessels were placed on top of a 3 mm thick layer of porcine tissue, used to simulate the hand pad. The hand pad layer was later increased to 6 mm to simulate a thicker hand. Layers of thinly sliced, homogenous porcine tissue were then stacked over the simulated vessels in 1.5 mm increments, up to 7.5 mm. The entire assembly was placed directly over our NIR LED array, with a thin layer of optically transparent plastic wrap to separate the tissue from the array, Figure 7.3. Imaging was conducted in the same manner as in the Vein Imaging section of this paper, except that a 70/30 semi-transparent weighted addition of the NIR and VIS images was used here.


Figure 7.3 Schematic of the experimental setup for the determination of optical depth penetration.

7.2.6. Imaging Studies

Vein counting and imaging studies were conducted on the hands of 25 participants with the approval of the University of Akron Institutional Review Board, IRB #20181011.

7.2.6.1. Vein Counting

Imaging studies were conducted after system alignment had been set and accuracy validated using the resolution target. The left and right hands of 25 test subjects were imaged. Subjects placed their hands on top of the illumination source, Figure 7.1B, in a variety of poses to obtain imagery of a plurality of veins. The number of veins apparent in either hand were counted both by visual examination and using the imaging system, and those numbers were compared. Additionally, vein counts between Dominant and Non-Dominant hands were compared.

Only veins seen through the back of the hand were counted, excluding fingers. A branching vein, defined as having a Y-shape, was counted as consisting of 2 rather than 1 or 3 individual veins. If there was uncertainty whether a target of interest was indeed a vein (i.e. the target may be a blemish, hair or bone/callous), it was not included in the count. Rules for counting were consistent between both methods of inspection.

7.2.6.2. Vein Imaging

Additional image processing was applied during vein imaging, after the described pre-processing regime. Processing steps included assignment of a false color to the gray scale NIR image. The false color was assigned to help blend the gray-scale vein map into the color anatomical image. The RGB false color values were determined using an automated feature tracking algorithm applied to the VIS images. The program continuously searched for the circular plastic case around the NIR LED array using the Hough Circle technique in OpenCV. When the circle was covered, the tracking was interrupted, prompting the program to conduct a non-binary, to-zero thresholding of the VIS image. Since the background was always a uniform black, it became trivial to isolate and segment pixels containing skin tone values only. The averaged RGB skin tone values were then applied to the normalized NIR vein image. Following the false-color step, a 70/30 weighted addition was conducted to combine the false-colored NIR image with the VIS image, with a 0.5 alpha transparency. The level of weighting was varied to find an optimal balance of visible anatomical structure and vein contrast for different skin tones or lighting conditions.
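A condensed sketch of this skin-tone blending step is given below, assuming the hand has already been imaged against the black background; the Hough-circle interruption logic is omitted, and the threshold level, file names, and weights are illustrative assumptions. A simple binary foreground mask is used here in place of the to-zero thresholding described above.

    import cv2
    import numpy as np

    vis = cv2.imread("vis_hand.png")                          # color reflectance image
    nir = cv2.imread("nir_hand.png", cv2.IMREAD_GRAYSCALE)    # pre-processed NIR vein image

    # Isolate skin pixels against the uniform black background and average their color.
    gray = cv2.cvtColor(vis, cv2.COLOR_BGR2GRAY)
    _, fg = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    skin_bgr = cv2.mean(vis, mask=fg)[:3]                     # averaged skin tone (BGR)

    # Apply the averaged skin tone to the normalized gray-scale vein map.
    norm = cv2.normalize(nir, None, 0, 255, cv2.NORM_MINMAX).astype(np.float32) / 255.0
    false_color = np.dstack([norm * c for c in skin_bgr]).astype(np.uint8)

    # 70/30 weighted addition of the false-colored NIR image and the VIS image.
    combined = cv2.addWeighted(false_color, 0.7, vis, 0.3, 0)
    cv2.imwrite("vein_overlay.png", combined)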

An alternative NIR to VIS registration algorithm was conducted in a similar manner as described in the Alignment and Resolution section of this paper. Following the prescribed pre-processing regime, the gray scale vein image was thresholded to remove background, binarized, and inverted to make the veins appear bright. Next, a closing operation was applied to remove salt and pepper noise and to help better define vein borders. The processed NIR vein image was then false-colored green for visibility and combined with the VIS image using an equal weighted addition, without transparency.

7.2.7. Statistical Analysis

A pilot study of 7 human subjects was conducted to determine the sample size required to assess whether the number of veins detected by the system in hands was significantly different than the number of veins detected by visual examination. A sample size power analysis for a paired t-test was conducted using this preliminary data, with an alpha value of 0.05 and a power of 0.9. A total of 25 test subjects was determined to be required.

Statistical comparison of the number of veins detected between the imaging system and by visual analysis was conducted via a paired t-test with alpha value of 0.05 and an alternate hypothesis (H1) that the means are different. Separate analyses were conducted for Dominant hand and Non-Dominant hand veins. Additionally, a similar paired t-test was conducted to determine whether the vein counts from Dominant hands were different from Non-Dominant hands.
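A minimal sketch of this paired comparison using SciPy is shown below; the count arrays are placeholders generated at random, not the recorded study data.

    import numpy as np
    from scipy import stats

    # Placeholder per-subject vein counts (25 subjects); not the actual study data.
    rng = np.random.default_rng(0)
    visual_counts = rng.integers(4, 12, size=25)
    system_counts = visual_counts + rng.integers(2, 10, size=25)

    # Paired t-test, two-sided alternative (H1: the means differ), alpha = 0.05.
    t_stat, p_value = stats.ttest_rel(system_counts, visual_counts)
    print(f"t = {t_stat:.2f}, p = {p_value:.2e}, significant: {p_value < 0.05}")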


7.3. Results

7.3.1. Alignment and Resolution

Detected resolution using the Air Force 1951 Resolution Target was found to be equivalent between VIS and NIR imaging sensors. Minimum resolvable dimensions were determined to be: 445 µm at 40 cm working distance, 281 µm at 30 cm, and 140 µm at 20 cm.

7.3.2. Depth Penetration

The optical depth penetration of the system was evaluated using porcine tissue and simulated blood vessels. Representative results were depicted in Figure 7.4. Vessel detectability was determined based on Signal-to-Background Ratio (SBR) between the vessel and surrounding tissue. All vessels were clearly distinguishable when placed under 1.5 & 3.0 mm of tissue, with an SBR of more than 2 for most cases. However, beneath more than 3 mm of tissue the vessel borders became less distinct and the 0.5 mm vessel was difficult to delineate. Results were summarized in Table 7.1, which depicts vessel visibility versus tissue depth as a function of SBR.


Figure 7.4 Depth penetration study. (A) Three transparent silicone tubes of 2, 1, and 0.5 mm diameter filled with simulated blood were placed on the surface of a layer of porcine tissue, which was placed atop the NIR illumination source. (B) A 1.5 mm layer of porcine tissue was placed over the vessels. All vessels remained clearly distinguishable. (C) An additional 1.5 mm layer (3 mm total) was added on top of the vessels. The tubes remained readily visible, though less distinct. (D) Under 4.5 mm all vessel edges became indistinct, appearing as broad diffused lines. (E) All vessels became increasingly less distinct under tissue depths exceeding 5 mm.


Table 7.1

Optical depth penetration of system. The Depth of the synthetic blood vessels beneath layers of tissue was set against the vessel Diameters. The 'X' indicates the depths at which a vessel was visible with SBR of at least 2, while 'x' marks visibility at SBR > 1.5, and 'o' marks partial visibility at SBR > 1.2.

Diameter (mm)    Depth (mm):  0.0    1.5    3.0    4.5    6.0    7.5
0.5                           X      X      x      o      -      -
1.0                           X      X      X      x      x      -
2.0                           X      X      X      X      X      o

Increasing the thickness of the tissue layer beneath the simulated vessels and over the light source (i.e. the simulated hand pad) from 3 mm to 6 mm had little effect on the system’s ability to image the vessels, and significantly less effect than increasing the thickness of tissue on top of the vessels. Contrast was slightly decreased, but vessel SBR was not significantly diminished with increased sub-vessel tissue thickness.

7.3.3. Imaging Studies

7.3.3.1. Vein Counting

Veins from both the left and right hands of 25 test subjects were counted. Counts were made first by unassisted visual inspection and then using the vein imaging system, Table 7.2. Using the system, a greater number of clearly distinguishable veins were observed in all subjects. The magnitude of this increase varied by subject as well as by hand (Dominant versus Non-Dominant). Dominant hand vein counts were improved by a factor of 2.0 on average, whereas Non-Dominant hand vein counts improved by an average factor of 1.6.

Table 7.2

Vein counts by visual inspection and using the imaging system for our 25 test subjects. Improvement Factor was defined as the ratio between system and visual counts. The p-values from paired t-tests between visual and assisted vein counts indicated that the means were statistically different.

Hand            Counts             Mean               STD                Improvement    P-Value
                Visual   System    Visual   System    Visual   System    Factor
Dominant        183      375       8        16.3      2.4      4.6       2.0            1.73E-10
Non-Dominant    211      341       9.2      14.8      3.0      3.5       1.6            2.84E-11

7.3.3.2. Vein Imaging

VIS and NIR images were recorded both independently and combined using our beam splitter based imaging system, yielding an image which preserved anatomical localization, coloration and topological features while adding in the NIR vein information, Figure 7.5. Two processing regimes were implemented for combining the images, Figure 7.5C and 7.5D.


Figure 7.5 Example of combined beam splitter images. (A) Reflectance mode VIS image. (B) Transmission mode NIR image. (C) NIR image combined with the VIS color image, where the gray scale NIR image has been blended to match the subject's skin tone, providing a more natural appearance. (D) Alternative processing for the combined NIR/VIS image. The NIR image has been inverted and false colored green before combining with VIS image to enhance contrast.

The focus of infrared vein imaging was on the hands; however, images of the subjects' wrists and fingers were also recorded, Figure 7.6. Many bold, intricate, and varied venous structures were identified in all subjects. The ability to distinguish differences in morphology was demonstrated.

Similar to the hands in thickness, the finger veins were also easily distinguished, Figure 7.6. Veins in the wrist were imaged as well; however, the contrast and detail were not as great as with the fingers and hands due to the increased thickness of the bone and soft tissue. Imaging of the veins in the upper forearm was not achieved using transmitted NIR light for this same reason.

Figure 7.6 Various examples of vein imaging of the hand, fingers and wrist. Imaging was successfully conducted on subjects of various skin tones and body types.

7.4. Discussion

7.4.1. Depth Penetration

A 3-6 mm thick layer of porcine tissue was placed between the artificial veins and the LED light array to simulate the hand pad or other tissue through which the light must penetrate before reaching the target veins. The thickness is an estimate, not intended to simulate actual anatomical dimensions, but rather to more accurately estimate the diffusion of light through tissue, based on observations with hand vein imaging. Varying the tissue thickness used for this experiment allowed us to simulate appropriate amounts of light diffusion for a selected anatomy.

Infrared imaging was practically limited to the hands and fingers for this study. Combining NIR with color imagery allowed for a more comprehensive diagnostic model of the hand, displaying accurate vein location and morphology along with hand topography and coloration. Improvements could potentially be made on depth penetration by increasing the output power and directional efficiency of the LED array. A more sensitive NIR imaging sensor may also improve these results. However, it should be noted that dynamic range and resolution are just as important as increased quantum efficiency. Increasing the directionality and output power of the light source will also increase the amount of heat transferred to the patient's tissue, which may lead to discomfort or harm.

7.4.2. Imaging Studies

7.4.2.1. Vein Counting

The average increase in vein counts using the system compared to visual inspection was found to be greater for the Dominant hands of subjects than for Non-Dominant hands. This is likely due to more developed musculature and reduced subcutaneous fat in the dominant hand. Subjects whose hands visibly appeared more muscular yielded greater vein counts than those with less muscled hands. Greater musculature may also benefit vein visualization by pushing veins outwards towards the skin.


The improvement factor using the imaging system for vein identification versus visual inspection varied significantly between subjects, as did the number of veins counted. While an imaging system is not required for finding a vein during IV insertion in patients, this device has been demonstrated to at least improve vein visibility for all subjects tested.

7.4.2.2. Vein Imaging

Two methods were used for combining color VIS and gray scale NIR images. The first involved blending the NIR image into the VIS image, and is likely the more accurate of the two. The registration was based purely on optics, using an aligned beam splitter, without any significant morphological effects to add error to the accuracy of this technique. The second method segmented the veins from the background in the NIR image and applied a false color before adding them to the VIS image. Using this process, the contrast of the veins increased against the color background. However, this also resulted in an image that looked less realistic, and may be prone to higher error due to the morphological processes involved during segmentation, particularly during thresholding for noise removal and thinning for improved vein definition. Additional thresholding/thinning may help reduce larger segmented veins to a more appropriate size, however smaller veins may be lost in the process. Additionally, this method was more likely to encounter false positives due to dense hair or thicker bone/callous regions which may decrease transmitted NIR light in a fashion similar to a vein.

The vein image can be partially obstructed by double reflections from the dichroic mirror, if not properly calibrated. Potential double reflections, or ghosting aberrations, tend to be more apparent with broad spectrum imaging; reflecting the monochrome NIR light instead can minimize this effect. A slight ghosting aberration was still observed in the reflected NIR image, however the ghost image was found to be of low intensity and offset from the desired image. Adjusting the aperture diameter, properly calibrating the dynamic range, setting an appropriate software threshold level and beam splitter angle were typically sufficient to remove visible ghosting effects.

No additional light fixture was required for VIS imaging, as the reflected room lighting proved to be of sufficient intensity for accurate color detection with good resolution. Additionally, no aberrations or double reflections were detected in the VIS images.

7.5. Conclusion

In this chapter, we have presented a NIR vein imaging system which uses beam splitter technology to combine functional and anatomical image data in real-time. Two methods of combining the NIR and VIS images have been demonstrated here, for the purpose of enhancing diagnostic capacity and vein localization. Vein counts were taken from the hands of 25 test subjects using the system, and the results compared to naked-eye visual analysis. The system provided a 2-fold enhancement in vein visibility. Additionally, we have conducted experiments to determine the system's resolution, alignment accuracy, and depth penetration. The accuracy, functionality, and ease-of-use of the device make it useful for IV placement, morphological analysis for disease state detection, and biometric analysis.


CHAPTER VIII

CONCLUSION

In this dissertation, a multimodal intraoperative medical imaging and display system has been developed and tested. The system incorporated fluorescence imaging with portable ultrasound, tomographical image representation, microscopy and infrared vein imaging. Optical imaging was conducted via head mounted, stereoscopic color and fluorescence imaging sensors. Display of the imaging data was provided to the user via augmented reality headsets mounted in-line with the imaging sensors. In this way, line-of-sight, stereoscopic, real-time imaging was enabled.

Computer vision and augmented reality algorithms were developed to enhance system operations. Fluorescence detection was improved using a combination of hardware and software techniques, including narrow band filtration with adaptive thresholds and pulsed excitation imaging. Accuracy of fluorescent differentiation from background noise on a wearable imaging platform was also enhanced through the incorporation of a dense flow optical point tracking regime. Optical fiducial marker detection was incorporated, providing a convenient and inexpensive tool to aid in ultrasound and tomographic image registration. Additionally, stereoscopic depth-of-field measurements were used along with a library look-up table of transformation matrices to conduct a distance based fluorescence-to-color video rate co-registration scheme.

Evaluation of the Aims outlined in the introduction to this study was conducted by assessing system performance over a variety of metrics. Fluorescence imaging sensitivity was evaluated to determine minimum detectable concentrations of fluorescent dye. Results have indicated that while all minimum detection limits have been exceeded, fluorescence detection limits can vary widely based upon excitation intensity, dye concentration, methods of evaluation and working distance. Therefore, setting a single detection limit may in fact be inadequate. System resolution was investigated in both planar spatial coordinates and depth-of-field measurements, and minimal resolvable dimensions were determined to be less than 1 mm. Fluorescence and resolution testing was not limited to the imaging sensors, however. The fluorescence identification and resolution performance parameters of three augmented reality displays were also tested, since the displayed results are as important as what was detected. In addition, the ultrasound and 3D object registration regimes were tested for fiducial recognition and target registration errors, both of which were universally evaluated at less than 2 mm. A fluorescence-to-color registration scheme provided similar accuracy. Lastly, the system was demonstrated to compare positively with traditional imaging and diagnostic methods through surgical tissue phantom simulations, forensic analyses and vein imaging.


The compact size, light weight, and portability of the system make it suitable not only for surgeries in an operating room, but for use in any number of clinical situations. In particular, ongoing collaborations with NASA and the USAF have allowed us to demonstrate the feasibility of system functionality for in-transit and expeditionary medicine. Medical guidance systems within the limited confines of a spaceship or aircraft must be compact, lightweight and broadly applicable, all criteria which the system meets. While the system has already been tested in a number of medically relevant diagnostic studies and simulations, future work will bring the goggle into the operating room for human clinical trials on fluorescence guided tumor resections. In summary, we have developed a platform integrating intraoperative multimodal imaging, computer vision, and augmented reality for guiding surgeries and assisting in medical diagnostics.


BIBLIOGRAPHY

[1] I. Ng, "Integrated Intra-operative Room Design," Acta Neurochir Suppl, vol. 109, pp. 199-205, 2011. [2] R. M. Terra et al., "Applications for a hybrid operating room in thoracic surgery: from multidisciplinary procedures to image-guided video-assisted thoracoscopic surgery," J Bras Pneumol, vol. 42, no. 5, pp. 387-390, 2016. [3] M. Matsumae et al., "Multimodality Imaging Suite: Neo-Futuristic Diagnostic Imaging Operating Suite Marks a Significant Milestone for Innovation in Medical Technology," Acta Neurochir Suppl, vol. 109, pp. 215-218, 2011. [4] J. M. Rubin and D. J. Quint, "Intraoperative US versus Intraoperative MR Imaging for Guidance during Intracranial Neurosurgery," Radiology, vol. 215, no. 3, pp. 917-918, 2000. [5] M. R. Chicoine et al., "Implementation and Preliminary Clinical Experience with the Use of Ceiling Mounted Mobile High Field Intraoperative Magnetic Resonance Imaging between Two Operating Rooms," Acta Neurochir Suppl, vol. 109, pp. 97-102, 2011. [6] J. M. Mislow, A. J. Golby, and P. M. Black, "Origins of Intraoperative MRI," Magn Reson Imaging Clin N Am, vol. 18, no. 1, pp. 1-10, 2010. [7] P. Black, F. A. Jolesz, and K. Medani, "From vision to reality: the origins of intraoperative MR imaging," Acta Neurochir Suppl, vol. 109, pp. 3-7, 2011. [8] X. Chen, B.-n. Xu, X. Meng, J. Zhang, X. Yu, and D. Zhou, "Dual-Room 1.5-T Intraoperative Magnetic Resonance Imaging Suite with a Movable Magnet: Implementation and Preliminary Experience," Neurosurg Rev, vol. 35, no. 1, pp. 95-109, 2012. [9] T. Wagner, J. Buscombe, G. Gnanasegaran, and S. Navalkissoor, "SPECT/CT in sentinel node imaging," Nucl Med Commun, vol. 34, no. 3, pp. 191-202, 2013. [10] D. M. Trifiletti et al., "Intraoperative breast radiation therapy with image guidance: Findings from CT images obtained in a prospective trial of intraoperative high-dose-rate brachytherapy with CT on rails," Brachytherapy, vol. 14, no. 6, pp. 919-924, 2015. [11] M. Koch and V. Ntziachristos, "Advancing Surgical Vision with Fluorescence Imaging," Annu Rev Med, vol. 67, pp. 153-164, 2016. [12] J. He et al., "Combination of Fluorescence-Guided Surgery With Photodynamic Therapy for the Treatment of Cancer," Mol Imaging, vol. 16, 2017, Art. no. 1536012117722911. [13] A. V. DSouza, H. Lin, E. R. Henderson, K. S. Samkoe, and B. W. Pogue, "Review of fluorescence guided surgery systems: identification of key performance capabilities beyond indocyanine green imaging," J Biomed Opt, vol. 21, no. 8, 2016, Art. no. 080901.


[14] C. A. Mela, C. L. Patterson, and Y. Liu, "A Miniature Wearable Optical Imaging System for Guiding Surgeries," in SPIE Photonics West, San Fransisco, CA, USA, 2015, vol. 9311: SPIE. [15] B. W. Pogue et al., "Vision 20/20: Molecular-guided surgical oncology based upon tumor metabolism or immunologic phenotype: Technological pathways for point of care imaging and intervention," Med Phys, vol. 43, no. 6, pp. 3143-3156, 2016. [16] N. Č. Sikošek, A. Dovnik, D. Arko, and I. Takač, "The role of intraoperative ultrasound in breast-conserving surgery of nonpalpable breast cancer," Wien Klin Wochenschr, vol. 126, no. 3-4, pp. 90-94, 2014. [17] M. Ramos, J. Díez, T. Ramos, R. Ruano, M. Sancho, and J. González-Orús, "Intraoperative ultrasound in conservative surgery for non-palpable breast cancer after neoadjuvant chemotherapy," Int J Surg, vol. 12, no. 6, pp. 572-577, 2014. [18] B. Ertas et al., "Intraoperative ultrasonography is useful in surgical management of neck metastases in differentiated thyroid cancers," Endocrine, vol. 48, no. 1, pp. 248-253, 2015. [19] S. Zhang, S. Jiang, Z. Yang, and R. Liu, "2D Ultrasound and 3D MR Image Registration of the Prostate for Brachytherapy Surgical Navigation," Medicine, vol. 94, no. 40, 2015, Art. no. e1643. [20] S. Schafer et al., "Intraoperative Imaging for Patient Safety and QA: Detection of Intracranial Hemorrhage Using C-Arm Cone-Beam CT," in Medical Imaging 2013: Image- Guided Procedures, Robotic Interventions, and Modeling, 2013, vol. 8671: SPIE. [21] A. Nayyar, K. K. Gallagher, and K. P. McGuire, "Definition and Management of Positive Margins for Invasive Breast Cancer," Surg Clin North Am, vol. 98, no. 4, pp. 761-771, 2018. [22] P. Shah et al., "Positive Surgical Margins Increase Risk of Recurrence after Partial Nephrectomy for High Risk Renal Tumors," Urol Oncol, vol. 196, no. 2, pp. 327-334, 2016. [23] G. Oh, C. R. Farley, A. G. Lopez-Aguiar, M. C. Russell, K. A. Delman, and M. C. Lowe, "Recurrence Patterns after Primary Excision of Invasive Melanoma with Melanoma in situ at the Margin," Am Surg, vol. 84, no. 8, pp. 1319-1325, 2018. [24] E. Sevick-Muraca, "Translation of near-infrared fluorescence imaging technologies: Emerging clinical applications," Annu Rev Med, vol. 63, pp. 217-231, 2012. [25] E. Belykh et al., "Intraoperative Fluorescence Imaging for Personalized Brain Tumor Resection: Current State and Future Directions," Front Surg, vol. 3, 2016, Art. no. 55. [26] K. Tipirneni et al., "Fluorescence Imaging for Cancer Screening and Surveillance," Mol Imaging Biol, vol. 19, pp. 645-655, 2017. [27] T. Nagaya, Y. A. Nakamura, P. L. Choyke, and H. Kobayashi, "Fluorescence-Guided Surgery," Front Oncol, vol. 7, 2017, Art. no. 314. [28] M. J. Landau, D. J. Gould, and K. M. Patel, "Advances in fluorescent-image guided surgery," Ann Transl Med, vol. 2, no. 20, 2016, Art. no. 392. [29] R. Hofmeyr, G. Tolken, and R. De Decker, "Expedition medicine: A southern African perspective," S Afr Med J, vol. 107, no. 8, pp. 659-663, 2017. [30] R. Y. Tsien, "The Green Fluorescent Protein," Annu Rev Biochem, vol. 67, pp. 509-544, 1998.


[31] O. Shimomura, F. H. Johnson, and Y. Saiga, "Extraction, purification and properties of aequorin, a bioluminescent protein from the luminous hydromedusan, Aequorea," J Cell Comp Physiol, vol. 59, pp. 223-139, 1962. [32] T. Sugie et al., "Evaluation of the Clinical Utility of the ICG Fluorescence Method Compared with the Radioisotope Method for Sentinel Lymph Node Biopsy in Breast Cancer," Ann Surg Oncol, vol. 23, no. 1, pp. 44-50, 2016. [33] A. Haque, S. H. Faizi, J. A. Rather, and M. S. Khan, "Next generation NIR fluorophores for tumor imaging and fluorescence-guided surgery: A review," Bioorg Med Chem, vol. 25, no. 7, pp. 2017-2034, 2017. [34] K. S. Samkoe et al., "Application of Fluorescence-Guided Surgery to Subsurface Cancers Requiring Wide Local Excision: Literature Review and Novel Developments Toward Indirect Visualization," Cancer Control, vol. 25, no. 1, pp. 1-11, 2018. [35] K. He et al., "Comparison between the indocyanine green fluorescence and blue dye methods for sentinel lymph node biopsy using novel fluorescence image-guided resection equipment in different types of hospitals," Translational Research, vol. 178, pp. 74-80, 2016. [36] L. E. Kelderhouse et al., "Development of tumor-targeted near infrared probes for fluorescence guided surgery," Bioconjug Chem, vol. 24, no. 6, pp. 1075-1080, 2013. [37] S. Yano et al., "Fluorescence-guided surgery of a highly-metastatic variant of human triple-negative breast cancer targeted with a cancerspecific GFP adenovirus prevents recurrence," Oncotarget, vol. 7, no. 46, pp. 75635-75647, 2016. [38] A. K. L. Fujita et al., "Fluorescence evaluations for porphyrin formation during topical PDT using ALA and methyl-ALA mixtures in pig skin models," Photodiagnosis Photodyn Ther, vol. 15, pp. 236-244, 2016. [39] E. Papakonstantinou, F. Löhr, and U. Raap, "Photodynamic Therapy and Skin Cancer," in Dermatologic Surgery and Procedures, P. Vereecken, Ed.: IntechOpen, 2018, pp. 127- 150. [40] D. Piccolo and D. Kostaki, "Photodynamic Therapy Activated by Intense Pulsed Light in the Treatment of Nonmelanoma Skin Cancer," Biomedicines, vol. 6, no. 1, 2018, Art. no. 18. [41] A. L. Antaris et al., "A small-molecule dye for NIR-II imaging," Nat Mater, vol. 15, no. 2, pp. 235-242, 2016. [42] J. V. Frangioni, "In vivo near-infrared fluorescence imaging," Curr Opin Chem Biol, vol. 7, no. 5, pp. 626-634, 2003. [43] S. Gioux, H. S. Choi, and J. V. Frangioni, "Image-guided surgery using invisible near- infrared light: fundamentals of clinical translation," Mol Imaging, vol. 9, no. 5, pp. 237- 255, 2010. [44] S. Luo, E. Zhang, Y. Su, T. Cheng, and C. Shi, "A review of NIR dyes in cancer targeting and imaging," Biomaterials, vol. 32, no. 29, pp. 7127-7138, 2011. [45] M. Garcia et al., "Bio-inspired imager improves sensitivity in near-infrared fluorescence image-guided surgery," Optica, vol. 5, no. 4, pp. 413-422, 2018. [46] Z. Starosolski, R. Bhavane, K. B. Ghaghada, S. A. Vasudevan, A. Kaay, and A. Annapragada, "Indocyanine green fluorescence in second near-infrared (NIR-II) window," PLoS One, vol. 12, no. 11, 2017, Art. no. e0187563.


[47] L. T. Babu and P. Paira, "Current Application of Quantum Dots (QD) in Cancer Therapy: A Review," Mini Rev Med Chem, vol. 17, no. 14, pp. 1406-1415, 2017. [48] H. Wang, J. Bi, B.-W. Zhu, and M. Tan, "Multicolorful Carbon Dots for Tumor Theranostics," Curr Med Chem, vol. 25, no. 25, pp. 2894-2909, 2018. [49] E. S. Molina, J. Wölfer, C. Ewelt, A. Ehrhardt, B. Brokinkel, and W. Stummer, "Dual- labeling with 5–aminolevulinic acid and fluorescein for fluorescence-guided resection of high-grade gliomas: technical note," J Neurosurg, vol. 128, no. 2, pp. 399-405, 2018. [50] M. Swierczewska, S. Lee, and X. Chen, "The Design and Application of Fluorophore–gold Nanoparticle Activatable Probes," Phys Chem Chem Phys, vol. 13, no. 21, pp. 9929-9941, 2013. [51] J. Brunetti et al., "Near-infrared quantum dots labelled with a tumor selective tetrabranched peptide for in vivo imaging," J Nanobiotechnology, vol. 16, no. 21, 2018. [52] B. B. Manshian, J. Jiménez, U. Himmelreich, and S. J. Soenen, "Personalized medicine and follow-up of therapeutic delivery through exploitation of quantum dot toxicity," Biomaterials, vol. 127, pp. 1-12, 2017. [53] G. A. Hartung and G. A. Mansoori, "In vivo General Trends, Filtration and Toxicity of Nanoparticles," J Nanomater Mol Nanotechnol, vol. 2, no. 3, 2013. [54] P. B. Garcia-Allende, J. Glatz, M. Koch, and V. Ntziachristos, "Enriching the Interventional Vision of Cancer with Fluorescence and Optoacoustic Imaging," J Nucl Med, vol. 54, no. 5, pp. 664-667, 2013. [55] S. Casey et al., "Use of Protoporphyrin Fluorescence to Determine Clinical Target Volume for Nonmelanotic Skin Cancers Treated with Primary Radiotherapy," Cureus, vol. 8, no. 9, 2016, Art. no. e767. [56] A. N. Yaroslavsky, X. Feng, and V. A. Neel, "Optical Mapping of Nonmelanoma Skin Cancers—A Pilot Clinical Study," Surg Med, vol. 49, no. 9, pp. 803-809, 2017. [57] N. L. Martirosyan et al., "Potential application of a handheld confocal endomicroscope imaging system using a variety of fluorophores in experimental gliomas and normal brain," Neurosurg Focus, vol. 36, no. 2, 2014, Art. no. E16. [58] N. Sanai et al., "Intraoperative confocal microscopy in the visualization of 5- aminolevulinic acid fluorescence in low-grade gliomas," J Neurosurg, vol. 115, no. 4, pp. 740-748, 2011. [59] H. Stepp and W. Stummer, "5-ALA in the Management of Malignant Glioma," Lasers Surg Med, vol. 50, no. 5, pp. 399-419, 2018. [60] Y. Inoue et al., "Anatomical Liver Resections Guided by 3-Dimensional Parenchymal Staining Using Fusion Indocyanine Green Fluorescence Imaging," Ann Surg, vol. 262, no. 1, pp. 105-111, 2015. [61] T. Ishizawa, A. Saiura, and N. Kokudo, "Clinical application of indocyanine green- fluorescence imaging during hepatectomy," HepatoBiliary Surg Nutr, vol. 5, no. 4, pp. 322-328, 2015. [62] N. Kitagawa et al., "Navigation using indocyanine green fluorescence imaging for hepatoblastoma pulmonary metastases surgery," Pediatr Surg Int, vol. 31, no. 4, pp. 407-411, 2015.


[63] A. Peloso et al., "Combined use of intraoperative ultrasound and indocyanine green fluorescence imaging to detect liver metastases from colorectal cancer," HPB, vol. 15, no. 12, pp. 928-934, 2013. [64] J. Keating et al., "Identification of Breast Cancer Margins Using Intraoperative Near- Infrared Imaging," J Surg Oncol, vol. 113, no. 5, pp. 508-514, 2016. [65] Y. Liu et al., "First in-human intraoperative imaging of HCC using the fluorescence goggle system and transarterial delivery of near-infrared fluorescent imaging agent: a pilot study," Transl Res, vol. 162, no. 5, pp. 324-331, 2013. [66] J. M. Warram et al., "Fluorescence-guided resection of experimental malignant glioma using cetuximab-IRDye 800CW," Brit J Neurosurg, vol. 29, no. 6, pp. 850-858, 2015. [67] P. van Driel et al., "Characterization and Evaluation of the Artemis Camera for Fluorescence-Guided Cancer Surgery," Mol Imaging Biol, vol. 17, no. 3, pp. 413-423, 2015. [68] H. Wada et al., "Pancreas-Targeted NIR Fluorophores for Dual-Channel Image-Guided Abdominal Surgery," Theranostics, vol. 5, no. 1, pp. 1-11, 2015. [69] J. T. Unkart, S. L. Chen, I. L. Wapnir, J. E. Gonzalez, A. Harootunian, and A. M. Wallace, "Intraoperative Tumor Detection Using a Ratiometric Activatable Fluorescent Peptide: A First-in-Human Phase 1 Study," Ann Surg Oncol, vol. 24, no. 11, pp. 3167–3173, 2017. [70] J. M. Warram et al., "A Ratiometric Threshold for Determining Presence of Cancer During Fluorescence-Guided Surgery," J Surg Oncol, vol. 112, no. 1, pp. 2-8, 2015. [71] N. Hokimoto et al., "A Novel Color Fluorescence Navigation System for Intraoperative Transcutaneous Lymphatic Mapping and Resection of Sentinel Lymph Nodes in Breast Cancer: Comparison with the Combination of Gamma Probe Scanning and Visible Dye Methods," Oncology, vol. 94, no. 2, pp. 99-106, 2018. [72] S. L. Troyan et al., "The FLARE™ intraoperative near-infrared fluorescence imaging system: A first-in-human clinical trial in breast cancer sentinel lymph node mapping," Ann Surg Oncol, vol. 16, no. 10, pp. 2943-2952, 2009. [73] Q. R. Tummers et al., "Near-infrared fluorescence sentinel lymph node detection in gastric cancer: A pilot study," World J Gastroenterol, vol. 22, no. 13, pp. 3644-3651, 2016. [74] J. R. van der Vorst et al., "Near-infrared fluorescence sentinel lymph node mapping of the oral cavity in head and neck cancer patients," Oral Oncol, vol. 49, no. 1, pp. 15-19, 2013. [75] F. P. Verbeek et al., "Near-infrared fluorescence sentinel lymph node mapping in breast cancer: a multicenter experience," Breast Cancer Res Treat, vol. 143, no. 2, pp. 333-342, 2014. [76] F. P. Verbeek et al., "Sentinel Lymph Node Biopsy in Vulvar Cancer using Combined Radioactive and Fluorescence Guidance," Int J Gynecol Cancer, vol. 25, no. 6, pp. 1086- 1093, 2015. [77] H. Wada et al., "Sentinel Lymph Node Mapping of Liver," Ann Surg Oncol, vol. 22, no. S3, pp. S1147-S1155, 2015. [78] Y. Ashitate, A. Stockdale, H. S. Choi, R. G. Laurence, and J. V. Frangioni, "Real-Time Simultaneous Near-Infrared Fluorescence Imaging of Bile Duct and Arterial Anatomy," J Surg Res, vol. 176, no. 1, pp. 7-13, 2012.


[79] A. Matsui et al., "Real-time Intra-operative Near-infrared Fluorescence Identification of the Extrahepatic Bile Ducts Using Clinically Available Contrast Agents," Surgery, vol. 148, no. 1, pp. 87-95, 2012. [80] T. H. Degett, H. S. Andersen, and I. Gögenur, "Indocyanine green fluorescence for intraoperative assessment of gastrointestinal anastomotic perfusion: a systematic review of clinical trials," Langenbecks Arch Surg, vol. 401, no. 6, pp. 767-775, 2016. [81] T. Sato et al., "Development of a new high-resolution intraoperative imaging system (dual-image videoangiography, DIVA) to simultaneously visualize light and near-infrared fluorescence images of indocyanine green angiography," Acta Neurochir, vol. 157, pp. 1295–1301, 2015. [82] A. J. M. Cornelissen et al., "Near-infrared fluorescence image-guidance in plastic surgery: A systematic review," Eur J Plast Surg, vol. 41, no. 3, pp. 269-278, 2018. [83] J. T. Nguyen et al., "Face Transplant Perfusion Assessment Using Near-infrared Fluorescence Imaging," J Surg Res, vol. 177, no. 2, pp. E83-E88, 2012. [84] J. T. Nguyen et al., "Bone Flap Perfusion Assessment using Near-Infrared Fluorescence Imaging," J Surg Res, vol. 178, no. 2, pp. e43-350, 2013. [85] C. Echalier, I. Pluvy, and J. Pauchot, "Use of indocyanine green angiography in reconstructive surgery: Brief review," Ann Chir Plast Esthet, vol. 61, no. 6, pp. 858-867, 2016. [86] A. Bajwa, R. Aman, and A. K. Reddy, "A comprehensive review of diagnostic imaging technologies to evaluate the and the optic disk," Int Ophthalmol, vol. 35, no. 5, pp. 733-755, 2015. [87] D. Dumas et al., "Infrared camera based on a curved retina," Opt Lett, vol. 37, no. 4, pp. 653-655, 2012. [88] R. F. Spaide, J. M. Klancnik Jr, and M. J. Cooney, "Retinal vascular layers imaged by and optical coherence tomography angiography," JAMA Ophthalmol, vol. 133, no. 1, pp. 45-50, 2015. [89] M. P. Fernandez et al., "Fluorescein angiography findings in diffuse retinoblastoma: two case reports with clinicopathologic correlation," J AAPOS, vol. 21, no. 4, pp. 337-339, 2017. [90] C. Kakucs, I.-A. Florian, G. Ungureanu, and I.-S. Florian, "Fluorescein Angiography in Intracranial Surgery: A Helpful Method to Evaluate the Security of Clipping and Observe Blood Flow," World Neurosurg, vol. 105, no. 406-411, 2017. [91] H. Kobayashi, M. R. Longmire, and P. L. Choyke, "Polychromatic in vivo Imaging of Multiple Targets Using Visible and Near Infrared Light," Adv Drug Deliv Rev, vol. 65, no. 8, pp. 1112-1119, 2013. [92] A. Behrooz et al., "Multispectral open-air intraoperative fluorescence imaging," Optics Letters, vol. 42, no. 15, pp. 2964-2967, 2017. [93] C. A. Mela, F. A. Papay, and Y. Liu, "Intraoperative Multimodal Imaging Using Goggle System," in In Vivo Fluorescence Imaging, M. Bai, Ed. (Methods in Molecular Biology, no. 1444) New York, NY, USA: Humana Press, 2016, pp. 85-95.


[94] C. A. Mela, C. Patterson, W. K. Thompson, F. Papay, and Y. Liu, "Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance," PLoS ONE, vol. 10, no. 11, 2015, Art. no. e0141956. [95] Z. Chen, N. Zhu, S. Pacheco, X. Wang, and R. Liang, "Single camera imaging system for color and near-infrared fluorescence image guided surgery," Biomed Opt Express, vol. 5, no. 8, pp. 2791-2797, 2014. [96] Z. Zhang et al., "A Wearable Goggle Navigation System for Dual-Mode Optical and Ultrasound Localization of Suspicious Lesions: Validation Studies Using Tissue-Simulating Phantoms and an Ex Vivo Human Breast Tissue Model," PLoS ONE, vol. 11, no. 7, p. e0157854, 2016. [97] N. Zhu et al., "Compact wearable dual-mode imaging system for real-time fluorescence image-guided surgery," JBO, vol. 20, no. 9, pp. 1-6, 2015, Art. no. 096010. [98] Y. Liu et al., "Hands-free, wireless goggles for near-infrared fluorescence and real-time image-guided surgery," Surgery, vol. 149, no. 5, pp. 689-698, 2011. [99] Y. Liu et al., "Near-infrared fluorescence goggle system with complementary metal– oxide–semiconductor imaging sensor and see-through display," J Biomed Opt, vol. 18, no. 10, 2013, Art. no. 101303. [100] S. B. Mondal et al., "Binocular Goggle Augmented Imaging and Navigation System provides real-time fluorescence image guidance for tumor resection and sentinel lymph node mapping," Scientific Reports, vol. 5, 2015, Art. no. 12117. [101] A. Christensen et al., "uPAR-targeted optical near-infrared (NIR) fluorescence imaging and PET for image-guided surgery in head and neck cancer: proof-of-concept in orthotopic xenograft model," Oncotarget, vol. 8, no. 9, pp. 15407-15419, 2017. [102] E. Mery et al., "Fluorescence-guided surgery for cancer patients: a proof of concept study on human xenografts in mice and spontaneous tumors in pets," Oncotarget, vol. 8, no. 65, pp. 109559-109574, 2017. [103] K. Sexton et al., "Pulsed-light imaging for fluorescence guided surgery under normal room lighting," Opt Lett, vol. 38, no. 17, pp. 3249-3252, 2013. [104] K. J. Sexton, Y. Zhao, S. C. Davis, S. Jiang, and B. W. Pogue, "Optimization of fluorescent imaging in the operating room through pulsed acquisition and gating to ambient background cycling," Biomed Opt Express, vol. 8, no. 5, pp. 2635-2648, 2017. [105] N. S. van den Berg et al., "(Near-Infrared) Fluorescence-Guided Surgery Under Ambient Light Conditions: A Next Step to Embedment of the Technology in Clinical Routine," Ann Surg Oncol, vol. 23, no. 8, pp. 2586-2595, 2016. [106] Novadaq. (2018, August 16). SPY ELITE FLUORESCENCE IMAGING SYSTEM. Available: http://novadaq.com/products/spy-elite/ [107] Visionsense. (8/16/2018). Iridium. Available: http://www.visionsense.com/iridium/ [108] Q. M. Imaging. (2018, August 16). QUEST SPECTRUM PLATFORM. Available: http://www.quest-mi.com/products/spectrum-platform/ [109] S. D. Mieog et al., "Toward Optimization of Imaging System and Lymphatic Tracer for Near-Infrared Fluorescent Sentinel Lymph Node Mapping in Breast Cancer," Ann Surg Oncol, vol. 18, pp. 2483-2491, 2011. [110] Fluoptics. (2017, August 16 ). FLUOBEAM® The most gifted camera of its generation. Available: http://fluoptics.com/en/fluobeam/


[111] C. Hirche et al., "An experimental study to evaluate the Fluobeam 800 imaging system for fluorescence-guided lymphatic imaging and sentinel node biopsy," Surg Innov, vol. 20, no. 5, pp. 516-523, 2013. [112] A. M. Mohs et al., "An Integrated Widefield Imaging and Spectroscopy System for Contrast-Enhanced, Image-Guided Resection of Tumors," IEEE Trans Biomed Eng, vol. 62, no. 5, pp. 1416-1424, 2015. [113] J. Kang et al., "Real-time sentinel lymph node biopsy guidance using combined ultrasound, photoacoustic, fluorescence imaging: in vivo proof-of-principle and validation with nodal obstruction," Sci Rep, vol. 7, 2017, Art. no. 45008. [114] Novadaq. (2018, August 16). SPY-PHI SPY PORTABLE HANDHELD IMAGING SYSTEM. Available: http://novadaq.com/products/spy-phi/ [115] Hamamatsu. (2018, March 15). PDE Near infrared fluorescence imager. Available: https://www.hamamatsu.com/jp/en/product/type/C10935-400/index.html [116] K. He et al., "A novel wireless wearable fluorescence image-guided surgery system," in 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Orlando, FL, USA, 2016, pp. 5208-5211. [117] P. Shao et al., "Designing a wearable navigation system for image-guided cancer resection surgery," Ann Biomed Eng, vol. 42, no. 11, pp. 2228-2237, 2014. [118] S. Gao, S. B. Mondal, N. Zhu, R. Liang, S. Achilefu, and V. Gruev, "Image overlay solution based on threshold detection for a compact near infrared fluorescence goggle system," JBO, vol. 20, no. 1, pp. 1-9, 2015, Art. no. 016018 [119] S. B. Mondal et al., "Optical See-Through Cancer Vision Goggles Enable Direct Patient Visualization and Real-Time Fluorescence-Guided Oncologic Surgery," Ann Surg Oncol, vol. 24, pp. 1897-1903, 2017. [120] H. Zhang et al., "Dual-Modality Imaging of Prostate Cancer with a Fluorescent and Radiogallium-Labeled Gastrin-Releasing Peptide Receptor Antagonist," J Nucl Med, vol. 58, no. 1, pp. 29-35, 2017. [121] E. Cinotti et al., "Handheld reflectance confocal microscopy for the diagnosis of conjunctival tumors," Am J Ophthalmol, vol. 159, no. 2, pp. 324-333, 2015. [122] J. Malvehy and G. Pellacani, "Dermoscopy, Confocal Microscopy and other Non-invasive Tools for the Diagnosis of Non-Melanoma Skin Cancers and Other Skin Conditions," Acta Derm Venereol, vol. Suppl 218, pp. 22-30, 2017. [123] J. Park, P. Mroz, and M. R. Hamblin, "Dye-enhanced multimodal confocal microscopy for noninvasive detection of skin cancers in mouse models," J Biomed Opt, vol. 15, no. 2, 2010, Art. no. 026023. [124] N. L. Martirosyan et al., "Handheld confocal laser endomicroscopic imaging utilizing tumor-specific fluorescent labeling to identify experimental glioma cells in vivo," Surg Neurol Int, vol. 12, no. 7(Suppl 40), pp. S995-S1003, 2016. [125] D. Shin, M. C. Pierce, A. M. Gillenwater, M. D. Williams, and R. R. Richards-Kortum, "A Fiber-Optic Fluorescence Microscope Using a Consumer-Grade Digital Camera for In Vivo Cellular Imaging," PLoS One, vol. 5, no. 6, 2010, Art. no. e11218. [126] T. P. Kingham, S. Jayaraman, L. W. Clements, M. A. Scherer, J. D. Stefansic, and W. R. Jarnagin, "Evolution of image-guided liver surgery: Transition from open to laparoscopic procedures," J Gastrointest Surg, vol. 17, no. 7, pp. 1274–1282, 2013.


[127] L. Ma et al., "Image-guided Laparoscopic Pelvic Lymph Node Dissection Using Stereo Visual Tracking Free-hand Laparoscopic Ultrasound," in 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Seogwipo, South Korea, 2017, pp. 3240-3243: IEEE. [128] D. C. Gray et al., "Dual-mode laparoscopic fluorescence image-guided surgery using a single camera," Biomed Opt Express, vol. 3, no. 8, pp. 1880-1890, 2012. [129] Y. Oh et al., "Thoracoscopic Color and Fluorescence Imaging System for Sentinel Lymph Node Mapping in Porcine Using Indocyanine Green-Neomannosyl Human Serum Albumin: Intraoperative Image-Guided Sentinel Nodes Navigation," Ann Surg Oncol, vol. 21, no. 4, pp. 1182-1188, 2014. [130] V. Venugopal et al., "Real-time endoscopic guidance using near-infrared fluorescent light for thoracic surgery," 2013, vol. 8572: SPIE. [131] N. Cui, P. Kharel, and V. Gruev, "Augmented reality with Microsoft HoloLens holograms for near infrared fluorescence based image guided surgery," in SPIE BIOS, San Francisco, CA, USA, 2017, vol. 10049: SPIE. [132] X. Chen et al., "Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display," J Biomed Inform, vol. 55, pp. 124- 131, 2015. [133] E. Watanabe, M. Satoh, T. Konno, M. Hirai, and T. Yamaguchi, "The Trans-Visible Navigator: A See-Through Neuronavigation System Using Augmented Reality," World Neurosurg, vol. 87, pp. 399-405, 2016. [134] R. E. Patterson, Human Factors of Stereoscopic 3D Displays, 1 ed. London, UK: Springer- Verlag London, 2015. [135] S. Reeve and J. Flock, "Basic Principles of Stereoscopic 3D," Sky2010, Available: https://www.sky.com/shop/__PDF/3D/Basic_Principles_of_Stereoscopic_3D_v1.pdf, Accessed on: September 10 2015. [136] J. Q. Nguyen et al., "Development of a modular fluorescence overlay tissue imaging system for wide-field intraoperative surgical guidance," J Med Imaging, vol. 5, no. 2, 2018, Art. no. 021220. [137] Q. Gan et al., "Benchtop and Animal Validation of a Projective Imaging System for Potential Use in Intraoperative Surgical Guidance," PLoS ONE, vol. 11, no. 7, 2016, Art. no. e0157794. [138] P. Sarder, K. Gullicksrud, S. Mondal, G. P. Sudlow, S. Achilefu, and W. J. Akers, "Dynamic optical projection of acquired luminescence for aiding oncologic surgery," J Biomed Opt, vol. 18, no. 12, 2013, Art. no. 120501. [139] A. Moiyadi and P. Shetty, "Navigable intraoperative ultrasound and fluorescence-guided resections are complementary in resection control of malignant gliomas: one size does not fit all," J Neurol Surg A Cent Eur Neurosurg, vol. 75, no. 6, pp. 434-441, 2014. [140] R. Sastry et al., "Applications of Ultrasound in the Resection of Brain Tumors," J Neuroimaging, vol. 27, no. 1, pp. 5-15, 2017. [141] K. Uchiyama et al., "Combined Intraoperative use of Contrast-enhanced Ultrasonography Imaging Using a Sonazoid and Fluorescence Navigation System with Indocyanine Green During Anatomical Hepatectomy," Langenbecks Arch Surg, vol. 396, no. 7, pp. 1101-1107, 2011.


[142] H. Luo et al., "ImmunoPET and Near-Infrared Fluorescence Imaging of Pancreatic Cancer with a Dual-Labeled Bispecific Antibody Fragment," Mol Pharmaceutics, vol. 14, pp. 1646-1655, 2017. [143] F.-F. An, M. Chan, H. Kommidi, and R. Ting, "Dual PET and Near-Infrared Fluorescence Imaging Probes as Tools for Imaging in Oncology," AJR Am J Roentgenol., vol. 207, no. 2, pp. 266-273, 2016. [144] N. S. van den Berg et al., "Multimodal surgical guidance during sentinel node biopsy for melanoma: combined gamma tracing and fluorescence imaging of the sentinel node through use of the hybrid tracer indocyanine green-(99m)Tc-nanocolloid," Radiology, vol. 275, no. 2, pp. 521-529, 2015. [145] T. T. Quang, H.-Y. Kim, F. S. Bao, F. A. Papay, W. B. Edwards, and Y. Liu, "Fluorescence Imaging Topography Scanning System for intraoperative multimodal imaging," PLoS One, vol. 12, no. 4, 2017, Art. no. e0174928. [146] A. Erten et al., "Enhancing Magnetic Resonance Imaging Tumor Detection with Fluorescence Intensity and Lifetime Imaging," J Biomed Opt, vol. 15, no. 6, 2010, Art. no. 066012. [147] F. Gessler et al., "Combination of Intraoperative Magnetic Resonance Imaging and Intraoperative Fluorescence to Enhance the Resection of Contrast Enhancing Gliomas," Neurosurgery, vol. 77, no. 1, pp. 16-22, 2015. [148] S. B. Hauser, R. A. Kockro, B. Actor, J. Sarnthein, and R.-L. Bernays, "Combining 5- Aminolevulinic Acid Fluorescence and Intraoperative Magnetic Resonance Imaging in Glioblastoma Surgery: A Histology-Based Evaluation," Neurosurgery, vol. 78, no. 4, pp. 475-483, 2016. [149] C. Floridi et al., "C-arm cone-beam computed tomography in interventional oncology: technical aspects and clinical applications," Radiol Med, vol. 119, no. 7, pp. 521-532, 2014. [150] Z. Rumboldt, W. Huda, and J. All, "Review of portable CT with assessment of a dedicated head CT scanner," AJNR Am J Neuroradiol, vol. 30, no. 9, pp. 1630-1636, 2009. [151] D. Sun et al., "Radioimmunoguided surgery (RIGS), PET/CT image‐guided surgery, and fluorescence image‐guided surgery: Past, present, and future," J Surg Oncol, vol. 96, no. 4, pp. 297–308, 2007. [152] Y. Yuasa et al., "Sentinel Lymph Node Biopsy Using Intraoperative Indocyanine Green Fluorescence Imaging Navigated with Preoperative CT Lymphography for Superficial Esophageal Cancer," Ann Surg Oncol, vol. 19, no. 2, pp. 486-493, 2012. [153] L. Crane et al., "Intraoperative near-infrared fluorescence imaging for sentinel lymph node detection in vulvar cancer: First clinical results," Gynecol Oncol, vol. 120, no. 2, pp. 291-295, 2011. [154] P. L. Kubben, K. J. ter Meulen, O. E. Schijns, M. P. ter Laak-Poort, J. J. van Overbeeke, and H. van Santbrink, "Intraoperative MRI-guided resection of glioblastoma multiforme: a systematic review," Lancet, vol. 12, no. 11, pp. 1062-1070, 2011. [155] M. J. Pallone, S. P. Poplack, H. B. R. Avutu, K. D. Paulsen, and R. J. Barth Jr, "Supine Breast MRI and 3D Optical Scanning: A Novel Approach to Improve Tumor Localization for Breast Conserving Surgery," Ann Surg Oncol, vol. 21, no. 7, pp. 2203-2208, 2014.


[156] D. Lam, L. Mitsumori, P. Neligan, B. Warren, W. Shuman, and T. Dubinsky, "Pre- operative CT angiography and three-dimensional image post processing for deep inferior epigastric perforator flap breast reconstructive surgery," Br J Radiol, vol. 85, no. 1020, pp. e1293–e1297, 2012. [157] W. H. Nam, D.-G. Kang, D. Lee, J. Y. Lee, and J. B. Ra, "Automatic registration between 3D intra-operative ultrasound and pre-operative CT images of the liver based on robust edge matching," Phys Med Biol, vol. 57, no. 1, pp. 69-91, 2012. [158] T. P. Kingham, M. A. Scherer, B. W. Neese, L. W. Clements, J. D. Stefansic, and W. R. Jarnagin, "Image-guided liver surgery: intraoperative projection of computed tomography images utilizing tracked ultrasound," HPB, vol. 14, no. 9, pp. 594-603, 2012. [159] O. R. Hughes, N. Stone, M. Kraft, C. Arens, and M. A. Birchall, "Optical and molecular techniques to identify tumor margins within the larynx," Head Neck, vol. 32, no. 11, pp. 1544-1553, 2010. [160] Q. T. Nguyen et al., "Surgery with molecular fluorescence imaging using activatable cell- penetrating peptides decreases residual cancer and improves survival," PNAS, vol. 107, no. 9, pp. 4317-4322, 2012. [161] M. D. Keller et al., "Autofluorescence and diffuse reflectance spectroscopy and spectral imaging for breast surgical margin analysis," Lasers Surg Med, vol. 42, no. 1, pp. 15-23, 2010. [162] L. G. Wilke et al., "Rapid noninvasive optical imaging of tissue composition in breast tumor margins," Am J Surg, vol. 198, no. 4, pp. 566-574, 2009. [163] Y. Hiroshima et al., "Hand-held high-resolution fluorescence imaging system for fluorescence-guided surgery of patient and cell-line pancreatic tumors growing orthotopically in nude mice," J Surg Res, vol. 187, no. 2, pp. 510-517, 2014. [164] X. Wang, S. Bhaumik, V. P. Staudinger, and S. Yazdanfar, "Compact instrument for fluorescence imageguided surgery," J Biomed Opt, vol. 15, no. 2, 2010, Art. no. 020509. [165] J. S. D. Mieog et al., "Novel intraoperative near-infrared fluorescence camera system for optical image-guided cancer surgery," Mol Imaging, vol. 9, no. 4, pp. 223-231, 2010. [166] B. E. Schaafsma et al., "The clinical use of indocyanine green as a near-infrared fluorescent contrast agent for image-guided oncologic surgery," J Surg Onc, vol. 104, no. 3, pp. 323-332, 2011. [167] A. L. Vahrmeijer, M. Hutteman, J. R. van der Vorst, C. J. van de Velde, and J. V. Frangioni, "Image-guided cancer surgery using near-infrared fluorescence," Nat Rev Clin Oncol, vol. 10, no. 9, pp. 507-518, 2013. [168] Y. Liu et al., "Hands-free, Wireless Goggles for Near-infrared Fluorescence and Real-time Image-guided Surgery," Surgery, vol. 149, no. 5, pp. 689-698, 2011. [169] Y. Liu et al., "Near-infrared fluorescence goggle system with complementary metal- oxide-semiconductor imaging sensor and see-through display," J Biomed Opt, vol. 18, no. 10, 2013, Art. no. 101303. [170] OpenCV. (2018, June 3). OpenCV Library. Available: https://opencv.org/ [171] E. Buhr, S. Guenther-Kohfahl, and U. Neitzel, "Simple method for modulation transfer function determination of digital imaging detectors from edge images," in Medical Imaging 2003: Physics of Medical Imaging, Sand Diego, CA, USA, 2003, vol. 5030, pp. 877-884: SPIE.


[172] E. H. Stelzer, "Contrast, resolution, pixelation, dynamic range and signal-to-noise ratio: fundamental limits to resolution in fluorescence light microscopy," J Microscopy, vol. 189, no. 1, pp. 15-24, 1998. [173] R. T. Held and T. T. Hui, "A guide to stereoscopic 3D displays in medicine," Acad Radiol, vol. 18, no. 8, pp. 1035-1048, 2011. [174] M. van Beurden, W. Ijsselsteijn, and J. Juola, "Effectiveness of stereoscopic displays in medicine: a review," 3D Res, vol. 3, no. 1, 2012, Art. no. 54. [175] H. Liao, "3D medical imaging and augmented reality for image-guided surgery," in Handbook of Augmented Reality, B. Furht, Ed. 1 ed. New York, NY, USA: Springer, 2011, pp. 589-602. [176] S. Wang, J. Chen, Z. Dong, and R. Ledley, "SMIS–A real-time stereoscopic medical imaging system," in Symposium on Computer-Based Medical Systems, Bethesda, MD, USA, 2004, pp. 18-24: IEEE. [177] S.-A. Zhou and A. Brahme, "Development of high-resolution molecular phase-contrast stereoscopic x-ray imaging for accurate cancer diagnostics," Radiat Prot Dosimetry, vol. 139, no. 1-3, pp. 334-338, 2010. [178] X. H. Wang et al., "Compare display schemes for lung nodule CT screening," J Digit Imaging, vol. 24, no. 3, pp. 478-484, 2010. [179] D. J. Getty and P. J. Green, "Clinical applications for stereoscopic 3-D displays," J SID, vol. 15, no. 6, pp. 377-384, 2007. [180] R. Gill, The Physics and Technology of Diagnostic Ultrasound: A Practitioner's Guide. Sydney, NSW, AUS: High Frequency Publishing, 2012. [181] M. V. Marshall et al., "Near-infrared fluorescence imaging in humans with indocyanine green: A review and update," Open Surg Oncol J, vol. 2, no. 2, pp. 12-25, 2012. [182] R. Souza, T. Santos, D. Oliveira, A. Alvarenga, and R. Costa-Felix, "Standard operating procedure to prepare agar phantoms," J Phys: Conf Ser, vol. 733, 2016, Art. no. 012044. [183] G. Wagnieres et al., "An Optical Phantom with Tissue-like Properties in the Visible for Use in PDT and Fluorescence Spectroscopy," Phys Med Biol, vol. 42, pp. 1415–1426, 1997. [184] A. M. De Grand et al., "Tissue-Like Phantoms for Near-Infrared Fluorescence Imaging System Assessment and the Training of Surgeons," J Biomed Opt, vol. 11, no. 1, 2006, Art. no. 014007. [185] R. Pleijhuis, A. Timmermans, J. De Jong, E. De Boer, V. Ntziachristos, and G. Van Dam, "Tissue-simulating Phantoms for Assessing Potential Near-infrared Fluorescence Imaging Applications in Breast Cancer Surgery," J Vis Exp, no. 91, 2014, Art. no. e51776. [186] M. Marois, J. Bravo, S. C. Davis, and S. C. Kanick, "Characterization and standardization of tissue-simulating protoporphyrin IX optical phantoms," J Biomed Opt, vol. 21, no. 3, 2016, Art. no. 035003. [187] S. Iva, A. Tanabe, T. Maeda, H. Funamizu, and Y. Aizu, "Development of Non- Deterioration-Type Skin Tissue Phantom Using Silicone Material," Opt Rev, vol. 21, no. 3, pp. 353-358, 2014. [188] D. Wirth, K. Kolste, S. Kanick, D. W. Roberts, F. Leblond, and K. D. Paulsen, "Fluorescence depth estimation from wide-field optical imaging data for guiding brain tumor resection:


a multi-inclusion phantom study," Biomed Opt Express, vol. 8, no. 8, pp. 3656-3670, 2017. [189] K. D. Patel, "Assessment of Optical Transmission and Image Contrast at Infrared Wavelengths Using Tissue Simulating Phantoms and Biological Tissues," Master of Science, Biomedical Engineering, Rutgers, New Brunswick, NJ, USA, 2017. [190] M. Anastasopoulou et al., "Comprehensive phantom for interventional fluorescence molecular imaging," J Biomed Opt, vol. 21, no. 9, p. 10, 2016, Art. no. 091309. [191] F. Ayers, A. Grant, D. Kuo, D. J. Cuccia, and A. J. Durkin, "Fabrication and characterization of silicone-based tissue phantoms with tunable optical properties in the visible and near infrared domain," 2008, vol. 6870: SPIE. [192] M. Lualdi, A. Colombo, B. Farina, S. Tomatis, and R. Marchesini, "A Phantom With Tissue-Like Optical Properties in the Visible and Near Infrared for Use in Photomedicine," Lasers Surg Med, vol. 28, no. 3, pp. 237-243, 2001. [193] F. W. Grillo et al., "Patient-specific neurosurgical phantom: assessment of visual quality, accuracy, and scaling effects," 3D Print Med, vol. 4, no. 3, 2018. [194] S. Suri et al., "In vivo fluorescence imaging of biomaterial-associated inflammation and infection in a minimally invasive manner," J Biomed Mater Res A, vol. 103, no. 1, pp. 76-83, 2014. [195] O. T. Okusanya et al., "Small Portable Interchangeable Imager of Fluorescence for Fluorescence Guided Surgery and Research," Technol Cancer Res Treat, vol. 14, no. 2, pp. 213-220, 2015. [196] T. Moffitt, Y.-C. Chen, and S. Prahl, "Preparation and characterization of polyurethane optical phantoms," J Biomed Opt, vol. 11, no. 4, 2006, Art. no. 041103. [197] K. K. Kolste et al., "Macroscopic optical imaging technique for wide-field estimation of fluorescence depth in optically turbid media for application in brain tumor surgical guidance," J Biomed Opt, vol. 20, no. 2, 2015, Art. no. 026002. [198] NIH. (5/22/2018). ImageJ: Image Processing and Analysis in Java. Available: https://imagej.nih.gov/ij/index.html [199] Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, 2000. [200] OpenCV. (2017, 10/22). Camera Calibration. Available: https://docs.opencv.org/3.3.1/dc/dbb/tutorial_py_calibration.html [201] A. Armea. (2017, 11/1). Calculating a depth map from a stereo camera with OpenCV. Available: https://albertarmea.com/post/opencv-stereo-camera/ [202] N. Zhu et al., "Compact wearable dual-mode imaging system for real-time fluorescence image-guided surgery," J Biomed Opt, vol. 20, no. 9, p. 6, 2015, Art. no. 096010. [203] B. W. Pogue and M. Patterson, "Review of tissue simulating phantoms for optical spectroscopy, imaging and dosimetry," J Biomed Opt, vol. 11, no. 4, 2006, Art. no. 41102. [204] M. Roy, A. Kim, F. N. Dadani, and B. C. Wilson, "Homogenized tissue phantoms for quantitative evaluation of subsurface fluorescence contrast," J Biomed Opt, vol. 16, no. 1, 2011, Art. no. 016013.


[205] F. W. Grillo et al., "Patient-specific neurosurgical phantom: assessment of visual quality, accuracy, and scaling effects," 3D Print Med, journal article vol. 4, no. 3, March 13 2018. [206] E. Eisenberg. (2018, May 18). Replicating human vision for accurate testing of AR/VR displays [PowerPoint Slides]. Available: https://www.radiantvisionsystems.com/learn/webinars/replicating-human-vision- testing-arvr-displays [207] O. Kulyk, S. H. Ibbotson, H. Moseley, R. M. Valentine, and I. D. Samuel, "Development of a handheld fluorescence imaging device to investigate the characteristics of protoporphyrin IX fluorescence in healthy and diseased skin," Photodiagnosis Photodyn Ther, vol. 12, no. 4, pp. 630-639, 2015. [208] S. Gao, S. B. Mondal, N. Zhu, R. Liang, S. Achilefu, and V. Grueva, "Image overlay solution based on threshold detection for a compact near infrared fluorescence goggle system," J Biomed Opt, vol. 20, no. 1, 2015, Art. no. 016018. [209] T. Minamikawa et al., "Simplified and optimized multispectral imaging for 5-ALAbased fluorescence diagnosis of malignant lesions," Sci Rep, vol. 6, 2016, Art. no. 25530. [210] D. Holt et al., "Intraoperative near-infrared fluorescence imaging and spectroscopy identifies residual tumor cells in wounds," J Biomed Opt, vol. 20, no. 7, 2015, Art. no. 076002. [211] P. Kałużyński, Z. Opilski, I. Niedzielska, N. Sitek-Ignac, and D. Kogut, "Luminescence spectroscopy measurements for skin cancer research," Photonics Let Pol, vol. 10, no. 1, pp. 5-7, 2018. [212] H. Lui, J. Zhao, D. McLean, and H. Zeng, "Real-time Raman spectroscopy for in vivo skin cancer diagnosis," Cancer Res, vol. 72, no. 10, pp. 2491-2500, 2012. [213] N. Kourkoumelis, I. Balatsoukas, V. Moulia, A. Elka, G. Gaitanis, and I. D. Bassukas, "Advances in the in vivo Raman spectroscopy of malignant skin tumors using portable instrumentation," Int J Mol Sci, vol. 16, no. 7, pp. 14554-14570, 2015. [214] M. Boone, M. Suppa, M. Miyamoto, A. Marneffe, G. Jemec, and V. Del Marmol, "In vivo assessment of optical properties of basal cell carcinoma and differentiation of BCC subtypes by highdefinition optical coherence tomography," Biomed Opt Express, vol. 7, no. 6, pp. 2269-2284, 2016. [215] A. A. Hussain, L. Themstrup, and G. B. E. Jemec, "Optical coherence tomography in the diagnosis of basal cell carcinoma," Arch Dermatol Res, vol. 307, no. 1, pp. 1-10, 2015. [216] C. Boudreau, T.-L. Wee, Y.-R. Duh, M. P. Couto, K. H. Ardakani, and C. M. Brown, "Excitation Light Dose Engineering to Reduce Photo-bleaching and Photo-toxicity," Sci Rep, vol. 6, 2016, Art. no. 30892. [217] B. Zhu, J. C. Rasmussen, and E. M. Sevick-Muraca, "Non-invasive fluorescence imaging under ambient light conditions using a modulated ICCD and laser diode," Biomed Opt Express, vol. 5, no. 2, pp. 562-572, 2014. [218] Z. Apalla, A. Lallas, E. Sotiriou, E. Lazaridou, and D. Ioannides, "Epidemiological trends in skin cancer," Dermatol Pract Concept, vol. 7, no. 2, pp. 1-6, 2017. [219] H. W. Rogers, M. A. Weinstock, S. R. Feldman, and B. M. Coldiron, "Incidence estimate of nonmelanoma skin cancer (keratinocyte carcinomas) in the US population," JAMA Dermatol, vol. 151, no. 10, pp. 1081-1086, 2015.


[220] G. P. Guy Jr, C. C. Thomas, T. Thompson, M. Watson, G. M. Massetti, and L. C. Richardson, "Vital signs: Melanoma incidence and mortality trends and projections— United States, 1982-2030," MMWR Morb Mortal Wkly Rep, vol. 64, no. 21, pp. 591-596, 2015. [221] D. I. Kuijpers, N. W. Smeets, G. A. Krekels, and M. R. Thissen, "Photodynamic therapy as adjuvant treatment of extensive basal cell carcinoma treated with Mohs micrographic surgery," Dermatol Surg, vol. 30, no. 5, pp. 794-798, 2004. [222] P. P. Laissue, R. A. Alghamdi, P. Tomancak, E. G. Reynaud, and H. Shroff, "Assessing phototoxicity in live fluorescence imaging," Nat Methods, vol. 14, no. 7, pp. 657-661, 2017. [223] B. D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," in Proceedings of Imaging Understanding Workshop, Vancouver, BC, CAN, 1981, vol. 2, pp. 674-679: Morgan Kaufmann Publishers Inc. [224] OpenCV. (2018, January 8). Optical Flow. Available: https://docs.opencv.org/3.4/d7/d8b/tutorial_py_lucas_kanade.html [225] G. Farnebäck, "Two-Frame Motion Estimation Based on Polynomial Expansion," in Scandinavian Conference on Image Analysis, Halmstad, SWE, 2003, vol. 2749: Springer Berlin Heidelberg. [226] S.-Y. Park, S.-I. Choi, J. Moon, J. Kim, and Y. W. Park, "Real-time 3D registration of stereo- vision based range images using GPU," presented at the Workshop on Applications of Computer Vision, Snowbird, UT, YSA, December 7-8, 2009. [227] S. Drouin et al., "IBIS: an OR ready open-source platform for image-guided neurosurgery," Int J CARS, vol. 12, pp. 363-378, 2017. [228] S. Se and N. Pears, "Passive 3D Imaging," in 3D Imaging, Analysis and Applications, N. Pears, Y. Liu, and P. Bunting, Eds.: Springer-Verlag London, 2012. [229] J. Revaud, P. Weinzaepfel, Z. Harchaoui, and C. Schmid. (2015, May 5). EpicFlow: Edge- Preserving Interpolation of Correspondences for Optical Flow. Available: https://hal.inria.fr/hal-01097477v2 [230] S. Zweig and L. Wolf. (2016). InterpoNet, A brain inspired neural network for optical flow dense interpolation. Available: https://arxiv.org/abs/1611.09803 [231] N. D. Inc. (2018, January 10). Optotrak Certus. Available: https://www.ndigital.com/msci/products/optotrak-certus/ [232] F. Sauer, A. Khamene, B. Bascle, L. Schimmang, F. Wenzel, and S. Vogt, "Augmented Reality Visualization of Ultrasound Images: System Description, Calibration, and Features," in IEEE and ACM International Symposium on Augmented Reality, New York, NY, USA, 2001: IEEE. [233] R. S. Decker, A. Shademan, J. D. Opfermann, S. Leonard, P. C. Kim, and A. Krieger, "Biocompatible Near-Infrared Three-Dimensional Tracking System," IEEE Trans Biomed Eng, vol. 64, no. 3, pp. 549-556, 2017. [234] S.-W. Chung, C.-C. Shih, and C.-C. Huang, "Freehand three-dimensional ultrasound imaging of carotid artery using motion tracking technology," Ultrasonics, vol. 74, pp. 11- 20, 2017.


[235] L. Zhang, M. Ye, P.-L. Chan, and G.-Z. Yang, "Real-time surgical tool tracking and pose estimation using a hybrid cylindrical marker," Int J CARS, vol. 12, no. 6, pp. 921-930, 2017. [236] L. Yang et al., "Self-registration of Ultrasound Imaging Device to Navigation System Using Surgical Instrument Kinematics in Minimally Invasive Procedure," in Computer Aided Surgery, M. G. Fujie, Ed.: Springer Tokyo, 2016. [237] I. J. Gerard and D. L. Collins, "An analysis of tracking error in image-guided neurosurgery," Int J CARS, vol. 10, no. 10, pp. 1579-1588, 2015. [238] F. Šuligoj, M. Švaco, B. Jerbić, B. Šekoranja, and J. Vidaković, "Automated Marker Localization in the Planning Phase of Robotic Neurosurgery," IEEE Access, vol. 5, pp. 12265 - 12274, 2017. [239] S. Garrido-Jurado, R. Muñoz-Salinas, F. Madrid-Cuevas, and M. Marín-Jiménez, "Automatic generation and detection of highly reliable fiducial markers under occlusion," Pattern Recogn, vol. 47, no. 6, pp. 2280-2292, 2014. [240] P. Edgcumbe, C. Nguan, and R. Rohling, "Calibration and Stereo Tracking of a Laparoscopic Ultrasound Transducer for Augmented Reality in Surgery," in Augmented Reality Environments for Medical Imaging and Computer-Assisted Interventions, H. Liao, C. Linte, K. Masamune, T. Peters, and G. Zheng, Eds. (Lecture Notes in Computer Science, no. 8090): Springer-Verlag Berlin Heidelberg, 2013, pp. 258-267. [241] L. Zhang, M. Ye, S. Giannarou, P. Pratt, and G.-Z. Yang, "Motion-Compensated Autonomous Scanning for Tumour Localisation Using Intraoperative Ultrasound," in Medical Image Computing and Computer-Assisted Intervention, M. Descoteaux, L. Maier-Hein, A. Franz, P. Jannin, D. Collins, and S. Duchesne, Eds. (Lecture Notes in Computer Science, no. 10434): Springer Cham, 2017, pp. 619-627. [242] U. L. Jayarathne, A. J. McLeod, T. M. Peters, and E. C. Chen, "Robust Intraoperative US Probe Tracking Using a Monocular Endoscopic Camera," in Medical Image Computing and Computer-Assisted Intervention, K. Mori, I. Sakuma, Y. Sato, C. Barillot, and N. Navab, Eds. (Lecture Notes in Computer Science, no. 8151): Springer Berlin Heidelberg, 2013. [243] N. D. Serej, A. Ahmadian, S. Mohagheghi, and S. M. Sadrehosseini, "A projected landmark method for reduction of registration error in image-guided surgery systems," Int J CARS, vol. 10, no. 5, pp. 541-554, 2015. [244] P. Pratt et al., "Robust ultrasound probe tracking: initial clinical experiences during robot-assisted partial nephrectomy," Int J CARS, vol. 12, no. 12, pp. 1905-1913, 2015. [245] C. L. Palmer, B. O. Haugen, E. Tegnanderzx, S. H. Eik-Neszx, H. Torp, and G. Kiss, "Mobile 3D augmented-reality system for ultrasound applications," in International Ultrasonics Symposium, Taipei, Taiwan, 2015: IEEE. [246] M. Jansson, "A 3D-ultrasound guidance device for central venous catheter placement using augmented reality," Master of Science, School of Technology and Health, KTH Royal Institute of Technology, Stockholm, SWE, 2017. [247] P. K. Kanithi, J. Chatterjee, and D. Sheet, "Immersive Augmented Reality System for Assisting Needle Positioning During Ultrasound Guided Intervention," in Indian Conference on Computer Vision, Graphics and Image Processing, Guwahati, Assam, India, 2016: ACM.


[248] H. Cho et al., "Augmented reality in bone tumour resection," Bone Joint Res, vol. 6, no. 3, pp. 137-143, 2017. [249] S. L. Perkins, M. A. Lin, S. Srinivasan, A. J. Wheeler, B. A. Hargreaves, and B. L. Daniel, "A Mixed-Reality System for Breast Surgical Planning," in International Symposium on Mixed and Augmented Reality, Nantes, France, 2017, pp. 269-274: IEEE. [250] A. N. Glud et al., "A fiducial skull marker for precise MRI-based stereotaxic surgery in large animal models," J Neurosci Methods, vol. 285, pp. 45-48, 2017. [251] J. N. Bentley et al., "A simple, inexpensive method for subcortical stereotactic targeting in nonhuman primates," J Neurosci Methods, vol. 305, pp. 89-97, 2018. [252] L. B. Tabrizi and M. Mahvash, "Augmented reality–guided neurosurgery: accuracy and intraoperative application of an image projection technique," J Neurosurg, vol. 123, no. 1, pp. 206-211, 2015. [253] M. Perwög, Z. Bardosi, and W. Freysinger, "Experimental validation of predicted application accuracies for computer-assisted (CAS) intraoperative navigation with paired-point registration," Int J CARS, vol. 13, pp. 425-441, 2018. [254] W. Plishker, X. Liu, and R. Shekhar, "Hybrid Tracking for Improved Registration of Laparoscopic Ultrasound and Laparoscopic Video for Augmented Reality," in Computer Assisted and Robotic Endoscopy and Clinical Image-Based Procedures, C. M, Ed. (Lecture Notes in Computer Science, no. 10550): Springer Cham, 2017, pp. 170-179. [255] M. Bajura, H. Fuchs, and R. Ohbuchi, "Merging Virtual Objects with the Real World: Seeing Ultrasound Imagery within the Patient," Computer Graphics, vol. 26, no. 2, pp. 203-210, 1992. [256] T. Ungi et al., "Navigated Breast Tumor Excision Using Electromagnetically Tracked Ultrasound and Surgical Instruments," IEEE Trans Biomed Eng, vol. 63, no. 3, pp. 600- 606, 2016. [257] K. Punithakumar et al., "Multiview Fusion using an Electromagnetic Tracking System," in International Conference of the IEEE Engineering in Medicine and Biology Society, Orlando, FL, USA, 2016, pp. 1078-1081: IEEE. [258] X. Liu, L. Gu, H. Xie, and S. Zhang, "CT-Ultrasound Registration for Electromagnetic Navigation of Cardiac Intervention," in International Congress on Image and Signal Processing, BioMedical Engineering and Informatics, Shanghai, China, 2017: IEEE. [259] X. Zhang, Z. Fan, J. Wang, and H. Liao, "3D Augmented Reality Based Orthopaedic Interventions," in Lecture Notes in Computational Vision and Biomechanics, G. Zheng and S. Li, Eds. no. 23): Springer Cham, 2016, pp. 71-90. [260] A. A. Fanous et al., "Frameless and Maskless Stereotactic Navigation with a Skull- Mounted Tracker," World Neurosurg, vol. 102, pp. 661-667, 2017. [261] J. Wang, S. Horvath, G. Stetten, M. Siegel, and J. Galeotti, "Real-Time Registration of Video with Ultrasound using Stereo Disparity," in Medical Imaging 2012: Image-Guided Procedures, Robotic Interventions, and Modeling, San Diego, CA, USA, 2012, vol. 8316: SPIE. [262] J. Wang, C. Che, J. Galeotti, S. Horvath, V. Gorantla, and G. Stetten, "Ultrasound tracking using ProbeSight: Camera pose estimation relative to external anatomy by inverse rendering of a prior high-resolution 3D surface map," presented at the Winter


Conference on Applications of Computer Vision, Santa Rosa, CA, USA, March 24-31, 2017. [263] S. Horvath et al., "Towards an Ultrasound Probe with Vision: Structured Light to Determine Surface Orientation," in Augmented Environments for Computer-Assisted Interventions, C. Linte, J. Moore, E. Chen, and D. Holmes, Eds. (Lecture Notes in Computer Science, no. 7264): Springer Berlin Heidelberg, 2012. [264] K.-H. Kwon, S.-H. Lee, and M. Y. Kim, "A three-dimensional surface registration method using a spherical unwrapping method and HK curvature descriptors for patient-to-CT registration of image guided surgery," in International Conference on Control, Automation and Systems, Gyeongju, South Korea, 2016: IEEE. [265] M. Baba, "A Low-Cost Camera-based Transducer Tracking System for Freehand Three- Dimensional Ultrasound Imaging," Master of Applied Science, Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada, 2016. [266] G. Stetten et al., "Towards a clinically useful sonic flashlight," in International Symposium on Biomedical Imaging, Waschington DC, USA, 2002: IEEE. [267] J. Wang et al., "3D Surgical Overlay with Markerless Image Registration Using a Single Camera," in Augmented Environments for Computer-Assisted Interventions, Linte C., Yaniv Z., and F. P, Eds. (Lecture Notes in Computer Science, no. 9365): Springer Cham, 2015, pp. 124-133. [268] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," 1998, vol. 86, no. 11, pp. 2278-2324: IEEE. [269] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in International Conference on Learning Representations, Sand Diego, CA, USA, 2015, pp. 1-14. [270] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 770–778: IEEE. [271] C. Szegedy et al., "Going deeper with convolutions," in IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 2015, pp. 1-9: IEEE. [272] A. Krizhevsky, I. Sutskever, and G. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks, Advances in neural information processing systems," in International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 2012, vol. 1, pp. 1097-1105: Curran Associates Inc. [273] M. Zeiler and R. Fergus, "Visualizing and understanding convolutional networks," in European Conference on Computer Vision, 2014, vol. 8689, pp. 818-833: Springer. [274] D. Lee. (2014, June 5). Calibrating a stereo camera with Python [Website]. Available: https://erget.wordpress.com/2014/02/28/calibrating-a-stereo-pair-with-python/ [275] S. Fidler. (2015, June 8). Depth from Stereo [Presentation]. Available: http://www.cs.toronto.edu/~fidler/slides/2015/CSC420/lecture12_hres.pdf [276] OpenCV. (2017, January 17). Depth Map from Stereo Images [Website]. Available: https://docs.opencv.org/3.3.1/dd/d53/tutorial_py_depthmap.html [277] OpenCV. (2017, October 10). Detection of ArUco Markers. Available: https://docs.opencv.org/3.3.1/d5/dae/tutorial_aruco_detection.html


[278] S. Mallick. (2016, September 18). Head Pose Estimation using OpenCV and Dlib [Internet]. Available: https://www.learnopencv.com/head-pose-estimation-using- opencv-and-dlib/ [279] R. Milligan. (2015, May 5). Augmented Reality using OpenCV, OpenGL and Blender [Website]. Available: https://rdmilligan.wordpress.com/2015/10/15/augmented-reality- using-opencv-opengl-and-blender/ [280] OpenCV. (2014, March 15). Camera Calibration and 3D Reconstruction. Available: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconst ruction.html#solvepnpransac [281] K. Group. (2018, June 24). OpenGL: The Industry's Foundation for High Performance Graphics. Available: https://www.opengl.org/ [282] D. Jana. (2016, March 14). 3d Animated Realistic Human Heart - V2.0. Available: https://sketchfab.com/models/168b474fba564f688048212e99b4159d [283] OpenCV. (2017, February 14). Contour Features [Website]. Available: https://docs.opencv.org/3.3.1/dd/d49/tutorial_py_contour_features.html [284] C. A. Mela, D. P. Lemmer, F. S. Bao, F. Papay, T. Hicks, and Y. Liu, "Real-time dual-modal vein imaging system," Int J CARS, pp. 1-11, 2018. [285] S. Jeon, J. Chien, C. Song, and J. Hong, "A Preliminary Study on Precision Image Guidance for Electrode Placement in an EEG Study," Brain Topogr, vol. 31, no. 2, pp. 174-185, 2018. [286] G. Badiali et al., "Augmented reality as an aid in maxillofacial surgery: Validation of a wearable system allowing maxillary repositioning," J Craniomaxillofac Surg, vol. 42, no. 8, pp. 1970-1976, 2014. [287] C. Rathgeb et al., "Accuracy and feasibility of a dedicated image guidance solution for endoscopic lateral skull base surgery," Eur Arch Otorhinolaryngol, vol. 275, no. 4, pp. 905-911, 2018. [288] B. J. Rasquinha, A. W. Dickinson, G. Venne, D. R. Pichora, and R. E. Ellis, "Crossing-Lines Registration for Direct Electromagnetic Navigation," in Medical Image Computing and Computer-Assisted Intervention, N. Navab, J. Hornegger, W. Wells, and A. Frangi, Eds. (Lecture Notes in Computer Science, no. 9350): Springer Cham, 2015, pp. 321-328. [289] B. Zeng, F. Meng, H. Ding, and G. Wang, "A surgical robot with augmented reality visualization for stereoelectroencephalography electrode implantation," Int J CARS, vol. 12, no. 8, pp. 1355-1368, 2017. [290] F. Vasconcelos, D. Peebles, S. Ourselin, and D. Stoyanov, "Spatial calibration of a 2D/ using a tracked needle," Int J CARS, vol. 11, no. 6, pp. 1091-1099, 2016. [291] G. Flaccavento, P. Lawrence, and R. Rohling, "Patient and Probe Tracking During Freehand Ultrasound," in Medical Image Computing and Computer-Assisted Intervention, Barillot C., Haynor D.R., and H. P., Eds. (Lecture Notes in Computer Science, no. 3217): Springer Berlin Heidelberg, 2004, pp. 585-593. [292] T. Lowis, K. Leslie, L. E. Barksdale, and D. O. Carter, "Determining the Sensitivity and Reliability of Hemascein," J Forensic Ident, vol. 62, no. 3, pp. 204-214, 2012. [293] T. Young, "A Photographic Comparison of Luminol, Fluorescein, and Bluestar," J Forensic Ident, vol. 56, no. 6, pp. 906-912, 2006.


[294] J. Finnis, J. Lewis, and A. Davidson, "Comparison of methods for visualizing blood on dark surfaces," Science and Justice, vol. 53, no. 2, pp. 178-186, 2013. [295] S. J. Seashols, H. D. Cross, D. L. Shrader, and A. Rief, "A Comparison of Chemical Enhancements for the Detection of Latent Blood," J Forensic Sci, vol. 58, no. 1, pp. 130- 133, 2013. [296] R. Cheeseman and L. A. DiMeo, "Fluorescein as a field-worthy latent bloodstain detection system," J Forensic Ident, vol. 45, no. 6, pp. 631-646, 1995. [297] R. Cheeseman, "Direct sensitivity comparison of the fluorescein and luminol bloodstain enhancement techniques," J Forensic Ident, vol. 49, no. 3, pp. 261-268, 1999. [298] B. Budowle, J. L. Leggitt, D. A. Defenbaugh, K. M. Keys, and S. F. Malkiewica, "The presumptive reagent fluorescein for detection of dilute bloodstains and subsequent STR typing of recovered DNA," J Forensic Sci, vol. 45, no. 5, pp. 1090-1092, 2000. [299] P. Bilous, M. McCombs, M. Sparkmon, and J. Sasaki, "Detecting Burnt Bloodstain Samples with Light-Emitting Blood Enhancement Reagents," in American Academy of Forensic Sciences, 62nd Annual Scientific Meeting, Seattle, Washington, 2010. [300] F. Barni, S. W. Lewis, A. Berti, G. M. Miskelly, and G. Lago, "Forensic application of the luminol reaction as a presumptive test for latent blood detection " Talanta, vol. 72, no. 3, pp. 896-913, 2007. [301] G. E. Miranda, F. B. Prado, F. Delwing, and E. D. Junior, "Analysis of the fluorescence of body fluids on different surfaces and times," Science and Justice, vol. 54, no. 6, pp. 427- 431, 2014. [302] K. Sheppard, J. P. Cassella, S. Fieldhouse, and R. King, "The adaptation of a 360° camera utilising an alternate light source (ALS) for the detection of biological fluids at crime scenes," Science and Justice, vol. 57, no. 4, pp. 239-249, 2017. [303] A. C.-Y. Lin, H.-M. Hsieh, L.-C. Tsai, A. Linacre, and J. C.-I. Lee, "Forensic Applications of Infrared Imaging for the Detection and Recording of Latent Evidence," J Forensic Sci, vol. 52, no. 5, pp. 1148-1150, 2007. [304] J. Kuula et al., "Using VIS/NIR and IR spectral cameras for detecting and separating crime scene details," in Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense XI, Baltimore, Maryland, 2012, vol. 8359, p. 83590P: SPIE. [305] M. van Iersel, H. Veerman, and W. van der Mark, "Modelling a crime scene in 3D and adding thermal information " in Electro-Optical and Infrared Systems: Technology and Applications VI, Berlin, Germany, 2009, vol. 7481, p. 74810M: SPIE. [306] G. M. Miskelly and J. H. Wagner, "Using spectral information in forensic imaging," Forensic Sci Int, vol. 155, no. 2-3, pp. 112-118, 2005. [307] R. H. Bremmer, G. J. Edelman, T. Vegter, T. Bijvoets, and M. C. G. Aalders, "Remote spectroscopic identification of blood stains," J Forensic Sci, vol. 56, no. 6, pp. 1471-1475, 2011. [308] G. J. Edelman, E. Gaston, T. G. van Leeuwen, P. J. Cullen, and M. C. G. Aalders, "Hyperspectral imaging for non-contact analysis of forensic traces," Forensic Sci Int, vol. 223, no. 1-3, pp. 28-39, 2012.


[309] G. J. Edelman, T. G. van Leeuwen, and M. C. G. Aalders, "Hyperspectral imaging of the crime scene for the automatic detection and identification of blood stains," Forensic Sci Int, vol. 223, no. 1-3, pp. 72-77, 2012. [310] R. O. Abu Hana, C. O. A. Freitas, L. S. Oliveira, and F. Bortolozzi, "Crime Scene Representation (2D, 3D, Stereoscopic Projection) and Classification " JUCS, vol. 14, no. 18, pp. 2953-2966, 2008. [311] L. C. Ebert, T. T. Nguyen, R. Breitbeck, M. Braun, M. J. Thali, and S. Ross, "The forensic holodeck: an immersive display for forensic crime scene reconstructions " Forensic Sci Med Pathol, vol. 10, no. 4, pp. 623-626, 2014. [312] C. Q. Little, D. E. Small, R. R. Peters, and J. B. Rigdon, "Forensic 3D scene reconstruction," INIS, vol. 33, no. 45, 1999. [313] D. A. Komar, S. Davy-Jow, and S. J. Decker, "The Use of a 3-D Laser Scanner to Document Ephemeral Evidence at Crime Scenes and Postmortem Examinations " J Forensic Sci, vol. 57, no. 1, pp. 188-191, 2012. [314] H. Gonzalez-Jorge, S. Zancajo, D. Gonzalez-Aguilera, and P. Arias, "Application of kinect gaming sensor in forensic science," J Forensic Sci, vol. 60, no. 1, pp. 206-211, 2015. [315] M. J. Thali et al., "--scientific documentation, reconstruction and animation in forensic: individual and real 3D data based geo-metric approach including optical body/object surface and radiological CT/MRI scanning," J Forensic Sci, vol. 50, no. 2, pp. 428-442, 2005. [316] A. K. Chong, M. F. M. Ariff, Z. Majid, and H. Setan, "Night-time Surveillance System for Forensic 3D Mapping," in International Congress on Image and Signal Processing, Yantai, China, 2010, pp. 502-506: IEEE. [317] S. Se and P. Jasiobedzki, "Photo-realistic 3D Model Reconstruction," in IEEE International Conference on Robotics and Automation, Orlando, Florida, 2006, pp. 3076-3082: IEEE. [318] W. van der Mark, G. Burghouts, E. den Dekker, T. ten Kate, and J. Schavemaker, "3-D Scene Reconstruction with a Handheld Stereo Camera," in COGnitive systems with Interactive Sensors Stanford University, California, 2007. [319] S. Yun, D. Min, and K. Sohn, "3D Scene Reconstruction System with Hand-Held Stereo Cameras," in 3DTV Conference, Kos, Greece, 2007: IEEE. [320] A. Topol et al., "Generating Semantic Information from 3D Scans of Crime Scenes," in Canadian Conference on Computer and Robot Vision, Windsor, Ontario, Canada, 2008, pp. 333-340: IEEE. [321] C. Mela, C. Patterson, and Y. Liu, "A miniature wearable optical imaging system for guiding surgeries," in SPIE Photonics West, San Francisco, California, 2015, vol. 9311, no. 93110Z, p. 8: SPIE. [322] C. Mela, C. Patterson, W. Thompson, F. Papay, and Y. Liu, "Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance," Plos One, vol. 10, no. 11, p. e0141956, 2015. [323] B. Technologies. (2012, January 24). Law Enforcement and CSI. Available: https://www.bodelin.com/proscope/law-enforcement [324] L. Iocchi, K. Konolige, and M. Bajracharya, "Visually Realistic Mapping of a Planar Environment with Stereo " in Experimental Robotics VII, vol. 271, D. Rus and S.


Singh, Eds. (Lecture Notes in Control and Information Sciences: Springer, 2001, pp. 521- 532. [325] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans Pattern Anal Mach Intell, vol. 22, no. 11, pp. 1330-1334, 2000. [326] OpenCV. (2014, 2/3/2018). Camera Calibration and 3D Reconstruction. Available: https://docs.opencv.org/3.0- beta/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#id12 [327] H. Zeman, G. Lovhoiden, C. Vrancken, and R. Danish, "Prototype vein contrast enhancer," Opt Eng, vol. 44, no. 8, p. 086401, 2005. [328] R. K. Miyake et al., "Vein imaging: a new method of near infrared imaging, where a processed image is projected onto the skin for the enhancement of vein treatment," (in eng), Dermatol Surg, vol. 32, no. 8, pp. 1031-8, 2006. [329] V. P. Zharov, S. Ferguson, J. F. Eidt, P. C. Howard, L. M. Fink, and M. Waner, "Infrared Imaging of Subcutaneous Veins," Lasers Surg Med, vol. 34, pp. 56-61, 2004. [330] R. Fuksis, M. Greitans, O. Nikisins, and M. Pudzs, "Infrared Imaging System for Analysis of Blood Vessel Structure," Electronics and Electrical Engineering - Kaunas: Technologija, vol. 97, no. 1, pp. 45-48, 2010. [331] S. Crisan, J. G. Tarnovan, and T. E. Crisan, "A Low Cost Vein Detection System Using Near Infrared Radiation," in IEEE Sensors Applications Symposium, San Diego, CA USA, 2007: IEEE. [332] M. Mansoor, S. S.N., S. Z. Naqvi, I. Badshah, and M. Saleem, "Real-time low cast infrared vein imaging system," in International Conference on Signal Processing, Image Processing and Pattern Recognition, Coimbatore, India, 2013: IEEE. [333] V. Paquita, J. R. Pricea, F. Meriaudeaub, K. W. Tobina, and T. L. Ferrellc, "Combining near-infrared illuminants to optimize venous imaging," in Medical Imaging 2007: Visualization and Image-Guided Procedures, San Diego, CA USA, 2006, vol. 6509, p. 65090H: SPIE. [334] L. Wang and G. Leedham, "Near- and Far- Infrared Imaging for Vein Pattern Biometrics," in IEEE International Conference on Video and Signal Based Surveillance Sydney, NSW Australia, 2006: IEEE. [335] E. C. Lee, H. Jung, and D. Kim, "New finger biometric method using near infrared imaging," (in eng), Sensors, vol. 11, no. 3, pp. 2319-33, 2011. [336] G. K. O. Michael, T. Connie, and A. B. J. Teoh, "A Contactless Biometric System Using Palm Print and Palm Vein Features," in Advanced Biometric Technologies, G. Chetty, Ed.: InTech, 2011, pp. 155-178. [337] E. Z. Cai, K. Sankaran, M. Tan, Y. H. Chan, and T. C. Lim, "Pen Torch Transillumination: Difficult Venepuncture Made Easy," World J Surg, vol. 41, pp. 2401-2408, 2017. [338] N. J. Cuper et al., "The use of near-infrared light for safe and effective visualization of subsurface blood vessels to facilitate blood withdrawal in children," Med Eng Phys, vol. 35, pp. 433-440, 2013. [339] D. Ai et al., "Augmented reality based real-time subcutaneous vein imaging system," Biomed Opt, vol. 7, no. 7, pp. 2565-2585, 2016. [340] M. Francis, A. Jose, G. Devadhas, and K. Avinashe, "A novel technique for forearm blood vein detection and enhancement," Biomed Res, vol. 28, no. 7, pp. 2913-2919, 2017.


[341] K. Seker and M. Engin, "Deep tissue near-infrared imaging for vascular network analysis," J Innov Opt Health Sci, vol. 10, no. 3, 2016. [342] D. Kim, Y. Kim, S. Yoon, and D. Lee, "Preliminary Study for Designing a Novel Vein- Visualizing Device," Sensors, vol. 17, no. 304, 2017. [343] S. Lee, S. Park, and D. Lee, "A phantom study on the propagation of NIR rays under the skin for designing a novel vein-visualizing device," in 13th International Conference on Control, Automation and Systems, Gwangui, Korea, 2013: IEEE. [344] P. N. Prasad, Introduction to Biophotonics. Hoboken, NJ, USA: John Wiley & Sons Inc, 2003, p. 593. [345] A. Newman, Photographic Techniques in Scientific Research. Cambridge, MA: Academic Press Inc, 1976, p. 458. [346] L. Wang, G. Leedham, and D. S.-Y. Choa, "Minutiae feature analysis for infrared hand vein pattern biometrics," Pattern Recognit, vol. 41, pp. 920-929, 2008. [347] L. Chen, J. Wang, S. Yang, and H. He, "A Finger Vein Image-Based Personal Identification System With Self-Adaptive Illuminance Control," IEEE Transactions on Instrumentation and Measurement, vol. 66, no. 2, pp. 294-304, 2017. [348] R. Kaddoum et al., "A randomized controlled trial comparing the AccuVein AV300 device to standard insertion technique for intravenous cannulation of anesthetized children," Paediatr Anaesth, vol. 22, no. 9, pp. 884-889, 2012. [349] M. W. Chu, J. R. Sarik, L. C. Wu, J. M. Serletti, and J. Bank, "Non-Invasive Imaging of Preoperative Mapping of Superficial Veins in Free Flap Breast Reconstruction," Arch Plast Surg, vol. 43, pp. 119-121, 2016. [350] J. C. Hebden, A. Alkhaja, L. Mahe, S. Powell, and N. Everdell, "Measurement of contrast of phantom and in vivo subsurface blood vessels using two near-infrared imaging systems," in Optical Diagnostics and Sensing XV: Toward Point-of-Care Diagnostics, San Francisco, CA, USA, 2015, vol. 9332, p. 933213: SPIE. [351] F. Wang, A. Behrooz, M. Morris, and A. Adibi, "High-contrast subcutaneous vein detection and localization using multispectral imaging," J Biomed Opt, vol. 18, no. 5, 2013. [352] V. Paquit, K. Tobin, J. Price, and F. Mèriaudeau, "3D and multispectral imaging for subcutaneous veins detection," Opt Express, vol. 16, no. 14, pp. 11360-11365, 2009. [353] A. I. Chen, M. L. Balter, T. J. Maguire, and M. L. Yarmush, "3D Near Infrared and Ultrasound Imaging of Peripheral Blood Vessels for Real-Time Localization and Needle Guidance," in Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 2016, vol. 9902, pp. 388-396: Springer. [354] T. Ahmed et al., "Real time injecting device with automated robust vein detection using near infrared camera and live video," in Global Humanitarian Technology Conference (GHTC), San Jose, CA, USA, 2017: IEEE. [355] Y. Katsogridakis, R. Seshadri, C. Sullivan, and M. Waltzman, "Veinlite transillumination in the pediatric emergency department: A therapeutic interventional trial," Pediatr Emerg Care, vol. 24, pp. 83-88, 2008. [356] M. Nizamoglu, A. Tan, H. Gerrish, D. Barnes, and P. Dziewulski, "Infrared technology to improve efficacy of venous access in burns population," Eur J Plast Surg, vol. 39, pp. 37- 40, 2016.


[357] T. Nakasa et al., "In-vivo imaging of the sentinel vein using the near-infrared vascular imaging system in hallux valgus patients," J Orthop Sci, vol. 22, no. 6, pp. 1066-1070, 2017. [358] L. Ramer, P. Hunt, E. Ortega, J. Knowlton, R. Briggs, and S. Hirokawa, "Effect of Intravenous (IV) Assistive Device (VeinViewer) on IV Access Attempts, Procedural Time, and Patient and Nurse Satisfaction," J Pediatr Oncol Nurs, vol. 33, no. 4, pp. 273-281, 2016. [359] A. Bandara, K. Rajarata, and P. Giragama, "Super-efficient spatially adaptive contrast enhancement algorithm for superficial vein imaging," in IEEE International Conference on Industrial and Information Systems, Peradeniya, Sri Lanka, 2017: IEEE. [360] M. Yakno, J. M. Saleh, and B. A. Rosdi, "Low Contrast Hand Vein Image Enhancement," in IEEE International Conference on Signal and Image Processing Applications, Kuala Lumpur, Malaysia, 2011: IEEE. [361] M. Asrar, A. Al-Habaibeh, and M. Houda, "Innovative algorithm to evaluate the capabilities of visual, near infrared, and infrared technologies for the detection of veins for intravenous cannulation," Appl Opt, vol. 55, no. 34, pp. D67-D75, 2016. [362] S. Pizer et al., "Adaptive Histogram Equalization and Its Variations," Comp Vis Graphics & Im Proc, vol. 39, pp. 355-368, 1987. [363] A. Reza, "Realization of the Contrast Limited Adaptive Histogram Equalization (CLAHE) for Real-Time Image Enhancement," J VLSI Signal Process Syst Signal Image Video Technol, vol. 38, no. 1, pp. 35–44, 2004.


APPENDIX A

This appendix contains additional fluorescence characterization data not included in Chapter 3, a table summarizing user comments on the tested AR displays, and tabulated results from the statistical analyses in Chapter 3, including the ANOVA and Tukey tests.


Figure 1A Minimum dye concentrations required to achieve an SBR of 2 in the tissue phantom at working distances of (A) 40 cm and (B) 60 cm. Readings were taken over a range of fluorescent inclusion volumes, inclusion depths in the tissue phantom, and excitation intensities. Results indicated that the 1 mW/cm2 excitation intensity provided the lowest minimum dye concentrations.


Figure 2A Minimum dye concentrations required to achieve SBR values of 1.2, 1.5 and 2 using the (A) 24-well and (B) 48-well volumes of ICG in DMSO. Tests were conducted over a range of working distances and excitation light intensities.
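For reference, the minimum concentrations reported in Figures 1A and 2A correspond to the lowest dye concentration at which the measured SBR reaches the stated threshold. A minimal sketch of how such a threshold concentration can be estimated from a measured concentration-SBR curve is given below (Python with NumPy); the interpolation approach and the sample values are illustrative assumptions, not the exact procedure used to generate the figures.

import numpy as np

def min_concentration_for_sbr(concentrations, sbrs, target_sbr=2.0):
    """Estimate the lowest dye concentration whose SBR reaches the target
    value by linear interpolation of the measured concentration-SBR curve.
    Returns None if the target SBR is never reached."""
    concentrations = np.asarray(concentrations, dtype=float)
    sbrs = np.asarray(sbrs, dtype=float)
    if sbrs.max() < target_sbr:
        return None
    order = np.argsort(sbrs)              # np.interp requires increasing x values
    return float(np.interp(target_sbr, sbrs[order], concentrations[order]))

# Hypothetical example: SBR measured at four dye concentrations (µM)
print(min_concentration_for_sbr([0.1, 0.5, 1.0, 5.0], [1.1, 1.6, 2.4, 6.0], target_sbr=2.0))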


Table 1A Volunteer feedback concerning the advantages and disadvantages of each of the three tested AR displays.

Display   Advantages               Disadvantages
MyBud     Bright, Good Contrast    Difficult Stereo Perception
Vuzix     Best Stereo Perception   Dim, Low Contrast
LCD       Large display size       Poor Resolution


APPENDIX B

This appendix presents additional data detailing and comparing Signal-to-Background Ratios (SBRs) for pulsed-light fluorescence imaging versus conventional steady-state (DC) fluorescence imaging. Data were acquired using the system and procedures described in Chapter 4.
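For context, SBR is conventionally computed as the mean fluorescence intensity within the signal region divided by the mean intensity within a nearby background region. A minimal sketch of that calculation is given below; the region-of-interest coordinates and file names are hypothetical, and this is not the exact analysis code used in Chapter 4.

import numpy as np

def signal_to_background_ratio(frame, signal_roi, background_roi):
    """Compute SBR as the mean pixel intensity inside the signal ROI divided
    by the mean intensity inside a nearby background ROI.

    frame: 2-D array (grayscale fluorescence image)
    signal_roi, background_roi: (row_start, row_end, col_start, col_end)
    """
    r0, r1, c0, c1 = signal_roi
    signal = frame[r0:r1, c0:c1].astype(float).mean()
    r0, r1, c0, c1 = background_roi
    background = frame[r0:r1, c0:c1].astype(float).mean()
    return signal / background

# Hypothetical comparison of pulsed versus steady-state (DC) frames:
# pulsed = cv2.imread("pulsed_frame.png", cv2.IMREAD_GRAYSCALE)
# dc     = cv2.imread("dc_frame.png", cv2.IMREAD_GRAYSCALE)
# print(signal_to_background_ratio(pulsed, (100, 150, 200, 250), (300, 350, 200, 250)))
# print(signal_to_background_ratio(dc,     (100, 150, 200, 250), (300, 350, 200, 250)))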

Figure 1B Signal-to-Background Ratio (SBR) for the detection of a 2 mL volume of the fluorescent dye PpIX in a DMSO solution over a range of prepared dye concentrations and excitation intensities. Dye volumes were placed into a 24-well cell culture plate for imaging. Results show a significantly higher SBR when using the pulsed light regime for fluorescence detection.


Figure 2B Signal-to-Background Ratio (SBR) for the detection of a 50 µL volume of the fluorescent dye PpIX in a DMSO solution over a range of prepared dye concentrations and excitation intensities. Dye volumes were spread topically onto the skin surface of a pig ear for imaging. Results show a significantly higher SBR when using the pulsed light regime for fluorescence detection.


APPENDIX C

Included here are additional multimodal imaging data, related to Chapter 5, that were not included in the main text. Figure 1C was modified from: "Mela C.A., Papay F.A., Liu Y. (2016) Intraoperative Fluorescence Imaging and Multimodal Surgical Navigation Using Goggle System. In: Bai M. (eds.) In Vivo Fluorescence Imaging. Methods in Molecular Biology, vol. 1444. Humana Press, New York, NY."

Figure 1C Preliminary multimodal imaging data, taken using the early version of the goggle system presented in Chapter 2. (A) Ultrasound images, pseudo-colored red, were registered to the fiducial markers (3 NIR LEDs) on the transducer (yellow arrow). The ultrasound image was registered over a fluorescent inclusion made in a chicken breast; an additional inclusion is indicated by the green arrow. (B) Reconstructed 3D MRI data (tan) of the bust and brain of a man (yellow arrow), registered to the strong fluorescence imaged by the goggle system (green arrow).
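As a rough illustration of the fiducial-based overlay in panel (A), the sketch below estimates an affine mapping from the ultrasound image plane to the camera frame using the centroids of three bright LED markers, then warps and pseudo-colors the ultrasound frame in red. It is a minimal example with assumed file names, thresholds, and marker geometry, not the goggle system's actual registration code.

    # Minimal illustrative example (not the system's implementation): overlay a
    # pseudo-colored ultrasound frame onto an NIR camera image via 3 LED fiducials.
    import cv2
    import numpy as np

    nir = cv2.imread("nir_frame.png", cv2.IMREAD_GRAYSCALE)        # assumed NIR camera frame
    us = cv2.imread("ultrasound_frame.png", cv2.IMREAD_GRAYSCALE)  # assumed ultrasound frame

    # Segment the three brightest blobs (the NIR LEDs on the transducer).
    _, mask = cv2.threshold(nir, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:3]
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        centroids.append([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    # A real system must match each detected centroid to the correct physical LED;
    # a fixed ordering is simply assumed here.
    dst_pts = np.array(centroids, dtype=np.float32)

    # Assumed calibration: positions of the same three LEDs in ultrasound image coordinates.
    src_pts = np.array([[0, 0], [640, 0], [320, 60]], dtype=np.float32)

    # Estimate the affine transform, warp the ultrasound frame, and blend it in red.
    M = cv2.getAffineTransform(src_pts, dst_pts)
    warped = cv2.warpAffine(us, M, (nir.shape[1], nir.shape[0]))
    overlay = cv2.cvtColor(nir, cv2.COLOR_GRAY2BGR)
    overlay[:, :, 2] = np.maximum(overlay[:, :, 2], warped)
    cv2.imwrite("registered_overlay.png", overlay)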

Figure 2C Real-time ultrasound imagery of the hand and fingers registered to the transducer, and aligned with the hand under inspection.


APPENDIX D

Details regarding the statistical analyses from all chapters have been integrated into this appendix. The first section covers statistical results for the data from Chapter 3, including the Dark Room, Tissue Phantom, and surgical studies, in that order. The second section presents the statistical analyses from Chapter 4 in greater detail, analyzing differences in dye detection across system and target parameters.
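For readers who wish to reproduce this style of analysis, the following sketch shows an equivalent main-effects ANOVA and post-hoc Tukey test in Python using statsmodels. The file name and column layout are assumptions for illustration only; this is not the code used to produce the tables reported below.

    # Illustrative sketch: a main-effects ANOVA on SBR followed by a Tukey
    # post-hoc test, analogous to the tables in this appendix.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Assumed layout: one row per measurement with columns
    # SBR, Distance, Intensity, Concentration, Category.
    df = pd.read_csv("dark_room_sbr.csv")

    # Main-effects ANOVA treating every factor as categorical.
    model = smf.ols(
        "SBR ~ C(Distance) + C(Intensity) + C(Concentration) + C(Category)",
        data=df,
    ).fit()
    print(anova_lm(model, typ=2))  # sums of squares, F-values, p-values per factor

    # Tukey pairwise comparisons (95% confidence) for one factor, giving the mean
    # differences, confidence intervals, and adjusted p-values tabulated below.
    tukey = pairwise_tukeyhsd(endog=df["SBR"], groups=df["Category"], alpha=0.05)
    print(tukey.summary())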

D.1. Statistical analysis of SBR data in Chapter 3.

D.1.1. Dark Room Study

Table 1D.

Analysis of variance for the Dark Room Study. Significantly different SBR results were found when varying Dye Concentration, Excitation Intensity, or Volumetric Category; variations in Working Distance were not found to be significant.

Source          DF    Adj SS    Adj MS    F-Value   P-Value
Distance          2     17.90     8.950      2.05     0.130
Intensity         4    688.21   172.053     39.37    <0.001
Concentration     8   3502.48   437.810    100.17    <0.001
Category          2    466.75   233.373     53.40    <0.001
Error           388   1695.82     4.371
Total           404   6371.16


Table 2D.

Grouping information with p-values using the Tukey Method and 95% confidence, for Volumetric Categories.

Container   N     Mean      Grouping
Tube        135   4.95126   A
24-Well     135   3.24742   B
48-Well     135   2.36475   C

Difference of       Difference   SE of        Simultaneous                  Adjusted
Container Levels    of Means     Difference   95% CI              T-Value   P-Value
48Well - 24Well     -0.883       0.254        (-1.478, -0.287)    -3.47     0.0015
Tube - 24Well        1.704       0.254        (1.108, 2.299)       6.70     <0.001
Tube - 48Well        2.587       0.254        (1.991, 3.182)      10.16     <0.001

Table 3D.

Grouping information with p-values using the Tukey Method (CL = 95%, α = 0.05), for Working Distances within each Volumetric Category: Top: Tubes; Middle: 48 Well Plate; Bottom: 24 Well Plate. Note that a significant difference in mean SBR values was only found between the 20 and 60 cm working distances for the 48 Well Plate.


Table 4D.

Grouping information with p-values using the Tukey Method (CL = 95%, α = 0.05), for Excitation Intensities within the Tube Volumetric Category.

Intensity   N    Mean      Grouping
4000        27   7.75397   A
2000        27   6.64997   A
1000        27   4.82475   B
500         27   3.28336   B C
250         27   2.24423   C

Difference of      Difference   SE of        Simultaneous                 Adjusted
Intensity Levels   of Means     Difference   95% CI             T-Value   P-Value
500-250            1.039        0.628        (-0.703, 2.781)    1.65      0.4665
1000-250           2.581        0.628        (0.839, 4.322)     4.11      0.0007
2000-250           4.406        0.628        (2.664, 6.148)     7.01      1.082E-05
4000-250           5.51         0.628        (3.768, 7.252)     8.77      1.082E-05
1000-500           1.541        0.628        (-0.200, 3.283)    2.45      0.1086
2000-500           3.367        0.628        (1.625, 5.108)     5.36      1.472E-05
4000-500           4.471        0.628        (2.729, 6.212)     7.11      1.082E-05
2000-1000          1.825        0.628        (0.083, 3.567)     2.90      0.0348
4000-1000          2.929        0.628        (1.187, 4.671)     4.66      8.943E-05
4000-2000          1.104        0.628        (-0.638, 2.846)    1.76      0.4036

Table 5D.

Grouping information with p-values using the Tukey Method (CL = 95%, α = 0.05), for Excitation Intensities within the 48 Well Volumetric Category.

Intensity   N    Mean      Grouping
4000        27   3.4196    A
2000        27   2.99156   A B
1000        27   2.2391    B C
500         27   1.76373   C
250         27   1.40977   C

Difference of      Difference   SE of        Simultaneous                 Adjusted
Intensity Levels   of Means     Difference   95% CI             T-Value   P-Value
500-250            0.354        0.338        (-0.582, 1.290)    1.05      0.8320
1000-250           0.829        0.338        (-0.106, 1.765)    2.46      0.1076
2000-250           1.582        0.338        (0.646, 2.517)     4.69      8.184E-05
4000-250           2.01         0.338        (1.074, 2.945)     5.95      1.105E-05
1000-500           0.475        0.338        (-0.460, 1.411)    1.41      0.6235
2000-500           1.228        0.338        (0.292, 2.163)     3.64      0.0036
4000-500           1.656        0.338        (0.720, 2.591)     4.91      3.907E-05
2000-1000          0.752        0.338        (-0.183, 1.688)    2.23      0.1760
4000-1000          1.18         0.338        (0.245, 2.116)     3.50      0.0058
4000-2000          0.428        0.338        (-0.508, 1.364)    1.27      0.7111


Table 5D.

Grouping information with p-values using the Tukey Method (CL = 95%, α = 0.05), for Excitation Intensities within the 24 Well Volumetric Category.

Intensity   N    Mean      Grouping
4000        27   4.59209   A
2000        27   4.31672   A B
1000        27   3.36307   B
500         27   2.30368   C
250         27   1.66155   C

Difference of      Difference   SE of        Simultaneous                 Adjusted
Intensity Levels   of Means     Difference   95% CI             T-Value   P-Value
500-250            0.642        0.374        (-0.395, 1.679)    1.72      0.4277
1000-250           1.702        0.374        (0.665, 2.738)     4.55      0.0001
2000-250           2.655        0.374        (1.618, 3.692)     7.10      1.0823E-05
4000-250           2.931        0.374        (1.894, 3.967)     7.83      1.0823E-05
1000-500           1.059        0.374        (0.023, 2.096)     2.83      0.0424
2000-500           2.013        0.374        (0.976, 3.050)     5.38      1.4303E-05
4000-500           2.288        0.374        (1.252, 3.325)     6.12      1.0929E-05
2000-1000          0.954        0.374        (-0.083, 1.990)    2.55      0.0867
4000-1000          1.229        0.374        (0.192, 2.266)     3.29      0.0115
4000-2000          0.275        0.374        (-0.761, 1.312)    0.74      0.9476


Table 6D.

Grouping information with p-values using the Tukey Method (CL = 95%, α = 0.05), for Dye Concentrations within the Tube Volumetric Category.


Table 7D.

Grouping information with p-values using the Tukey Method (CL = 95%, α = 0.05), for Dye Concentrations within the 24 Well Volumetric Category.


Table 8D.

Grouping information with p-values using the Tukey Method (CL = 95%, α = 0.05), for Dye Concentrations within the 48 Well Volumetric Category.


D.1.2. Tissue Phantom Study

Table 9D.

Analysis of variance for the Tissue Phantom Study. Variables resulting in at least some significantly different SBR results when changed included Tissue Depth, Working Distance, Excitation Intensity and Dye Concentration.

Source          DF    Adj SS    Adj MS    F-Value   P-Value
Depth             4    276.49    69.122     95.03    <0.001
Distance          2     62.67    31.333     43.08    <0.001
Intensity         4     30.53     7.631     10.49    3.57E-08
Concentration     6    783.37   130.561    179.50    <0.001
Error           508    369.49     0.727
Total           524   1522.54

Table 10D.

Grouping information with p-values using the Tukey Method (CL = 95%, α = 0.05), for the Tissue Depth variable factor.

Depth   N     Mean   Grouping
0       105   3.92   A
1       105   2.71   B
3       105   2.39   C
5       105   2.29   C
10      105   1.75   D

Difference of   Difference   SE of        Individual
Depth Levels    of Means     Difference   95% CI              T-Value   P-Value
1-0             -1.21        0.118        (-1.443, -0.981)    -10.30    0.00E+00
3-0             -1.53        0.118        (-1.761, -1.299)    -13.00    0.00E+00
5-0             -1.64        0.118        (-1.868, -1.406)    -13.91    0.00E+00
10-0            -2.18        0.118        (-2.409, -1.946)    -18.50    0.00E+00
3-1             -0.32        0.118        (-0.549, -0.087)     -2.70    7.11E-03
5-1             -0.43        0.118        (-0.656, -0.194)     -3.61    3.35E-04
10-1            -0.97        0.118        (-1.197, -0.734)     -8.20    2.00E-15
5-3             -0.11        0.118        (-0.338, 0.124)      -0.91    3.64E-01
10-3            -0.65        0.118        (-0.878, -0.416)     -5.50    6.08E-08
10-5            -0.54        0.118        (-0.771, -0.309)     -4.59    5.60E-06


Table 11D.

Grouping information with p-values using the Tukey Method (CL = 95%, α = 0.05), for the Working Distance variable factor.

Distance   N     Mean      Grouping
60         175   2.9094    A
40         175   2.80112   A
20         175   2.12839   B

Difference of     Difference   SE of        Individual
Distance Levels   of Means     Difference   95% CI              T-Value   P-Value
40-20             0.67         0.0912       (0.4593, 0.8861)    7.38      0.00E+00
60-20             0.78         0.0912       (0.5676, 0.9944)    8.57      0.00E+00
60-40             0.11         0.0912       (-0.1051, 0.3217)   1.19      4.61E-01

Table 12D.

Grouping information with p-values using the Tukey Method (CL = 95%, α = 0.05), for the Excitation Intensity variable factor.

Intensity   N     Mean      Grouping
1000        105   2.93886   A
500         105   2.87518   A
2000        105   2.43288   B
250         105   2.41572   B
4000        105   2.40219   B

Difference of      Difference   SE of        Individual
Intensity Levels   of Means     Difference   95% CI              T-Value   P-Value
500-250             0.46        0.118        (0.228, 0.691)       3.90     1.08E-04
1000-250            0.52        0.118        (0.292, 0.754)       4.44     1.08E-05
2000-250            0.02        0.118        (-0.214, 0.248)      0.15     8.84E-01
4000-250           -0.01        0.118        (-0.245, 0.218)     -0.11     9.09E-01
1000-500            0.06        0.118        (-0.168, 0.295)      0.54     5.89E-01
2000-500           -0.44        0.118        (-0.674, -0.211)    -3.76     1.91E-04
4000-500           -0.47        0.118        (-0.704, -0.242)    -4.02     6.74E-05
2000-1000          -0.51        0.118        (-0.737, -0.275)    -4.30     2.06E-05
4000-1000          -0.54        0.118        (-0.768, -0.305)    -4.56     6.43E-06
4000-2000          -0.03        0.118        (-0.262, 0.201)     -0.26     7.94E-01


Table 13D.

Grouping information with p-values using the Tukey Method (CL = 95%, α = 0.05), for the Dye Concentration variable factor.

Concentration   N    Mean   Grouping
2000            75   4.40   A
1000            75   3.83   B
500             75   3.42   C
250             75   2.62   D
125             75   1.69   E
62.5            75   1.32   F
50              75   1.00   G

Difference of          Difference   SE of        Individual
Concentration Levels   of Means     Difference   95% CI             T-Value   P-Value
62.5-50.0              0.32         0.139        (0.049, 0.596)      2.32     2.10E-02
125-50.0               0.69         0.139        (0.420, 0.967)      4.98     8.64E-07
250-50.0               1.62         0.139        (1.348, 1.896)     11.65     0.00E+00
500-50.0               2.42         0.139        (2.147, 2.694)     17.38     0.00E+00
1000-50.0              2.83         0.139        (2.558, 3.105)     20.33     0.00E+00
2000-50.0              3.40         0.139        (3.127, 3.675)     24.42     0.00E+00
125-62.5               0.37         0.139        (0.098, 0.645)      2.67     7.91E-03
250-62.5               1.30         0.139        (1.026, 1.573)      9.33     0.00E+00
500-62.5               2.10         0.139        (1.824, 2.371)     15.06     0.00E+00
1000-62.5              2.51         0.139        (2.235, 2.782)     18.01     0.00E+00
2000-62.5              3.08         0.139        (2.805, 3.352)     22.11     0.00E+00
250-125                0.93         0.139        (0.654, 1.202)      6.66     6.95E-11
500-125                1.73         0.139        (1.453, 2.000)     12.40     0.00E+00
1000-125               2.14         0.139        (1.864, 2.411)     15.35     0.00E+00
2000-125               2.71         0.139        (2.434, 2.981)     19.44     0.00E+00
500-250                0.80         0.139        (0.525, 1.072)      5.73     1.71E-08
1000-250               1.21         0.139        (0.936, 1.483)      8.68     1.00E-16
2000-250               1.78         0.139        (1.506, 2.053)     12.77     0.00E+00
1000-500               0.41         0.139        (0.137, 0.685)      2.95     3.31E-03
2000-500               0.98         0.139        (0.707, 1.255)      7.04     6.14E-12
2000-1000              0.57         0.139        (0.296, 0.844)      4.09     4.97E-05


Table 14D.

Grouping information with p-values using the Tukey Method (CL = 95%, α = 0.05), for the Volume variable factor.

Volume   N     Mean      Grouping
2        525   2.61297   A
1        525   2.36152   B
0.5      525   2.09886   C
0.25     525   1.5336    D

Difference of   Difference   SE of        Individual
Volume Levels   of Means     Difference   95% CI              T-Value   P-Value
0.50-0.25       0.57         0.0517       (0.4326, 0.6979)    10.94     0.00E+00
1.00-0.25       0.83         0.0517       (0.6953, 0.9606)    16.02     0.00E+00
2.00-0.25       1.08         0.0517       (0.9467, 1.2120)    20.89     0.00E+00
1.00-0.50       0.26         0.0517       (0.1300, 0.3953)     5.08     1.96E-06
2.00-0.50       0.51         0.0517       (0.3815, 0.6467)     9.95     0.00E+00
2.00-1.00       0.25         0.0517       (0.1188, 0.3841)     4.87     6.47E-06

D.1.3. Surgical Simulation

Table 15D.

Analysis of variance for the Surgical Study. Results include Time of Completion, Weight of Resected Tissue, Resected Tissue Fluorescent Intensity and Margin Fluorescent Intensity. Significantly different margin and resected-tissue intensities were observed between systems (i.e., Goggle versus Stand). No significant differences were found between users (Doctor, Student, or Engineer).


D.2. Statistical analysis of SBR data in Chapter 4.

D.2.1. Dark room study using well plates, comparing pulsed with steady state excitation

Table 16D.

Analysis of variance on SBR values for pulsed light studies on well plates. Variable factors include Excitation Intensity (Int), Dye Concentration (Conc) and Imaging Mode (Mode), comparing pulsed light imaging with steady excitation. Significant differences were observed for all factors; post-hoc Tukey test grouping results elucidate the differences in imaging mode.

D.2.2. Pig skin study, comparing pulsed with steady state excitation

Table 17D.

Analysis of variance on SBR values for pulsed light studies on pig skin. Variable factors include Excitation Intensity (Int), Dye Concentration (Conc) and Imaging Mode (Mode), comparing pulsed light imaging with steady excitation. Significant differences were observed for all factors; post-hoc Tukey test grouping results elucidate the differences by imaging mode.

240