CUSTOMIZABLE 3-D VIRTUAL GI TRACT SYSTEMS FOR LOCATING, MAPPING AND NAVIGATION INSIDE HUMAN GASTROINTESTINAL TRACT

A thesis submitted in partial fulfillment

of the requirements for the degree of

Master of Science

By

MEGHA DATTATREY DALVI, B.E., Visvesvaraya Technological University, 2011

2016 Wright State University

WRIGHT STATE UNIVERSITY

GRADUATE SCHOOL

January 6, 2017

I HEREBY RECOMMEND THAT THE THESIS PREPARED UNDER MY SUPERVISION BY Megha Dattatrey Dalvi ENTITLED Customizable 3-D Virtual GI Tract Systems For Locating, Mapping And Navigation Inside Human Gastrointestinal Tract BE ACCEPTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science.

______Yong Pei, Ph.D. Thesis Director

______Mateen M. Rizki, Ph.D. Chair, Department of Computer Science And Engineering Committee on Final Examination

______Yong Pei, Ph.D.

______Mateen M. Rizki, Ph.D. (Co-Thesis Director)

______Paul Bender, Ph.D.

______Robert E.W. Fyffe, Ph.D. Vice President for Research and Dean of the Graduate School

ABSTRACT

Dalvi, Megha Dattatrey. M.S. Department of Computer Science and Engineering, Wright State University, 2016. Customizable 3-D Virtual GI Tract Systems for Locating, Mapping and Navigation inside Human Gastrointestinal Tract.

One of the critical challenges of wireless capsule examination is to find the exact position of the capsule in the gastrointestinal (GI) tract, so as to correctly and accurately locate intestinal diseases. Creating a 3D virtual GI tract system could significantly improve capsule endoscopy operations.

The virtual human model, such as the BioDigital Human, has been credited as the Google Earth for the human body, providing medically accurate virtual body and organ structures. However, it assembles only a "standard" human body. The problem is: there is only one Earth, but billions of people.

Every patient's GI tract varies in length, width, and structure, depending on his or her body build, the kind of food intake, and other factors.

Clearly, a virtual GI tract system based on standard patient anatomy, such as the Digital Human project, is useful as a reference in anatomy studies, but it is not sufficient to provide locating and navigation for each individual patient's disease diagnosis and treatment in GI medicine. As a result, this thesis proposes a customizable 3D GI tract model and a prototype system that illustrates the automatic generation of an individual-specific virtual GI tract system from sensory data that could be collected by the capsule. The resulting highly customized 3D map of each patient's GI tract can then be used for high-precision navigation of drug delivery capsules or surgical micro-robots within the GI tract.


Specifically, the main interest of this thesis research is in developing an accurate 3D map of the human GI system that can be automatically customized for each individual patient, such that it can support high-precision navigation for the endoscopy and drug delivery capsules used in GI medicine. Such an individually customized virtual human GI tract can provide the physician with more intuitive views of the endoscopy-obtained images and make it easy to obtain the location of interest, as each image has already been accurately mapped to its corresponding location and orientation in the GI tract. These new technologies will significantly improve the accuracy, efficiency, and workflow performance in the diagnosis and treatment of gastrointestinal diseases.

In particular, this thesis research has: (i) designed and developed an individually customizable 3D virtual human gastrointestinal tract system - a 3D map that can be accurately and individually customized for each patient; (ii) demonstrated the process of automatically fusing visual-inertial odometry sensory measurements with the human anatomy to build a 3D locating-and-mapping-based navigation system for endoscopy and drug delivery capsules used in GI medicine; and (iii) built a prototype of the individually customizable virtual colon to demonstrate the proposed approach.

In summary, a new inertial-odometry- and human-anatomy-based 3D locating and mapping technique for endoscopy and drug delivery capsules is presented. Prototypes have been developed to illustrate the approach and to carry out experimental studies assessing its performance and complexity. We believe these new technologies can significantly improve current practice in human gastrointestinal tract endoscopy operation, data analysis, disease diagnosis, and treatment.


TABLE OF CONTENTS

1. Introduction ...... 1

1.1. Overview of Capsule Endoscopy ...... 1

1.2. Customization of the virtual GI tract System ...... 2

1.3. Achieving High Precision Locating Capabilities for Endoscopy Capsules ...... 4

2. Overviews of Capsule Endoscopy ...... 8

2.1. Capsules ...... 9

2.2. Software ...... 11

3. Capsule Locating Techniques ...... 14

3.1. Existing System without Location Information ...... 14

3.2. Locating the Capsule...... 15

3.3. Advantages of the proposed system ...... 17

4. Development of Customizable 3D Virtual Colon Model ...... 18

4.1. Development of 3D Model in Maya ...... 18

4.2. Identify the Anchor Points for Our Colon Model ...... 19

4.2.1. Cutting the Polygonal Faces ...... 22

5. Emulated Visual-Inertial Position Tracking using a Google Tango Prototype Device ...... 24

6. Building of Customized 3D Virtual Colon ...... 31


6.1. Customization ...... 31

6.2. Accessing the data for customization ...... 32

6.2.1. Kalman filter ...... 32

6.3. Length Calculation and Bounds Recalculation of the Sub-Components ...... 34

6.3.1. Identifying the Anchor Points and Calculating the Length ...... 34

6.3.2. Bound Recalculation of 3D colon ...... 37

6.4. Visualization ...... 42

6.5. Experimental results using Emulated Data Collected by the Tango Device ... 43

7. Conclusion and Future Work ...... 56

7.1. Conclusion ...... 56

7.1.1. 3D Model Development ...... 56

7.1.2. Customizing the 3D model ...... 57

7.1.3. Visualizer ...... 57

7.2. Future Work ...... 57

8. References ...... 59


LIST OF FIGURES

Figure 1.1: Embeddable Motion Tracking Sensors for Capsule [15] ...... 5

Figure 1.2: Virtualization enabled immersive physician-computer interaction interface ...... 6

Figure 2.1: PillCam wireless endoscopy capsule [15] ...... 9

Figure 2.2: Screenshot of existing PillCam software [15] ...... 12

Figure 3.1: Screenshot of existing PillCam software [15] ...... 15

Figure 4.1: GI tract of human digestive system ...... 19

Figure 4.2: Different sections of Colon ...... 21

Figure 4.3: Subcomponents of Colon ...... 23

Figure 5.1: Tango device [9] ...... 25

Figure 5.2: User Interface of Tango application ...... 26

Figure 5.3: Application showing the values and the traced path of colon ...... 27

Figure 5.4: Unity 3D visualizer for colon tracing using Tango data showing front view ...... 29

Figure 5.5: Unity 3D visualizer for colon tracing using Tango data showing side view ...... 30

Figure 6.1: Filtered and unfiltered Tango position data ...... 34

Figure 6.2: Workflow of the Visualizer ...... 42

Figure 6.3: Standard Colon ...... 44


Figure 6.4: Screenshot showing the ascending colon object customized with the Tango data ...... 46

Figure 6.5: Screenshot showing the transverse colon object customized with the Tango data ...... 48

Figure 6.6: Descending colon object customized with the Tango data ...... 50

Figure 6.7: Sigmoid colon's transverse object customized with the Tango data ...... 52

Figure 6.8: Sigmoid colon descending object customized with the Tango data ...... 54


LIST OF TABLES

Table 2.1: Different region capsules for endoscopy [7] ...... 11

Table 6.1: Standard Colon length values for sub-components ...... 38

Table 6.2: Colon data for the Standard Colon ...... 45

Table 6.3: Colon length and bound values for the ascending colon object ...... 47

Table 6.4: Colon length and bound values for the transverse colon object ...... 49

Table 6.5: Colon length and bound values for the descending colon object ...... 51

Table 6.6: Colon length and bound values for the sigmoid colon's transverse object ...... 53

Table 6.7: Colon length and bound values for the sigmoid colon's descending object ...... 55


ACKNOWLEDGEMENTS

I would like to thank my thesis advisor, guide, and mentor, Dr. Yong Pei, for guiding me throughout this thesis work. I would also like to thank my parents for their constant support.


Chapter 1

1. Introduction

1.1. Overview of Capsule Endoscopy

Health care is one of the major application domains for sensory systems development today. For instance, a patient's GI tract, e.g., the esophagus, small bowel, and colon, must be examined to identify and locate possible ulcers and other problems. Examining these parts has been made easier by the latest technologies, such as the wireless endoscopy technique. Wireless capsule endoscopy is able to access all parts of the GI tract and is particularly useful for allowing doctors to image and visualize the conventionally inaccessible parts of the tract. As a result, wireless endoscopy, also known as capsule endoscopy, is used mainly for visualization of the small bowel, which cannot be reached by wired endoscopy and otherwise remains a blind area for examination.

The capsule, such as the PillCam, is pill-shaped, indigestible, and carries cameras at both ends. It is approximately the size of a large vitamin and passes through the patient's GI tract over a total period of about 10 hours. Through this process, approximately 50,000-60,000 digital images are taken and transferred to an image database, which doctors and specialists can later view as images/video to examine the GI tract and produce analysis reports [15].


Major advantages of wireless capsule endoscopy are:

1. It is a hassle-free technique compared to wired endoscopy.

2. Patients can continue to work throughout the day or go to sleep while the capsule is taking pictures of the GI tract.

3. It is a generally painless technique, and the capsule passes through the bowel naturally.

4. It stores the data in the form of images, with the image-capture rate adjustable from 4 frames per second (f/s) up to 35 f/s, e.g., so as to maximize the coverage of colon tissue, and it continues working for up to 10 hours [1].

Risks involved with capsule endoscopy are: puncture of the lining of the esophagus, colon, or intestines; dehydration; adverse reaction to sedation; and infection and bleeding. However, these risks can be mitigated [2].

1.2. Customization of the virtual GI tract System

One of the critical components of a capsule endoscopy examination is knowing the exact position of the capsule in the GI tract, so as to correctly and accurately locate intestinal diseases. Once this exact position is identified, it becomes easier for physicians to carry out the necessary treatment or surgery at that location. Understanding the exact location and visualizing it in a 3D model gives physicians a clear picture of the disease and its location.

This makes creating a 3D model for capsule endoscopy an important task. The approach is similar to Google Earth. However, there is only one Earth, but billions of people. Every patient has a varying colon structure with regard to its length and width, depending on height, body structure, and the kind of food intake; vegetarians, for example, are considered to have longer intestines than meat eaters [4].

Clearly, a virtual GI tract system based on standard patient anatomy, such as the Digital Human project [14], is suitable as a reference in anatomy studies, but it is not sufficient to provide locating and navigation for each individual patient's diagnosis and treatment in GI medicine. Therefore, a customizable 3D GI tract model, and a prototype of it that illustrates the automatic generation of an individual-specific virtual GI tract system from sensory data collected by the capsule, are proposed.

In this work, the graphical model is built using Autodesk Maya, with the parameters of the GI tract based on a standard patient anatomy. This model is then exported from Maya into Unity 3D to carry out the runtime processing that creates the individual-specific virtual GI tract system, based on visual-inertial sensory data that could be collected from the capsule and fed into the Unity application as the endoscopy operation progresses.

A prototype system has been developed in which the colon is customized for each patient.

Advantages of the customization of the 3D model-based approach include:

 Providing more precise positions for the capsule traveling inside the GI tract.

 Identifying the lengths of the sub-components of a patient's colon.

 Providing an accurate 3D graphical view of the patient's colon.

 Helping support high-precision navigation of endoscopy and drug delivery capsules in GI medicine.

In a separate thesis research effort [13], carried out in parallel with this work, a cloud-based real-time visual-inertial sensory data collection and storage solution was developed. In this thesis, a prototype is developed to extract the data saved in the cloud, visualize the 3D graphic image, and then progressively generate the customized 3D virtual GI tract system by tracing the data extracted in real time from the cloud.

1.3. Achieving High Precision Locating Capabilities for Endoscopy Capsules

To achieve the high-precision customization of the 3D virtual GI tract, precise positions of the endoscopic capsule traveling inside the GI tract must be obtained, in order to locate the position of a GI disease after it is detected in the captured video. Most existing capsule locating approaches explore the radio frequency (RF) signal from the capsule and obtain the location information through RF signal processing, including RF-based simultaneous localization and mapping (SLAM) techniques [6]. However, these methods require additional sensors attached to the patient's body and achieve only limited precision, with errors over 3-5 centimeters (cm) in various simulated environments, due to the effect of non-homogeneous body tissues on RF propagation. Thus, they are significantly limited in practice due to their inconsistency as well as their interference with the patient's life during the examination. In this research, the proposed model overcomes these limitations and achieves high-precision capabilities for capsule endoscopy through the use of a combination of a miniature motion tracking sensor, such as a 9-axis accelerometer/gyroscope/magnetic sensor unit, together with the visual images captured by the endoscopy cameras and the 3D graphical model based on a standard patient's anatomy.

Figure 1.1: Embeddable motion tracking sensors for the capsule [15]: the 32x11 mm PillCam Colon capsule and the 3x3x1 mm InvenSense MPU-925x motion tracking unit (3-axis gyroscope, 3-axis accelerometer, and 3-axis compass)

Motion tracking sensors were not used in currently available GI tract capsules due to their "big" form factors, battery usage, and insufficient accuracy. However, recent advances in MicroElectroMechanical Systems (MEMS) have helped pave the way for adding such motion tracking sensors into a capsule.

To achieve high precision, there are major challenges to address. Approximately 10 hours are required for the capsule to move through the patient's digestive system. Over this period, small errors from the motion tracking sensors may accumulate, causing drift in the measurements and potentially large errors in the position values. To overcome this drift issue, area learning techniques, as demonstrated in Google Tango, are used to correct the cumulative location errors, continuously calibrating and refining the location results through data fusion and cross-examination with: (i) unique visual features at different portions of the digestive system that can be reliably extracted from the image sequence captured by the camera sensors, and (ii) medically accurate virtual anatomical GI tract reference models, which provide not only a sequence of anchor points along the trajectory that can be identified and matched with high confidence, such as the entry of an organ, but also distinct directions of motion at certain portions of the tract, e.g., the ascending, descending, or transverse portions of the colon.

Once these values are obtained, the motion and imaging sensor readings are matched to the developed 3D model, resulting in a highly customized 3D map of the GI tract for each patient. By validating the motion tracking results against virtual anatomical models of the GI tract and related organs, together with GI tract feature extraction and recognition, the proposed system can potentially achieve high accuracy compared to existing technologies for mapping inside the GI tract. It can further be used for navigation for individual patients in follow-up examinations and treatments, such as surgery design and preparation, and drug delivery through guided capsules.

Furthermore, the customization technique used is explained in detail, along with the future work required for better visualization of the GI tract in capsule endoscopy with a real-time 3D model view. This will help in developing better capsule endoscopy techniques.

Figure 1.2: Virtualization-enabled immersive physician-computer interaction interface

Chapter 2

2. Overviews of Capsule Endoscopy

Wireless endoscopy emerged from the introduction of the Body Area Network (BAN) IEEE 802.15.6 standards. With the introduction of these standards and of micro-robotics, wireless endoscopy became one of the major non-invasive micro-robotic inventions in BAN [6].

The wireless capsule endoscope is a small, swallowable, capsule-shaped micro-robot equipped with micro visual sensors and an illuminating LED device to capture images, and an RF transmission module to transmit the captured images wirelessly to an outer station. This technique is particularly helpful for reaching the parts of the GI tract, specifically the colon and small bowel, that a wired endoscopy device can hardly reach, and where wired examination is more painful. The device captures images of the inside of the GI tract at multiple frames per second; these are useful for later examination by physicians to identify problems associated with the GI tract.

It has been credited as a cost-effective and accurate diagnostic instrument, as evidenced, for example, by the fact that more than 1.6 million patients worldwide have used PillCam technology for small bowel, esophagus, and colon examinations [1].


2.1. Capsules

As mentioned in the previous chapter, the primary interest is in developing and applying new locating and navigation techniques for wireless capsule endoscopy. Wireless capsule endoscopy provides a complete examination of the digestive tract, including the esophagus, colon, and small bowel - the three major parts to be examined for possible digestive problems. Compared to the wired endoscopy technique, capsule endoscopy is more hassle-free and provides a more convenient examination of the digestive tract for patients as well as doctors. Since many known diseases are associated with the digestive tract, examining these parts and obtaining proper information is important. With wired endoscopy being potentially painful and requiring sedation and bowel preparation, patients normally try to avoid endoscopy.

To make this technique hassle-free and painless, capsule endoscopy was developed, thanks to recent advances in miniature sensory systems, such as batteries, low-power circuits, RF communications, and data processing and storage. Capsule endoscopy is carried out with a pill-shaped device with cameras fitted on one or both ends of the capsule, as illustrated in Figure 2.1. It captures pictures of the digestive tract as it moves past at a rate of 3-4 fps, works for around 8-10 hours, and captures around 50,000-60,000 pictures [5].

Figure 2.1: PillCam wireless endoscopy capsule [15]

Considering the different needs of the three segments of the digestive tract, capsule endoscopy was developed in three different forms - esophageal capsule endoscopy, small bowel capsule endoscopy, and colon capsule endoscopy - to capture more accurate pictures of these specific parts and generate more precise reports. These capsules were designed to access these parts and were used to examine the diseases associated with the bowel and esophagus separately. Table 2.1 below provides a list of all capsule endoscopes in use today; it also specifies the different capsules used to diagnose different regions of the GI tract, and the software used to analyze the obtained pictures.


Table 2.1: Different region capsules for endoscopy [7]

2.2. Software

The capsule, such as the PillCam, is the sensory hardware section of wireless endoscopy. After the capsule is swallowed by the patient to capture pictures of the whole digestive system, the images are transferred, through built-in RF communication, to an external storage device, where they are saved for later analysis by physicians using image- or video-analysis software.

As shown in Figure 2.2, the software provides back and forward buttons for physicians to view the images, which are shown in the form of high-resolution video. Different software packages are used to carry out this task so that physicians can identify the issues associated with the digestive tract; for example, red spots in the images indicate bleeding, and ulcers are also displayed.

Figure 2.2: Screenshot of existing PillCam software [15]

The software used for the different capsule endoscopes is listed in Table 2.1 above. Software is one of the most important parts of capsule endoscopy, as without it the analysis of the captured pictures would not be possible. Initially, when capsule endoscopy was developed, physicians or imaging specialists would analyze the captured images one after another and generate a report, which would normally take around a week. This process was time-consuming and tedious for physicians. In later years, capsule endoscopy companies developed software to combine the individual images captured by the capsule into a video, so that physicians could analyze the video faster and identify the associated problems in much less time, helping generate the report far more quickly.

Clearly, the software side of capsule endoscopy is as important a target for innovation as the capsule itself, which has been improved to capture better quality images. We believe visualization is another important innovation area in wireless endoscopy operations: augmenting the video with, e.g., location and structural information, and providing the physician with a more intuitive interface to navigate through all the captured images. The proposed system therefore develops a 3D model of the digestive tract and carries out further individualized customization to help the physician see the capsule moving through the patient's digestive tract, view the captured images, and identify the problem and its location.

The next chapters explain the development of the 3D model and the further modifications to it that provide a clear view of the whereabouts of the capsule inside the GI tract and capture the exact location of problems within the GI tract.


Chapter 3

3. Capsule Locating Techniques

This chapter describes how localization works in existing capsule systems, and introduces the proposed locating system and its advantages.

3.1. Existing System without Location Information

One of the limitations of existing wireless capsule endoscopy (WCE) is that it is unable to localize itself and thus cannot specify the location of a problem in the small bowel or colon of the GI tract. In such scenarios, physicians are unable to determine the exact location of the problem, and the capsule may effectively be lost in the GI tract. This has given rise to more innovation and research in localizing the device in the GI tract.

Figure 3.1 shows the current display of the WCE-captured images and the related information available through its software. As seen in the screenshot, the video containing all the images of the GI tract can be viewed by physicians to identify problems such as ulcers or bleeding, but there is no indication of where exactly the problem lies. Hence, correct localization becomes one of the major issues.


Figure 3.1: Screenshot of existing PillCam software [15]

3.2. Locating the Capsule

Some researchers have explored the RF signal used by the capsule to send out its data in order to build RF-based capsule localization systems [6]. However, so far such solutions do not appear to have provided position estimation in an accurate and continuous manner, with average localization errors of 6.8 cm and above. The reason is that the body tissues of the GI tract and skin are non-homogeneous and the structure of the intestinal tissues is complicated, resulting in highly complex RF propagation characteristics.

Other research has tried hybrid solutions that use two data sources: the visual evidence provided by the image sequences obtained by the video sensors in the capsule, and the wireless RF signals used to locate the capsule in 3D space. Such hybrid localization has been employed to reduce the locating error from 6.8 cm to less than 2.3 cm [6]. However, in a real-life environment, the accuracy may be significantly degraded by differences between patients' bodies. Thus, this research explores visual-inertial sensory solutions, as demonstrated in the Google Tango project, and envisions a miniature sensory augmentation of the existing endoscopy capsule to achieve much higher locating accuracy, at the millimeter level. This sensory system research is discussed in detail in the companion thesis [13].

However, the localization techniques employed in these solutions would not be of much use without a proper 3D display of the capsule's movement and location relative to the patient's GI tract. Only with a proper 3D graphical interface can the exact position of the problem area be presented to the physician, giving a clear view of the problem so that the required treatment can be carried out.

In this thesis, a virtual human gastrointestinal tract system - a 3D map that can be accurately and individually customized for each patient - is developed. It also enables an intuitive interface for image reviewing and analysis. The colon is considered for the prototype study, which can be further extended to the small intestine and other portions of the GI tract. Here, the 3D graphic image is customized in Unity 3D with the position values obtained from the experimental device, a Google Tango, which has the SLAM capability needed to capture the position values for the colon-shaped object.

Customization of the 3D model is one of the major parts of this research, as the body structure of every human being varies, and so do the internal tissue structures. Hence, with the values obtained by the capsule, the colon of the graphic model is customized to provide more accurate positions of the capsule. A more detailed explanation is provided in the next chapter.

3.3. Advantages of the proposed system

In the existing display system, physicians get only the video of the captured images and a small, unrealistic rendering of the colon, which does not provide a clear picture of the GI tract, and of the colon in particular, for identifying the exact location of the associated problem. Hence, developing a customized colon model that provides a better view of the inside of the GI tract will make the physicians' task much easier and will help them analyze the problem and carry out remedial measures at the right position in the colon.


Chapter 4

4. Development of Customizable 3D Virtual Colon Model

Developing a customizable 3D virtual GI tract model is the first step towards the customization and visualization of the human digestive tract. The human digestive system is made up of various parts, and the thesis work started from a commercially available Unity asset. For this study and experiment, however, it was necessary to dissect the asset and develop a new computer graphical model that supports customization.

4.1. Development of 3D Model in Maya

The human digestive system consists of the parts shown in Figure 4.1 below, which are used in this thesis for demonstration and prototype creation. The work began by restructuring the 3D model built in Maya; the Unity asset model is modified using Autodesk Maya tools.


Figure 4.1: GI tract of the human digestive system (mouth, esophagus, stomach, small intestine, colon)

4.2. Identify the Anchor Points for Our Colon Model

Initially, the asset consists of the following parts as separate components:

1. Mouth 2. Teeth 3. Esophagus 4. Stomach 5. Liver 6. Duodenum 7. Colon 8. Jejunum 9.

With all these parts available, the sub-components are separated so they can be positioned according to the position values obtained from the capsule data. Each of the parts mentioned above is further divided into more sub-components, which can be colored red to show the problems associated with them.

In Maya, every part, each made up of a number of sub-polygons, is extracted. These parts are separated using Maya's Separate tool and divided into individual polygons. For the prototype system, the colon is the main section of study, with the plan to extend the work to the small intestine, one of the toughest parts of the GI tract to analyze and diagnose.

The colon consists of four parts, as specified below:

1. The ascending colon, including the cecum and appendix.
2. The transverse colon, including the colic flexures and transverse mesocolon.
3. The descending colon.
4. The sigmoid colon - the V-shaped region of the colon.

Figure 4.2 illustrates the different regions of the colon as per the parts defined above.


Figure 4.2: Different sections of the colon (ascending, transverse, descending, and sigmoid)

The colon, as a whole, is one mesh and does not come with any sub-components in the original asset. To carry out the customization and, at the same time, correct the errors in the motion sensor data obtained from the capsule, six anchor points that can be reliably identified from the sensor readings are defined in the colon's structure. This results in five separate sub-components in this colon model: the ascending colon, the transverse colon, the descending colon, and the sigmoid colon, with the sigmoid colon further divided into a transverse section and a descending section.


4.2.1. Cutting the Polygonal Faces

The dissection of the colon is done using Autodesk Maya. Maya's Cut Faces tool allows users to dissect the required parts along different planes (XY, YZ, and XZ). The colon has to be cut into five separate polygons. Once the polygons are cut, they are selected and the cut-space value is set to 0.01 to reduce the visible split between two adjacent polygons of the colon. Next, they are separated so that the single colon mesh becomes distinct polygons. The dissected polygons are selected and separated in such a way that they still display as a single colon on the visualizer screen, while internally each sub-component can be worked on separately, which is essential for customization. Once all major components of the colon are divided into sub-components, the asset is exported in .fbx format and imported into the Unity player, where the customization is carried out using C# code in the backend.

Figure 4.3 below shows the different sub-components of the colon created using the Maya tool and displayed in the Unity visualizer.


Figure 4.3: Subcomponents of the colon (ascending colon divided into three parts for customization)


Chapter 5

5. Emulated Visual-Inertial Position Tracking using a Google Tango Prototype Device

Since accessing actual data from a WCE was not possible during this thesis research, Tango was chosen as the experimental device to emulate the visual-inertial sensory system envisioned as an augmentation of the existing WCE. A Tango-based sensory prototype was developed and used to trace an exact colon shape, providing the position values.

Tango is a platform developed by Google. It enables mobile devices, such as smartphones and tablets, to detect their position relative to the world, and it does so without using Global Positioning System (GPS) signals or any other external signals. Google provided developers with two devices to demonstrate its Tango technology: the Yellowstone 7-inch tablet and the Peanut phone [10]. The Yellowstone 7-inch tablet is used here for experimental purposes. The motion tracking functionality of the Tango device is used to fetch the position and orientation of the device. The device generates data in six degrees of freedom, i.e., 3 orientation axes and 3 motion axes, and thus gives detailed information about the environment in 3 dimensions.


Figure 5.1: Tango device [9]

Tango's motion tracking does not use GPS signals; instead, it uses a fisheye-lens camera to capture visual details of the surrounding area, such as the edges and corners of objects of interest, to determine the distance travelled by the device. An IMU (Inertial Measurement Unit) uses an accelerometer and a gyroscope to determine how fast the device travels and in which direction it is turning. Finally, the visual data is fused with the IMU sensor data, from which the device's displacement, and thus its position values, can be calculated. All of this runs on the device at 100 readings per second while capturing about 60 images per second, giving smooth and precise position values. The coordinate system of Tango is one of the important aspects to understand when using it for motion tracking. The motion tracking is relative to an origin, or frame of reference, and the origin is always (0, 0, 0) [9]. The device can be oriented about all three axes, and hence the data has six degrees of freedom. The device captures data relative to the origin, which is always set where the device starts running the motion tracking application; this is referred to as the start-of-service frame [9]. Tango provides developer APIs in several languages, i.e., C and Java, and also runs on the Unity platform. Here, the Java API is used to develop the application that tracks the motion of the device and records the position values.

Figure 5.2: User Interface of Tango application


Figure 5.3: Application showing the values and the traced path of colon

The application is built in a user-friendly manner. The user can start the application by clicking the "Start" button, stop it using the "Reset motion tracking" button, and send the data to the cloud with the "SendToServer" button; "First" and "Third" buttons change the view of the traced path on the Tango device. As shown in Figure 5.2, the "Start" button takes the user to the next page, shown in Figure 5.3. When the user starts recording motion data, the data is recorded and also displayed on screen, along with a blue-line trace of the device's path shown at the center of the grid. Once the user is done recording, clicking the "Reset motion tracking" button stops the tracking. To save the data to the cloud as a database file, the user clicks the "SendToServer" button, which sends all the data to the SQL database and saves it in a table. This saved data can then be accessed from the Unity application for tracing and viewing in three dimensions. The data is accessed from the cloud using JSON and is saved in the cloud in the following format:

status: valid, count: 23, delta time (ms): 9.554, position (m): [0.006, 0.005, 0.045], orientation: [0.642, 0.015, -0.029, 0.766]
status: valid, count: 33, delta time (ms): 9.805, position (m): [0.010, 0.008, 0.053], orientation: [0.644, 0.019, -0.031, 0.764]
status: valid, count: 43, delta time (ms): 10.056, position (m): [0.015, 0.009, 0.061], orientation: [0.646, 0.019, -0.029, 0.763]

The dataset is extracted from the cloud in exactly this format. It has a status field indicating whether a record is valid or invalid. The count field keeps track of the number of position and orientation values collected. Delta time shows the exact time in milliseconds at which the particular position/orientation value was captured by the device. The position field holds the x, y, and z values in meters, and the orientation field holds the orientation of the device. As this data is extracted from the cloud, it is parsed to extract just the position values for the experiment, with each record also checked for validity. The position values are then plotted in the Unity visualizer and used to customize the 3D graphic model, giving the first stage of better visualization.
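To make the parsing step concrete, the following C# sketch pulls the position triple out of one record in the format shown above. It is a minimal illustration assuming exactly that field layout; the class and method names are hypothetical, not the thesis code.

```csharp
using System;
using System.Globalization;
using System.Text.RegularExpressions;

// Minimal sketch: extract the position triple from one cloud record line,
// skipping records whose status field is not "valid".
public static class RecordParser
{
    // Matches e.g. "position (m): [0.006, 0.005, 0.045]".
    static readonly Regex PositionRx =
        new Regex(@"position \(m\): \[\s*([-\d.]+),\s*([-\d.]+),\s*([-\d.]+)\s*\]");

    public static bool TryParsePosition(string record, out double x, out double y, out double z)
    {
        x = y = z = 0;
        if (!record.StartsWith("status: valid")) return false;  // validity check
        Match m = PositionRx.Match(record);
        if (!m.Success) return false;
        x = double.Parse(m.Groups[1].Value, CultureInfo.InvariantCulture);
        y = double.Parse(m.Groups[2].Value, CultureInfo.InvariantCulture);
        z = double.Parse(m.Groups[3].Value, CultureInfo.InvariantCulture);
        return true;
    }
}
```

Each valid triple is then handed to the plotting and customization code; the orientation values could be extracted the same way if needed.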

Figure 5.4 shows a screenshot of the Tango data plotted in the visualizer in the shape of a colon. The experimental results are an approximation of the colon's shape, not an exact colon, since the work uses an emulated object rather than a real human colon.

Figure 5.4: Unity 3D visualizer for colon tracing using Tango data showing front view


Figure 5.5: Unity 3D visualizer for colon tracing using Tango data showing side view

Thus, these values from the Tango application are used to create a virtual colon shape of the human body, plot it in the visualizer, and compare the results with the visualizer developed using the actual 3D graphic image of the human GI tract.


Chapter 6

6. Building of Customized 3D Virtual Colon

This chapter explains the core part of the thesis: the building of the customized 3D virtual colon, including the data that is fetched and the processing required for customization and visualization.

6.1. Customization

As mentioned in Chapter 4, the virtual GI tract model has already been developed and divided into customizable sub-components based on identifiable anchor points.

To customize each sub-component, sensors could be built into the capsule to estimate position values and, in turn, identify structural dimension parameters such as length. In the prototype that illustrates this approach, the position sensors are emulated with either a smartwatch or the Tango device. The obtained raw position data is saved either in a text file or in a cloud-based database. In the background, processing code uses the saved data to estimate the dimension parameters.

Once the dimension parameters are obtained, the program customizes the colon of the GI tract accordingly in the Unity-based software. The polygon bound values are updated from the default values based on the "Standard Patient" and recalculated to generate the colon with the shape implied by the patient's readings. Specifically, the colon is built by replacing the default bound values of each sub-component, in the x, y, and z directions, with the updated values obtained from sensory data processing of the smartwatch- or Tango-collected data. Using the updated values, each sub-component of the colon is created proportionally at runtime and shown one after the other in a progressive manner. The bound values may vary in the x, y, or z plane depending on the part of the colon. The entire process runs in real time, and the customization is carried out progressively as data continually arrives at the cloud from the capsule. The resulting customized virtual colon is saved for use in the image/video review interface, as well as serving as a 3D map for GI tract navigation of follow-up drug delivery capsules and surgical micro-robots.

6.2. Accessing the data for customization

Customization of the colon is carried out by extracting the data obtained from the capsule. In this research, the Tango device prototype is used, and hence the data is extracted from it. The Tango device has built-in visual-inertial position tracking that reports the position of the device whenever and wherever it is moved. For research purposes, the Tango device is moved in the shape of a colon in 3D space.

6.2.1. Kalman filter

Position data collected by the Tango device can be noisy and requires filtering to remove the noise before the lengths of the colon's sub-components are calculated from the position values. To filter this raw data from the Tango device, a Kalman filter is used.

Simply put, the Kalman filter is a signal processing algorithm, also referred to as linear quadratic estimation (LQE). It produces a much better result than an estimate based on any single measurement: measurements taken over a period of time, containing noise and inaccuracies, are combined to produce more accurate position estimates. The algorithm uses position and velocity values to filter the data and produce more accurate values [11]. The Kalman filter is also a simple algorithm that does not consume a significant amount of memory [12].

In the prototype system, the Kalman filter is used to filter the position values obtained from Tango. The noise is filtered out, and the structural parameters are calculated from the filtered data. Figure 6.1 shows a graph of the data before and after filtering: the dotted lines indicate the raw x, y, and z position values, i.e., before filtering, and the solid lines indicate the values obtained after passing through the Kalman filter.


Figure 6.1: Filtered and unfiltered Tango position data
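To make the filtering step concrete, below is a minimal sketch of a one-dimensional constant-velocity Kalman filter of the kind described above, run independently on each of the x, y, and z position streams. The state layout and the noise constants are illustrative assumptions, not the thesis implementation.

```csharp
// Minimal sketch: 1-D constant-velocity Kalman filter (state: position p, velocity v).
// Run one instance per axis (x, y, z); noise constants are illustrative.
public class Kalman1D
{
    double p, v;                                // state estimate
    double P00 = 1, P01 = 0, P10 = 0, P11 = 1;  // estimate covariance
    readonly double q;                          // process noise
    readonly double r;                          // measurement noise

    public Kalman1D(double processNoise = 1e-4, double measurementNoise = 1e-2)
    {
        q = processNoise;
        r = measurementNoise;
    }

    // dt: seconds since the previous sample; z: measured position (m).
    public double Update(double dt, double z)
    {
        // Predict: advance the position by the velocity (constant-velocity model).
        p += v * dt;
        P00 += dt * (P01 + P10) + dt * dt * P11 + q;
        P01 += dt * P11;
        P10 += dt * P11;
        P11 += q;

        // Correct with the position measurement (H = [1 0]).
        double s = P00 + r;        // innovation variance
        double k0 = P00 / s;       // gain applied to position
        double k1 = P10 / s;       // gain applied to velocity
        double innovation = z - p;
        p += k0 * innovation;
        v += k1 * innovation;
        double p00 = P00, p01 = P01;
        P00 = (1 - k0) * p00;
        P01 = (1 - k0) * p01;
        P10 -= k1 * p00;
        P11 -= k1 * p01;
        return p;                  // filtered position estimate
    }
}
```

A larger measurement noise r smooths more aggressively, which is what suppresses the back-and-forth capsule motion discussed in the next section; a larger process noise q makes the filter track the raw measurements more closely.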

6.3. Length Calculation and Bounds Recalculation of the Sub-Components

Next, this section discusses the steps to identify the anchor points of the sub-components from the position data and to calculate the lengths of the sub-components and the corresponding bounds. Bound recalculation is an important task in the customization of the 3D graphic object.

6.3.1. Identifying the Anchor Points and Calculating the Length

To map the sensor data to the 3D graphical model of the colon, the anchor points of the sub-components must be identified from the filtered sensory data and matched with the data available in the model. For instance, the start and end points of the ascending colon need to be identified to measure its length, and thus to customize it and show it in the visualizer. Similarly, anchor points are needed for the transverse colon, the descending colon, and the sigmoid colon, including the latter's transverse and descending sections. To identify the anchor points of the colon from the Tango data, the system analyzes the moving trajectory. For instance, to calculate the length of the ascending colon, its end point has to be identified. As the capsule travels through the ascending colon, the filtered position data shows a growing trend in the vertical y direction: the y value is constantly increasing while moving up the ascending section, and the anchor point is identified when the y value becomes mostly constant or starts dropping from its previous value. This gives the anchor point for the ascending colon, and its y position value is captured. Since the y value of the starting point of the ascending colon is also available, subtracting the initial y value from the anchor-point y value gives the length of the ascending colon, in meters.
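The following sketch shows this turn detection for the ascending colon. It is a minimal illustration assuming the input is the Kalman-filtered y series; the plateau threshold epsilon is an illustrative value, not one from the thesis.

```csharp
using System;
using System.Collections.Generic;

// Sketch: the ascending-colon end anchor is the last index at which the
// filtered y value is still clearly growing; a plateau or a drop marks the anchor.
public static class AnchorDetector
{
    public static int FindAscendingAnchor(IReadOnlyList<double> y, double epsilon = 1e-3)
    {
        for (int i = 1; i < y.Count; i++)
            if (y[i] - y[i - 1] < epsilon)   // increment too small, or negative: trend over
                return i - 1;
        return y.Count - 1;                  // never turned: fall back to the last sample
    }

    // Length of a segment along its dominant axis, e.g. y for the ascending colon.
    public static double SegmentLength(IReadOnlyList<double> axis, int start, int anchor)
        => Math.Abs(axis[anchor] - axis[start]);
}
```

The same pattern applies to the other sub-components by swapping the axis (x for the transverse sections) and the sign of the expected trend (decreasing for the descending sections), as the following paragraphs describe.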

However, one important aspect of capsule endoscopy is that the capsule may move back and forth or up and down in the colon. Thus, the obtained position data may show a complicated trajectory rather than a consistent upward path. This is another reason for choosing the Kalman filter, which helps filter out these frequent variations as high-frequency noise in the collected position data.

Similarly, the length of the transverse colon is extracted by considering the x position value of the Tango data, which is horizontal and parallel to the body. For the transverse colon, the initial position value is the end anchor point obtained for the ascending colon. Moving along the transverse colon, the x position value keeps increasing, and the anchor point of the transverse colon is determined when the x position value either becomes constant or starts decreasing from its previous value. Once this value is determined, the length of the transverse colon is calculated by subtracting the initial x position value from the anchor-point x position value.

The length of the descending colon is obtained by starting from the y position value at the anchor point obtained for the transverse colon. Moving along the descending colon, the y position values obtained from the Tango data decrease from their previous values while the x values remain mostly constant. The anchor point for the descending colon is reached when the y position value either remains constant or starts increasing. At this point, the length of the descending colon is calculated by subtracting the anchor-point y value from the initial y position value, and the length file is updated with the length of the descending colon.

Once the anchor point for the descending colon is obtained, the length and anchor point of the sigmoid colon are determined. Since the sigmoid colon is the V-shaped section of the colon, the anchor point for its transverse section is determined first, as the sigmoid colon is divided into two sub-components in the proposed model. For the transverse section of the sigmoid colon, the initial point is the ending anchor point obtained for the descending colon. Moving along the transverse section of the sigmoid colon, the x position value keeps decreasing from its previous value, and the anchor point is detected where the x position value either becomes constant or starts increasing. Once the anchor point is obtained, the length is calculated by subtracting the anchor-point x value from the initial x value, and the length text file is updated with the length value. The anchor point of the transverse section of the sigmoid colon then serves as the starting point for its descending section.

Moving down toward the end of the descending section of the sigmoid colon, the y position value keeps decreasing, and the final anchor point is detected when the position values reach the end point. The length of this descending section is calculated by subtracting the final anchor-point y position value from the initial starting y position value, and this length value is written to the length text file.
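Putting all of this together, a compact driver could walk the filtered trajectory through the five sub-components in order, detecting each anchor and accumulating the lengths. The axis and direction per segment follow the prose above; the names and the epsilon threshold are illustrative assumptions, not the thesis code.

```csharp
using System;
using System.Collections.Generic;

// Sketch: compute all five sub-component lengths from filtered samples,
// where each sample is a double[3] position {x, y, z} in meters.
public static class ColonLengthPipeline
{
    // sign = +1 while the coordinate should grow, -1 while it should shrink.
    static int FindTurn(IReadOnlyList<double[]> pts, int start, int axis, int sign,
                        double epsilon = 1e-3)
    {
        for (int i = start + 1; i < pts.Count; i++)
            if (sign * (pts[i][axis] - pts[i - 1][axis]) < epsilon)
                return i - 1;                // trend ended: this is the anchor point
        return pts.Count - 1;                // end of data: final anchor
    }

    // Returns lengths for: ascending, transverse, descending,
    // sigmoid transverse, sigmoid descending (all in meters).
    public static double[] ComputeLengths(IReadOnlyList<double[]> pts)
    {
        // (axis, sign) per segment: y up, x right, y down, x left, y down.
        var segments = new[] { (1, +1), (0, +1), (1, -1), (0, -1), (1, -1) };
        var lengths = new double[segments.Length];
        int start = 0;
        for (int s = 0; s < segments.Length; s++)
        {
            var (axis, sign) = segments[s];
            int anchor = FindTurn(pts, start, axis, sign);
            lengths[s] = Math.Abs(pts[anchor][axis] - pts[start][axis]);
            start = anchor;                  // each anchor starts the next segment
        }
        return lengths;                      // e.g. written to the length file
    }
}
```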

The obtained length data is then used to recalculate the bounds of the standard 3D colon model for each of the sub-components.

6.3.2. Bound Recalculation of 3D colon

The sub-components of the colon are customized using the lengths calculated in the previous section. To customize the colon with these length values, the bounds of the sub-components of the available "Standard Colon" are recalculated by an algorithm that recalculates the bounds and vertices of the "Standard Colon" sub-components. The "Standard Patient's" colon sub-components have bound values saved in a text file as the standard bound values of those sub-components, and each "Standard Colon" sub-component has a standard length value associated with it. Table 6.1 shows the lengths of the "Standard Colon" sub-components.


Colon sub-component         Length (m)
Ascending colon             0.20
Transverse colon            0.40
Descending colon            0.20
Sigmoid transverse colon    0.20
Sigmoid descending colon    0.15

Table 6.1: Standard Colon length values for sub-components

To replace the bound values of the sub-components, each newly identified length is divided by the corresponding standard length and multiplied by the original bound value, giving the new bound values of the colon's sub-components. For instance, for the ascending colon, the calculated length is divided by 0.20 and multiplied by the y bound value of the standard ascending colon. For the transverse colon, the calculated length is divided by 0.40 and multiplied by the x bound value of the standard transverse colon. Similarly, the descending colon's calculated length is divided by 0.20 and multiplied by the y bound value of the standard descending colon; the calculated length of the sigmoid colon's transverse section is divided by 0.20 and multiplied by the x bound value of the corresponding standard section; and, finally, the calculated length of the sigmoid colon's descending section is divided by 0.15 and multiplied by the y bound value of the corresponding standard section. All these new bound values are written to the bounds file so they can be used by the algorithm that recalculates the bounds and vertices of the colon mesh.

With these new bound values and the available original bound values of the "Standard Colon", the colon object is updated. To carry out the further calculation, an algorithm is applied by passing in the new bound values for the sub-components of the colon one after another. The algorithm follows these steps (a code sketch of these steps, together with the recentering described below, follows):

1. Calculate the difference between the bound values (x, y, and z separately) of the "Standard Colon" sub-component and the updated bound values of the sub-component saved in the file.
2. Calculate the scale value by dividing the newly calculated bound value by the standard bound value of the colon, for the x, y, and z bounds.
3. Calculate the mesh vertices by multiplying the "Standard Colon's" mesh vertex values by the scale value calculated in the previous step.
4. With all these values, execute the recalculate-bounds and recalculate-vertices functions to obtain all the recalculated values for the colon mesh.

These steps achieve what is required to customize the sub-components of the colon. However, when the customized sub-components are displayed in the Unity visualizer, they no longer appear in the places where the "Standard Colon" was positioned, because recalculating the bounds and vertices changes each sub-component's center value. Hence, the center of each sub-component needs to be transformed back to its original center so the component does not move away from the visualizer screen. To do so, the difference between the center of the "Standard Colon's" sub-component and the newly generated center value (due to the recalculation of bounds and vertices) is calculated, and this difference is used to translate the sub-component back to the original center.
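As a concrete illustration of steps 1-4 and the recentering just described, the Unity C# sketch below scales one sub-component's mesh along its changing axis and then shifts the object back to its original center. The method name and the single-axis simplification are illustrative assumptions, not the thesis code. As a numeric check against the tables in Section 6.5: an ascending-colon length of 0.3163 m against the 0.20 m standard gives a scale of 0.3163 / 0.20 ≈ 1.58, which turns the standard y bound of 8.325905 into ≈ 13.1674, matching Table 6.3.

```csharp
using UnityEngine;

// Sketch: customize one colon sub-component by scaling its mesh vertices
// along a single axis by (measured length / standard length), recalculating
// the bounds, and shifting the object back to its original center so it
// stays aligned with the other sub-components. Names are illustrative.
public static class ColonCustomizer
{
    // axis: 0 = x (transverse parts), 1 = y (ascending/descending parts).
    public static void CustomizeSubComponent(MeshFilter part, int axis,
                                             float measuredLength, float standardLength)
    {
        Mesh mesh = part.mesh;
        Vector3 originalCenter = mesh.bounds.center;

        float scale = measuredLength / standardLength;   // step 2: scale value

        Vector3[] vertices = mesh.vertices;              // step 3: scale the vertices
        for (int i = 0; i < vertices.Length; i++)
        {
            Vector3 v = vertices[i];
            v[axis] *= scale;                            // only the changing axis
            vertices[i] = v;
        }
        mesh.vertices = vertices;

        mesh.RecalculateBounds();                        // step 4: recalculate bounds
        mesh.RecalculateNormals();

        // Recentering: cancel the center drift caused by the recalculation.
        Vector3 drift = mesh.bounds.center - originalCenter;
        part.transform.position -= drift;
    }
}
```

(The position shift assumes the sub-component's transform carries no extra rotation or scale; a full implementation would map the local drift into world space.)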

Since each sub-component of the colon is passed to the above algorithm along with its bound values, different axis values change for different sub-components. Initially, the ascending colon object is passed. The JavaScript running in the background follows the same algorithm as above; for the ascending colon, the y bound values change while the x and z bound values remain the same. Hence, throughout the algorithm, the y bound values and the y axis of the center value change. The algorithm recalculates the bounds and vertices of the ascending colon mesh and also transforms the position of the ascending colon to its original center.

Next, the transverse colon object is passed to the algorithm along with its new bound values. In the case of the transverse colon, the x bound and vertices change, whereas the y and z values remain unchanged. The algorithm recalculates the bounds and vertices of the transverse colon mesh and transforms the position of the transverse colon object to its original center, aligned with the other sub-components of the colon.

In the third case, the descending colon object is passed to the algorithm along with its new bound values. For the descending colon, the y bound and vertex values change, whereas the x and z values remain unchanged. The algorithm recalculates the bounds and vertices of the descending colon mesh and transforms the descending colon to its original center, aligned with the other sub-components of the colon.

Next, the transverse section of the sigmoid colon object is passed to the algorithm along with its new bound values. In this part of the colon, only the x bound and vertex values change, whereas the y and z values remain unchanged. The algorithm recalculates the bounds and vertices of the mesh of the sigmoid colon's transverse section and transforms the section to its original center, aligned with the other sub-components of the colon.

Finally, the descending section of the sigmoid colon object is passed to the algorithm along with its new bound values. This section of the sigmoid colon changes only in its y bound and vertex values, whereas the x and z values remain unchanged. The algorithm is executed one final time to recalculate the bounds and vertices of the mesh of the sigmoid colon's descending section, transform its position to the original center, and align it with the rest of the sub-components of the colon.

Thus, the bounds are recalculated using the position values obtained from the Google Tango device used in the experiments. The customized components of the colon are generated one after another, progressively, and shown in the visualizer. The values used here are one set of experimental values, extracted by moving the Google Tango in the shape of a colon in 3D space.

6.4. Visualization

As described in the sections above, visualization is one of the important parts of wireless endoscopy to be developed further, to enable better analysis of a patient's GI tract by physicians. In this thesis, an application is developed to display a 3D model of the GI tract that can be viewed from every angle by the physician. Visualizer development is one of the important efforts of this thesis for the wireless endoscopy technique. It is an application created using Unity 3D, a cross-platform visual gaming engine.

The visualizer is built following the workflow illustrated in Figure 6.2.

Figure 6.2: Workflow of the Visualizer


To carry out the task of the application creation as per the work flow, a 3D virtual digestive tract model is developed in AutoDesk Maya, which is then imported into Unity 3D to carry out the customization and visualization in order to create the individualized 3D virtual colon object. As per the workflow, the application is built to integrate all the tasks required as discussed in previous chapters. When the user runs the application, initially, the standard 3D virtual colon object is displayed on the visualizer. A physician, being the user, can rotate around the 3D object along the y axis in clockwise direction and counter clockwise direction to have the complete view of the 3D graphic object. The physician can click on any part of the 3D graphic object of the digestive tract to view the zoomed in/out version of that particular part. The physician can further move to view the actual workaround of the patient. When user clicks on R on the keyboard, the customized part of colon, customized by using the position value from emulated Tango prototype device, appears on the visualizer.

Since the colon in this model is divided into five sub-components, a physician can press R five times to build up the complete customized colon on the visualizer, and can thereby also watch the customization process of the colon.
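The actual visualizer is a Unity (C#) application; purely as an illustration of the control flow just described, the following Python sketch shows how each press of R could advance through the five sub-components. All names here are hypothetical and do not come from the thesis code.

SECTIONS = ["ascending", "transverse", "descending",
            "sigmoid_traverse", "sigmoid_descending"]

class RevealController:
    # Reveals one customized colon sub-component per R key press.
    def __init__(self, tango_bounds):
        self.tango_bounds = tango_bounds   # per-section bound values from the Tango data
        self.next_index = 0

    def on_key_r(self):
        if self.next_index >= len(SECTIONS):
            return                         # complete customized colon already shown
        name = SECTIONS[self.next_index]
        # Stand-in for the Unity-side call that customizes and displays the mesh.
        print("showing customized section:", name, self.tango_bounds[name])
        self.next_index += 1

# e.g., pressing R five times reveals the full customized colon:
#   for _ in range(5): controller.on_key_r()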

6.5. Experimental Results Using Emulated Data Collected by the Tango Device

The detailed process of creating the customized colon from the bound values extracted by tracing an emulated human colon with the aforementioned sensory prototype system is presented in Figures 6.4 through 6.8, one step at a time. The Unity 3D visualizer shows the step-by-step buildup of the colon, which is created so as to provide a better visualization tool as far as WCE is concerned.

Figure 6.3 shows the standard patient's colon, with the standard bound values for the 3D model without any customization. The measurements of the colon are shown below in Table 6.2.

Figure 6.8 shows the completed customized colon, with the bound values for the 3D model after customization. The complete measurements of the colon are shown below in Table 6.7.
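Reading Tables 6.2 through 6.7 together, the customized bound along each section's stretch axis scales in proportion to the change in that section's object size. As a consistency check (an observation from the tables, not a formula stated explicitly in the thesis), for the ascending colon:

    new y bound = 8.325905 × (0.3163 / 0.20) = 8.325905 × 1.5815 ≈ 13.1674,

which matches the value y: 13.167410 reported in Table 6.3; the same proportionality holds for the other four sub-components.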

Figure 6.3: Standard Colon


Colon Part                                              Object size   Bounds (x, y, z)
Ascending colon (incl. cecum and appendix)              0.20          2.756398, 8.325905, 2.889272
Transverse colon (incl. colic flexures and mesocolon)   0.40          7.500443, 2.705032, 3.012613
Descending colon                                        0.20          1.708986, 8.979538, 2.389384
Sigmoid colon, traverse section (V-shaped region)       0.20          4.182859, 1.603851, 1.43885
Sigmoid colon, descending section                       0.15          1.752098, 4.671764, 1.293041

Table 6.2: Colon data for the standard colon


Figure 6.4: Screenshot showing the ascending colon object customized with the Tango data


Colon Part                                              Customized size   Bounds (x, y, z)
Ascending colon (incl. cecum and appendix)              0.3163            2.756398, 13.167410, 2.889272
Transverse colon (incl. colic flexures and mesocolon)   0                 7.500443, 2.705032, 3.012613
Descending colon                                        0                 1.708986, 8.979538, 2.389384
Sigmoid colon, traverse section (V-shaped region)       0                 4.182859, 1.603851, 1.43885
Sigmoid colon, descending section                       0                 1.752098, 4.671764, 1.293041

Table 6.3: Colon length and bound values for the ascending colon object


Figure 6.5: Screenshot showing the transverse colon object customized with the Tango data


Colon Part                                              Customized size   Bounds (x, y, z)
Ascending colon (incl. cecum and appendix)              0.3163            2.756398, 13.167410, 2.889272
Transverse colon (incl. colic flexures and mesocolon)   0.5915            11.09128, 2.705032, 3.012613
Descending colon                                        0.20              1.708986, 8.979538, 2.389384
Sigmoid colon, traverse section (V-shaped region)       0.20              4.182859, 1.603851, 1.43885
Sigmoid colon, descending section                       0.15              1.752098, 4.671764, 1.293041

Table 6.4: Colon length and bound values for the transverse colon object


Figure 6.6: Descending colon object customized with the Tango data


Colon Part                                              Customized size   Bounds (x, y, z)
Ascending colon (incl. cecum and appendix)              0.3163            2.756398, 13.167410, 2.889272
Transverse colon (incl. colic flexures and mesocolon)   0.5915            11.09128, 2.705032, 3.012613
Descending colon                                        0.2910            1.708986, 13.06522, 2.389384
Sigmoid colon, traverse section (V-shaped region)       0.20              4.182859, 1.603851, 1.43885
Sigmoid colon, descending section                       0.15              1.752098, 4.671764, 1.293041

Table 6.5: Colon length and bound values for the descending colon object


Figure 6.7: Sigmoid colon’s traverse object customized with the Tango data


Colon Part                                              Customized size   Bounds (x, y, z)
Ascending colon (incl. cecum and appendix)              0.3163            2.756398, 13.167410, 2.889272
Transverse colon (incl. colic flexures and mesocolon)   0.5915            11.09128, 2.705032, 3.012613
Descending colon                                        0.2910            1.708986, 13.06522, 2.389384
Sigmoid colon, traverse section (V-shaped region)       0.26049           5.447964, 1.603851, 1.43885
Sigmoid colon, descending section                       0.15              1.752098, 4.671764, 1.293041

Table 6.6: Colon length and bound values for the sigmoid colon traverse object


Figure 6.8: Sigmoid colon's descending object customized with the Tango data


Colon Part                                              Customized size   Bounds (x, y, z)
Ascending colon (incl. cecum and appendix)              0.3163            2.756398, 13.167410, 2.889272
Transverse colon (incl. colic flexures and mesocolon)   0.5915            11.09128, 2.705032, 3.012613
Descending colon                                        0.2910            1.708986, 13.06522, 2.389384
Sigmoid colon, traverse section (V-shaped region)       0.26049           5.447964, 1.603851, 1.43885
Sigmoid colon, descending section                       0.16603           1.752098, 5.17101, 1.293041

Table 6.7: Colon length and bound values for the sigmoid colon's descending object


7. Conclusion and Future Work

7.1. Conclusion

In this thesis, we have: (i) designed and developed an individually-customizable 3D virtual Human Gastrointestinal Tract system model - a 3D map that can be accurately and individually customized for each patient; (ii) demonstrated the process of automatically fusing the visual-inertial odometry sensory measurements with the human anatomy to build a 3D Locating and Mapping-based navigation system for the endoscopy and drug delivery capsules used in GI medicine; and (iii) built a prototype of the individually-customizable virtual colon to demonstrate the proposed approach.

7.1.1. 3D Model Development

The idea of a customizable 3D graphic model of the virtual human digestive system is presented, and a virtual colon model is developed for experimental studies.

The process of identifying anchor points along the GI tract that can be reliably recognized from sensor data is presented. These anchor points serve two primary purposes: i) they determine the granularity of the customizable 3D virtual GI tract model; and ii) they help correct the cumulative sensor data errors from a prototype visual-inertial position tracking system. The procedures for using Autodesk Maya to convert an existing asset of the human GI tract into a customizable model consisting of a sequence of sub-components are also demonstrated.


7.1.2. Customizing the 3D model

The process of automatically fusing the visual-inertial odometry sensory measurements with the anatomy-based 3D virtual GI tract model is demonstrated. A cloud-based solution is built that stores and processes the sensory data collected by a Tango-based visual-inertial locating prototype system. A filtering algorithm is developed to process the collected position data, removing errors and noise with a Kalman filter. Algorithms are also presented to identify anchor points and to estimate the dimensional parameters of each sub-component of the colon, thus customizing the "Standard Colon" model into an individualized 3D virtual colon object.
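As a rough illustration of the filtering step, the sketch below shows a minimal one-dimensional Kalman filter of the kind used to smooth noisy position samples; the thesis filter operates on three-dimensional Tango position data, and the noise parameters here are illustrative assumptions only.

def kalman_smooth(measurements, q=1e-4, r=1e-2):
    # q: assumed process-noise variance; r: assumed measurement-noise variance
    x, p = measurements[0], 1.0       # initial state estimate and its variance
    smoothed = []
    for z in measurements:
        p = p + q                     # predict: uncertainty grows by process noise
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update the estimate with the new sample
        p = (1.0 - k) * p
        smoothed.append(x)
    return smoothed

print(kalman_smooth([0.0, 0.11, 0.19, 0.32, 0.41]))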

7.1.3. Visualizer

All the processing capabilities are integrated into a Unity-based Visualizer app that can customize and visualize the customized 3D virtual colon object. It provides a user interface for physicians to view the colon customized according to the data obtained by the capsule.

7.2. Future Work

A model and prototype system for the colon has been created in this thesis. In the future, this can be extended to customize the small intestine, which is considerably more complex given the many turns and twists involved; this makes the identification of anchor points a major task. More work is also planned toward customizing all the other parts of the digestive tract.


Due to the lack of access to actual visual data from patients, the experimental studies were carried out with emulated visual-inertial sensory data fusion, using visual and inertial sensor data collected by the Tango device. As future work: i) use a 3D printer to create a more realistic 3D prototype of the GI tract for the study; and ii) build partnerships with medical professionals to carry out an actual clinical study.

