A Method of Superimposition of CBCT Volumes in the Posterior Cranial Base

A Thesis Submitted to The Temple University Graduate Board

In Partial Fulfillment of the Requirements for the Degree MASTER OF SCIENCE In ORAL

By Jared R. Gianquinto, D.M.D.

Orhan C. Tuncay, D.M.D. Advisor, Department of Orthodontics

James J. Sciote, D.D.S., Ph.D., M.S., Committee, Department of Orthodontics

Jie Yang, D.D.S., M.Med.Sc., M.S., D.M.D., Committee, Department of Radiology

August 2011


ABSTRACT

Three-dimensional imaging in the form of cone beam computed tomography has become prevalent in the field of orthodontics. Analytical methods for the resulting volumetric data sets have not kept pace with the technology capable of producing them.

Current 3D analysis techniques are largely adaptations of existing 2D methods; they offer no clear diagnostic advantage over traditional imaging techniques in light of increased radiation exposure, and they cannot be compared with norms generated from 2D image capture sources. In order to study morphology in 3D, data sets must be generated for longitudinal studies and native 3D analytical methods must also be developed. Existing methods of CBCT volume superimposition are cumbersome, involving complex software pipelines and multiple systems to complete the process. The goal of the current study was to develop a reproducible method of CBCT volume superimposition in the posterior cranial base in a single software package, and to construct an easy-to-follow, step-by-step manual to facilitate future studies in craniofacial morphology.

Existing anonymized sequential CBCT volumes of three subjects meeting inclusion criteria were obtained from the Kornberg School of Dentistry Department of Radiology. Volumes for each subject were imported into AMIRA software, resampled to a standardized 0.5mm voxel size and superimposed with a mutual information algorithm.

Posterior cranial base surface data was extracted using a semi-automatic technique.

Resulting surface distance data was compiled and visualized through application of color maps. A streamlined image processing protocol was produced and documented in a detailed step-by-step manual.


Surface distance analysis of serial segmentations was performed to verify reliability of the process. Surface distance deviations greater than 0.5mm consistently fell below 0.2 percent of the total surface area.

Sequential scan superimpositions of all three subjects exhibited mean surface distances of less than 0.15mm. Two out of three subjects exhibited deviations of greater than 0.5mm in less than 1 percent of the total surface area, suggesting consistent sub-voxel accuracy of the protocol.


ACKNOWLEDGEMENTS

First and foremost, I would like to thank Dr. Orhan Tuncay for his wisdom, guidance and dedication to the specialty. Your tireless and selfless devotion to the department and relentless pursuit of the future are truly inspiring. Thank you for believing in me.

I would like to thank Dr. James Mah for graciously providing access to data collected by the University of Nevada at Las Vegas, and for numerous discussions regarding Mutual Information theory. Your work continues to inspire.

Dr. Jie Yang, Dr. Htin Gyaw and the Department of Radiology staff were essential to the success of this project. Without your guidance and help in collecting data from the archives, this project could not have been possible.

To Dr. Bryon Viechnicki, thank you for discovering Amira software, leading the way and sharing your successes.

Finally, to my family. Thank you for your unwavering support in all of my pursuits.


TABLE OF CONTENTS

ABSTRACT ...... ii

ACKNOWLEDGEMENTS ...... iv

LIST OF TABLES ...... x

LIST OF FIGURES ...... xi

CHAPTER ONE: INTRODUCTION ...... 1

CHAPTER TWO: REVIEW OF LITERATURE ...... 3

2.1 Cephalometric Radiography ...... 3

2.1.1 Cephalometric Superimposition ...... 4

2.1.2 Controversy in Cephalometrics ...... 4

2.1.3 Cranial Base Morphology and Stability ...... 5

2.2 Cone Beam Computed Tomography ...... 7

2.2.1 CBCT Limitations ...... 8

2.2.2 Artifacts ...... 9

2.2.3 Partial Volume Averaging ...... 9

2.2.4 Undersampling ...... 9

2.2.5 Cone Beam Effect ...... 10

2.2.6 Image Noise ...... 12


2.2.7 Patient Related Artifacts ...... 12

2.3. 3D Biomedical Imaging and Analysis Software ...... 13

2.3.1 Multiplanar Reformation ...... 14

2.3.2 Surface Rendering ...... 14

2.3.3 Volume Rendering ...... 15

2.3.4 Image Segmentation ...... 15

2.3.5 Image Registration ...... 16

2.4 Previous Research from Temple University Department of Orthodontics ...... 18

2.4.1 Thesis Project of Dr. Michael Nuveen ...... 18

2.4.2 Thesis Project of Dr. Can Nguyen ...... 18

2.4.3 Thesis Project of Dr. John Slattery ...... 19

2.4.4 Thesis Project of Dr. David Lee ...... 20

2.4.5 Thesis Project of Dr. Robert J. Murray ...... 20

2.4.6 Thesis Project of Dr. Wilbert Saavedra ...... 21

2.4.7 Thesis Project of Dr. Ched Smaha ...... 21

2.4.8 Thesis Project of Dr. Bryan Hiller ...... 22

2.4.9 Thesis Project of Dr. Jacob Orozco ...... 22

2.5 Recent Works in 3D Morphometrics ...... 23

CHAPTER THREE: AIM ...... 27

CHAPTER FOUR: MATERIALS AND METHODS...... 28


4.1 Subject Selection ...... 28

4.2 Image Acquisition ...... 29

4.3 Superimposition of Cranial Base Models ...... 29

4.4 3D Surface Model Construction ...... 30

4.4.1 Segmentation ...... 30

4.4.2 Segmentation Reliability ...... 32

4.5 Analysis of Registration Accuracy...... 32

CHAPTER FIVE: RESULTS ...... 34

5.1 Segmentation Reliability ...... 34

5.2 Superimposition Analysis ...... 37

CHAPTER SIX: DISCUSSION ...... 41

6.1 Data Collection ...... 41

6.2 Data Processing ...... 42

6.3 Image Segmentation ...... 43

6.4 System Accuracy ...... 43

6.5 System Deficiencies ...... 44

6.6 Future Areas for Research ...... 44

CHAPTER SEVEN: CONCLUSION ...... 46

REFERENCES CITED ...... 47


APPENDIX A: CBCT Superimposition Protocol ...... 53

A.1 Initial Data Loading and Processing...... 53

A.1.1 Loading CBCT Data ...... 53

A.1.2 DICOM Volume Resampling ...... 55

A.2 Volume Rendering ...... 57

A.2.1. Application of Color Maps to Volume Rendering...... 58

A.3 Object Pool Setup ...... 60

A.4 Volume Registration ...... 61

A.4.1 Rough Alignment of Volumes ...... 61

A.4.2. Rigid Registration ...... 64

A.4.3. Volume Duplication ...... 66

A.5 Data Isolation ...... 67

A.5.1 OrthoSlice Orientation ...... 67

A.5.2 Volume Cropping ...... 72

A.6 Segmentation ...... 79

A.6.1 Thresholding ...... 79

A.6.2 Slice Editing ...... 83

A.6.3 3D Volume Editing ...... 86

A.6.4 Surface Generation...... 88

A.7 Surface Analysis ...... 91


A.7.1 Region of Interest Definition ...... 91

A.7.2 SurfaceDistance Computation ...... 93

A.7.3 Vector Map Computation ...... 98


LIST OF TABLES

Table 2.1 Anatomical boundaries of the cranial base ...... 5

Table 2.2. Image noise described as scatter-to-primary ratio in CT systems...... 12

Table 4.1. Subjects included in study...... 28

Table 4.2. Region of interest boundaries ...... 31

Table 5.1. Surface distance error data for repeated segmentations...... 34

Table 5.2. Surface distance data for superimposed volumes...... 37


LIST OF FIGURES

Figure 2.1. Comparison of CT vs. CBCT imaging systems ...... 8

Figure 2.2. The cone-beam effect...... 11

Figure 2.3 Structured light projected onto a physical subject ...... 19

Figure 2.4. Thresholding and edge detection ...... 24

Figure 2.5. Chamfer map generation ...... 25

Figure 4.1. Superimposition of CBCT Volumes...... 30

Figure 4.2. The segmentation process ...... 31

Figure 4.3. Segmented voxel data of the posterior cranial base ...... 32

Figure 4.4. Surface distance ...... 33

Figure 5.1. Superimposed and cropped CBCT volume renderings...... 35

Figure 5.2. Intra-rater comparison of sequential CBCT volume segmentations...... 36

Figure 5.3. Topographical changes between sequential CBCT volumes of subject 1...... 38

Figure 5.4. Topographical changes between sequential CBCT volumes of subject 2...... 39

Figure 5.5. Topographical changes between sequential CBCT volumes of subject 3...... 40



CHAPTER ONE

INTRODUCTION

Elements of the natural world exist in three spatial dimensions, yet scientists and artists throughout the history of mankind have attempted to describe human anatomy in grossly oversimplified two dimensional terms. As technology advances, so does the ability to come ever closer to generating true representations of living anatomy in three dimensions. Current 3D image capture technologies have generated astounding visual representations of the human form, but have outpaced the development of true 3D analytical methods.

Cone beam computed tomography (CBCT) imaging has taken the orthodontic community by storm as of late, replacing a series of standard two dimensional radiographs with data gathered in a single scan. Despite the beauty of images captured via CBCT, the lack of a true advantage in routine diagnosis and treatment planning relative to the increased radiation exposure has led to recommendation only for limited orthodontic purposes. These indications include impacted teeth, temporomandibular joint evaluations, analysis of upper airway constrictions, estimation of dental age, assessment of maxillofacial growth and development, and treatment planning for orthognathic surgery (Silva et al. 2008).

Until recent years, the goal of producing true anatomical models was limited by available technology, resulting in a conglomeration of site-specific two dimensional image sets for diagnosis and treatment planning (Quintero et al. 1999). Current methods of 3D analysis are largely adaptations of existing 2D analytical methods, and provide no greater insight, as they cannot be compared with norms generated from traditional 2D image capture technology (van Vlijmen et al. 2009). As imaging technology has evolved into the 3D era, so must analytical methods. In order to understand the true nature of morphology in 3D, new 3D data sets must be generated for longitudinal studies, and their subsequent analyses should be natively developed in 3D as well.

This is a product development study to establish a reproducible method of superimposing posterior cranial base anatomy segmented from CBCT volumes through voxel-wise rigid registration in a single software package. The resulting method will be used in subsequent studies to develop meaningful surface analyses of anatomical changes in the craniofacial complex.

The Graduate Orthodontics program at Temple University Kornberg School of Dentistry has a strong tradition of leading the orthodontic community into the world of 3D surface imaging. This project follows in the footsteps of studies at Temple University by Nguyen, Slattery, Saavedra, Murray, Nuveen, Smaha, Hiller, and Orozco, and will hopefully provide a framework for future research and the eventual development of a true 3D craniofacial surface analysis.


CHAPTER TWO

REVIEW OF LITERATURE

2.1 Cephalometric Radiography

Roentgen's discovery of x-rays in 1895 revolutionized medicine and dentistry. Radiography came into widespread use in the field of orthodontics through Broadbent's invention of roentgenographic cephalometry in 1931 (Broadbent 1931; Proffit, Fields & Sarver 2007). Although the original intention of cephalometry was to interpret craniofacial growth patterns, it was later found that dentofacial proportions could be assessed through the same radiographic technique.

With the firm establishment of cephalometric radiography as an image capture modality, many rushed to make sense of the new data generated through the technique.

The first article on the subject, published by De Coster, related anatomical landmarks on cephalometric radiographs to dentofacial proportions through a grid system (Wahl 2006a; Moorrees 2006). Soon thereafter, Downs produced a set of cephalometric ideals through his study of facial proportions of 20 non-orthodontically treated Caucasian adolescents with ideal dental occlusions (Proffit, Fields & Sarver 2007; Downs 1956). His study, "Variations in Facial Relationships: Their Significance in Treatment and Prognosis," became known as simply the Downs analysis (Wahl, 2006b). Despite the fundamental error incorporated into the data set from such a small sample size, the analysis showcased the utility of cephalometric radiography in the assessment of underlying skeletal proportions, and paved the way for larger scale studies.

As cephalometric radiographs gained widespread acceptance in orthodontics, many authors followed with their own methods of image analysis. Steiner's analysis modernized cephalometrics by recognizing patterns among measurements and offering suggestions for their use in treatment planning (Steiner, 1953; Wahl, 2006b).

Other analyses exploring vertical and horizontal proportional relationships in the craniofacial complex were developed, yet these analyses were largely used to characterize normal craniofacial relationships (Wahl, 2006b; Sassouni, 1955).

2.1.1 Cephalometric Superimposition

Anatomical changes resulting from growth and development or orthodontic-orthopedic treatment are analyzed by superimposition of serial cephalometric radiographs (Arat et al., 2003). Many superimposition methods have been generated based on evidence supporting stability and consistent radiographic appearance of anatomical structures (Coben, 1955; Björk, 1968; Broadbent et al., 1975; Ricketts, 1960).

2.1.2 Controversy in Cephalometrics

The validity of linear and angular measurements as descriptors of morphology and craniofacial growth has been questioned since the inception of cephalometric analysis. How can a complex anatomical form be characterized by the distance between two landmarks, or the angle created by two lines resulting from structures with different radiopacities? Proponents of cephalometric analyses have been eager to demonstrate all manners of craniofacial growth and treatment effects and to establish treatment plans based on their results (Downs, 1956; Steiner, 1953; Sassouni, 1955; Coben, 1955; Björk, 1968; Broadbent et al., 1975; Ricketts, 1960). Opponents of cephalometric analysis have noted the inconsistency of landmark identification and results of comparable methods, the effect of patient positioning, inter-rater error, and myriad factors affecting reproducibility (Arat, Rubenduz & Akgul 2003; Bergerson 1961; Moyers & Bookstein 1979; Ghafari et al., 1987).

2.1.3 Cranial Base Morphology and Stability

It is generally accepted that 90 to 95 percent of cranial growth is complete by age 6, with negligible increments of growth in the cranial base after age 7. As a result, the cranial base has been used as a convenient reference point for growth studies (Sicher, 1970).

Scott described the cranial base in three regions: the posterior cranial base extending from the foramen magnum to the hypophyseal fossa, the middle cranial base extending from the hypophyseal fossa to the foramen cecum, and the anterior cranial base, extending from the foramen cecum to nasion (Table 2.1)(Scott, 1967).

Region                   Anatomical Boundaries
Posterior Cranial Base   Foramen magnum, hypophyseal fossa
Middle Cranial Base      Hypophyseal fossa, foramen cecum
Anterior Cranial Base    Foramen cecum, nasion

Table 2.1 Anatomical boundaries of the cranial base.

Scott examined the growth patterns of each region, concluding that the middle cranial base contained the most stable morphology, reaching 98 percent of adult size between the ages of 8 and 13 years. The anterior and posterior regions were less stable.

Activity of the spheno-occipital synchondrosis remained a factor into early adulthood in the posterior region, while growth at nasion and the frontal sinuses continued even later in life (Scott, 1967).

Craniometric studies by Ford conducted on dried skulls showed regions of the cranial base exhibited either neural or skeletal growth rates. The foramen magnum and middle cranial base exhibited neural growth rates ceasing between the ages of 6 and 8 years, while the posterior and anterior regions exhibited the general skeletal growth rate (Ford, 1958).

Studies by Baume, Bergerson, Björk, Ford and Scott all noted changes in the hypophyseal fossa during growth, specifically its rise attributed to factors such as enlargement of the sphenoid air sinuses, growth of the spheno-occipital synchondrosis, or minor remodeling (Baume, 1957; Björk, 1955; Bergerson, 1961; Scott, 1967; Ford, 1958; Steuer, 1972).

Several studies by Moss noted the relative stability of the anterior cranial base on the assumption that neural growth in the region is completed by the end of the third year, concluding that the “cerebral surface of the cranial base is constant in size, shape and position,” and reported that medial areas of the cranial base exhibit greater stability than lateral areas (Moss & Greenberg, 1955; Moss & Salentijn, 1969).

Thus, a multitude of studies have validated the medial aspect of the cranial base as a stable region for superimposition of serial radiographs. While the middle cranial base is the preferred region for superimposition in growing subjects, the posterior cranial base is suitable for non-growing subjects and provides a smoother, more regular surface for segmentation and registration.


2.2 Cone Beam Computed Tomography

Three-dimensional imaging has been a mainstream diagnostic modality of the medical community for over thirty years in the form of computed tomography (CT).

While many healthcare specialties have eagerly incorporated CT imaging, its use in the dental community has been limited due to minimal diagnostic benefits relative to radiation exposure, cost, and low vertical resolution (Silva et al., 2008). Furthermore, the size and cost of a CT scanner is not well suited to private dental offices or imaging centers.

Cone-beam computed tomography is an adaptation of computed tomography with several significant differences (Figure 2.1). The traditional CT system is comprised of a high output rotating anode generator producing a fan-shaped x-ray beam. Radiation is captured by a horizontal array of detectors, resulting in a series of axial plane images.

These images are then stacked to create a volume of information that can be interpreted in three dimensions. In comparison, a CBCT system is comprised of a low energy fixed anode tube generating a cone-shaped x-ray beam. Radiation is captured by either a large surface area solid state sensor or amorphous silicon plate, resulting in a cylindrical volume of image information in a single pass. This information can also be interpreted in three dimensions (Arai et al., 1999).


Figure 2.1. Comparison of CT vs. CBCT imaging systems (Mah & Hatcher, 2004).

2.2.1 CBCT Limitations

CBCT technology is mainly limited by image quality relative to noise and contrast resolution associated with detection of large amounts of scattered radiation. This becomes a significant factor in larger fields of view. Artifacts, limitations of reconstruction algorithms, physical properties of cone beams, noise and contrast also affect resolution and image detail (Scarfe & Farman, 2008).


2.2.2 Artifacts

An artifact is defined as any distortion or error in an image that is unrelated to the subject being studied. Beam hardening, the increase in mean beam energy caused by preferential absorption of lower energy photons, is a significant factor in CBCT due to its lower kilovolt (peak) energy in comparison to conventional CT. Metallic restorations and dental implants can result in beam hardening, subsequent cupping or streaking between dense objects, and loss of image quality (Scarfe & Farman, 2008).

2.2.3 Partial Volume Averaging

Partial volume averaging occurs in both conventional and CBCT imaging. When the voxel resolution of a scan is lower than the spatial or contrast resolution of the structure to be imaged, a single voxel is not representative of the tissue or boundary it contains. Boundaries are represented as weighted averages of the different CT values, often resulting in a "stepped" appearance, or regions with homogenous intensity levels. These artifacts occur in regions where anatomy changes rapidly in the z direction. They can be minimized by selecting a smaller voxel size at the cost of a higher radiation dose (Scarfe & Farman, 2008).

2.2.4 Undersampling

Undersampling can occur when reconstruction is performed with too few basis projections. Insufficient data leads to registration errors, sharp edges and noisier images due to aliasing. Fine striation artifacts can appear in the image. This becomes problematic when high resolution is required, and can be avoided by maintaining an adequate number of basis projection images (Scarfe & Farman, 2008).


2.2.5 Cone Beam Effect

The physical properties of cone-shaped beams also present challenges in image acquisition. CBCT images exhibit the highest image quality and contrast at the center of the volume, and the lowest at the periphery. This phenomenon is known as the "cone-beam effect." The amount of image information gathered is relative to the amount of solid volume through which the beam passes, the greatest being the center of the subject. Therefore, central anatomical structures are subject to a high degree of beam overlap, resulting in greater image detail at the center of mass, with degradation of image quality at the periphery. The image resolution gradient is most easily seen in corners of the image, where minimal beam overlap results in a v-shaped artifact of reduced image quality (Scarfe & Farman, 2008).


Figure 2.2. The cone-beam effect. Cone beam geometry can be described as a projection of three x-ray beams, one perpendicular, one angled inferiorly, and the last angled superiorly, shown at two projections 180 degrees apart. The amount of data collected for reconstruction is relative to the amount of solid volume between overlapping projections. This results in maximal data collection of central structures, with degraded image quality in the periphery. The midsagittal slice exhibits this effect as a peripheral "V" artifact of reduced contrast, noise and distortion (Scarfe & Farman, 2008).


2.2.6 Image Noise

A large volume is irradiated by the cone-shaped beam in every basis image projection. This results in a large amount of non-linear attenuation and omnidirectional scattering of the beam. Since a large surface area detector is used, scattered radiation is registered by the detector in addition to the primary beam. This additional x-ray information is called noise, and contributes to image degradation and loss of detail.

Smaller surface area detectors exhibit less noise due to this phenomenon (Table 2.2) (Scarfe & Farman, 2008).

CT Type           Detector Array Size   Scatter-to-Primary Ratio
Single-Ray        Small                 0.01
Fan Beam/Spiral   Small                 0.05-0.15
CBCT              Large                 0.4-2.0

Table 2.2. Image noise described as scatter-to-primary ratio in CT systems.

2.2.7 Patient Related Artifacts

Patient movement can also cause errors in volume reconstruction, as differences in basis image projections cannot be properly registered. This can produce artifacts manifesting as horizontal streaks in the reconstructed image. Motion artifacts can be minimized by using head restraints, which can distort soft tissue contours, or decreasing scan time, which decreases scan resolution (Scarfe & Farman, 2008).

Metallic dental restorations or jewelry in the field of view can lead to degradation of image sharpness, horizontal streaks, and increased image noise due to beam hardening and insufficient gathering of photons at the detector. Metallic artifacts can be reduced by removing jewelry or other metallic objects prior to scanning. Artifacts from dental restorations cannot be avoided, however (Scarfe & Farman, 2008).

CBCT imaging is a relatively new, yet quickly evolving technology. Many of its limitations can be compensated for through software correction or adjustments of image capture methods. Despite these limitations, CBCT imaging is currently the most practical modality of 3D imaging available for routine use in orthodontics, and may very well replace panoramic and lateral cephalometric radiographs in the future.

2.3. 3D Biomedical Imaging and Analysis Software

Through reconstruction of native projections, CBCT data can be used to generate views that intraoral, panoramic, and cephalometric radiography cannot provide. Volumetric data sets are reconstructed as slices in three orthogonal planes by the image capture software. Customized projections, traditional 2D images, 3D renderings and extraction of voxel data for surface analyses can be created through further processing (Scarfe & Farman, 2008). Open source software such as 3DSlicer, SCIRun, Seg3D and ITK-SNAP have been produced through NIH-funded research, and are available free of charge, with full access to source code repositories. Source code access allows customization of the software for specific purposes. Several commercial entities have developed imaging software as well, such as Anatomage, CyberMed, Medicim, MotionView3D and Amira.

Open source software packages are freely available, yet require a significant amount of technical expertise to install and maintain, often with limited documentation and support.


While commercial software can be costly, the level of support, documentation, ease of use and installation is greatly improved (Lerner & Tirole, 2002).

2.3.1 Multiplanar Reformation

Volumetric data generated through CBCT is comprised of isotropic voxels. This allows the production of non-orthogonal sections of voxel data. This process, known as multiplanar reformation, or MPR, allows the generation of non-axial 2D images. MPR can also be used to reconstruct distortion-free traditional projections such as panoramic or cephalometric radiographs as well as custom cross-sections of user-specified anatomy (Scarfe et al., 2006).

2.3.2 Surface Rendering

Surface rendering is a method of producing 3D images directly from volumetric data. A threshold level of a specific radiodensity corresponding to the tissue of interest is selected by the user. Voxels within the threshold levels are then used to construct and display a three-dimensional model on screen. Multiple surfaces can be rendered based on thresholds of different tissues, and colors can be assigned for ease of identification.

Surface rendering is dependent upon thresholding. Therefore, inhomogeneity in CBCT gray value data can lead to improper visualization of certain anatomical structures such as sella turcica, condyles and orbital walls. Internal structures of elements within the defined threshold are also not visible, as the surfaces alone are rendered (Swennen et al., 2009).
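The thresholding step that drives surface rendering can be sketched in a few lines. The example below uses the open-source scikit-image library rather than the commercial packages discussed in this section; the input file name and the gray value threshold are hypothetical, illustrative values only.

```python
# Minimal sketch of threshold-based surface extraction via marching cubes.
# The input array and the threshold are illustrative assumptions, not values
# taken from this study or from any particular scanner.
import numpy as np
from skimage import measure

volume = np.load("cbct_volume.npy")   # hypothetical 3D array of voxel gray values
bone_threshold = 1200                 # illustrative iso-level for cortical bone

# Voxels at the chosen iso-level are converted into a triangulated surface mesh.
verts, faces, normals, values = measure.marching_cubes(volume, level=bone_threshold)
print(f"Surface mesh: {len(verts)} vertices, {len(faces)} triangles")
```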


2.3.3 Volume Rendering

Volume rendering is the second method of producing 3D images directly from volumetric data. Rather than displaying surfaces within a given single threshold, varying transparencies and colors can be assigned to voxels in multiple gray value ranges. This allows the entire semi-transparent volume to be viewed as a single image that can be manipulated by the operator. Volume rendering is more appropriate for visualization of anatomy than actual measurement, as specific surfaces are not defined (Swennen et al., 2009).

2.3.4 Image Segmentation

Any sort of image analysis usually involves "segmenting" the image into parts and measuring their various properties and relationships (Rosenfeld, 1979). In the field of medical imaging, the process of segmentation specifically applies to delineation of boundaries of anatomical structures in consecutive slices of 3D images (Yushkevich et al., 2006). Segmentation can be accomplished manually, automatically, or through a combination of both techniques, allowing export of 3D surface data in non-proprietary formats and subsequent analysis in myriad software.

2.3.4.1 Manual Segmentation

Manual segmentation involves delineation of anatomical boundaries by a trained expert. This puts the expert in complete control of the process, but is time consuming and error-prone. Contours traced in subsequent slices without a 3D reference image can result in mismatched regions and jagged, unnatural edges that present difficulties in shape analysis. Consistency of segmentation results is also problematic. Studies have demonstrated a high frequency of significant discrepancies between delineations produced by different experts. Discrepancies have also been noted in repeated attempts by a single expert (Yushkevich et al., 2006).

2.3.4.2 Automatic Segmentation

Fully automated and user-guided segmentation methods are also available.

Automation algorithms include probabilistic models of image intensity, atlas deformation and statistical shape models (Wells et al., 1996; Davies et al., 2002; Joshi et al., 2002).

Fully automated segmentation methods remain dependent upon user-defined parameters and the quality of input data; therefore, user error is not completely eliminated. Semi-automated techniques incorporating an initial grayscale threshold or active contour process followed by hand-editing of slices have been used successfully and validated in a number of studies (Yushkevich et al., 2006; Cevidanes et al., 2005a; Cevidanes et al., 2005b; Cevidanes et al., 2006; Cevidanes et al., 2009; Cevidanes et al., 2009b; Cevidanes et al., 2010).

2.3.5 Image Registration

Image registration is the process of transforming multiple data sets into a single coordinate system. Registration is required to perform any comparison, integration or measurement between two or more data sets and can be accomplished by spatially transforming the target image to the reference image. While a multitude of registration techniques are available, each can be categorized by their method of relating target space images to reference space images. Rigid registration techniques involve only changes of spatial orientation, without manipulation of object proportions or form. Non-rigid registration techniques can include elements of rigid registration, while allowing local warping of the target image to align with the reference image (Goshtaby, 2005).

2.3.5.1 Rigid Registration

Rigid registration techniques maintain object proportions and form of both target and reference images throughout changes in spatial orientation. Rigid registration is accomplished by linear transformations limited to translation and rotation. Affine registration is a more general composition of linear transformations, adding scaling and shearing, that preserves collinearity and ratios of distances. Angles and lengths, however, may not be preserved, resulting in artifacts (Weisstein).
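The practical difference between rigid and affine transformations can be seen in a small numerical sketch. The NumPy example below (all values arbitrary) applies a rigid transform and an affine transform to the same three points; the rigid transform leaves every edge length unchanged, while the affine transform does not.

```python
# Illustrative comparison of rigid vs. affine transformations (arbitrary values).
import numpy as np

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0]])

R = rotation_z(np.radians(30))        # rotation
t = np.array([5.0, -2.0, 1.0])        # translation
S = np.diag([1.0, 0.8, 1.2])          # anisotropic scaling (affine only)

rigid = points @ R.T + t              # preserves lengths and angles
affine = points @ (R @ S).T + t       # preserves collinearity, not lengths

def edge_lengths(p):
    return [round(np.linalg.norm(p[i] - p[j]), 3) for i, j in [(0, 1), (0, 2), (1, 2)]]

print("original:", edge_lengths(points))
print("rigid:   ", edge_lengths(rigid))    # identical to original
print("affine:  ", edge_lengths(affine))   # scaled
```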

2.3.5.2 Mutual Information Registration

Mutual information (MI), or relative entropy, is a similarity measure used for rigid registration and is based upon fundamental concepts of information theory. Rather than relying on specific shape, feature, or point correspondence matching, MI measures the informational redundancy between image intensities of corresponding voxels in both images. This redundancy is assumed to be maximal when the images are geometrically aligned (Maes et al., 1997).
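A rough sense of the measure itself can be conveyed with a short sketch. The NumPy example below computes mutual information from a joint gray-value histogram of two equally sized volumes; it is a didactic approximation, not the implementation used by AMIRA or MIRIT, and the bin count and test arrays are arbitrary.

```python
# Didactic sketch of mutual information computed from a joint histogram.
import numpy as np

def mutual_information(image_a, image_b, bins=64):
    """MI between the gray values of two equally shaped arrays of voxels."""
    joint, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability p(a, b)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(a)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(b)
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# MI is maximal when the two volumes are geometrically aligned; a rigid
# registration searches the rotation/translation that maximizes this value.
a = np.random.rand(32, 32, 32)
print(mutual_information(a, a))                           # high (self-alignment)
print(mutual_information(a, np.random.rand(32, 32, 32)))  # near zero
```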

MI alignment accuracy has been verified for rigid registration of computed tomography (CT), magnetic resonance (MR) and positron emission tomography (PET) images (Zhang et al., 2009; Gan et al., 2008; Ren et al., 2003). Since the technique does not require prior segmentation, alignment of anatomical features, or image preprocessing, it is well suited to clinical applications, and has been used in a multitude of craniofacial morphology studies (Cevidanes et al., 2005a; Cevidanes et al., 2005b; Cevidanes et al., 2006; Cevidanes et al., 2009; Cevidanes et al., 2009b; Cevidanes et al., 2010; Cevidanes et al., 2010b; Mah et al., 2010).

2.4 Previous Research from Temple University Department of Orthodontics

2.4.1 Thesis Project of Dr. Michael Nuveen

Orthodontic resident Michael Nuveen developed the first three dimensional computer imaging system for diagnosis and treatment planning at Temple University.

The system was based on scans of contour lines projected onto the face of a radiographic mannequin. Contour lines included hard and soft tissue structures, and were digitized to create a three-dimensional computer model of the physical subject. Frontal photographs, lateral and posteroanterior cephalograms were incorporated into the digital model. One hundred and three skeletal points were identified on the physical and digital models, facilitating measurement in the digital space through a Euclidean X,Y,Z coordinate system and standard radiographic conversion formulas. The three dimensional skeletal complex models were rendered on a Macintosh platform (Nuveen, 1996).

2.4.2 Thesis Project of Dr. Can Nguyen

Orthodontic resident Can Nguyen expanded on the work of Michael Nuveen at Temple University through projecting "structured light" onto the physical subject rather than contour lines. A monochrome LCD projector was used to cast an array of dots onto the same mannequin used in the previous study (Figure 2.3). Soft tissue images were captured by a black and white camera mounted in such a fashion to allow a 30 degree angle of divergence between the light source and subject. Using the same biplanar radiographic technique as Nuveen, skeletal landmarks were used to derive 3D skeletal coordinates and register the radiographic images to those obtained through capture of structured light. Hard and soft tissue data were integrated on a Macintosh platform (Nguyen, 1998).

Figure 2.3 Structured light projected onto a physical subject (Nguyen, 1998).

2.4.3 Thesis Project of Dr. John Slattery

Orthodontic resident John Slattery expanded upon previous work at Temple University through developing a system called Three-Dimensional Craniofacial Imaging and Animation (CIA). Slattery employed the Minolta 7000 laser scanner for three-dimensional surface capture rather than structured light, introducing a higher level of accuracy than ever before. Surface information was then reconstructed with Minolta's Vivid software, and exported to a standard file format. Next, surface scans, biplanar cephalometric radiographs and a generic skull model were integrated in a three-dimensional digital workspace using 3D Studio MAX to create a single model capable of manipulation and animation (Slattery, 2001).


2.4.4 Thesis Project of Dr. David Lee

Orthodontic resident David Lee improved upon the existing CIA system at Temple University through implementing digital cameras, digital x-rays, patient placement with cephalometers, and integration of Invisalign three-dimensional dental models into the resulting patient model. These enhancements greatly improved the quality and reproducibility of the three-dimensional patient record. Digital radiography allowed incorporation of higher definition images into the patient model, as resolution and magnification errors introduced through scanning traditional cephalometric films were eliminated. Soft tissue profiles were more easily captured through digital radiography and Cliniview XV™ software, allowing improvements in image registration and overall quality of model construction. Invisalign models were generated from impressions scanned by Align Technologies and exported from Treat III™ software. All digital elements were integrated and animated using 3D Studio MAX (Lee, 2004).

2.4.5 Thesis Project of Dr. Robert J. Murray

Orthodontic resident Robert Murray adapted the CIA system at Temple University to incorporate digital dental models generated through scans provided from OrthoCAD™ rather than Align Technologies. Dental models were canted five to ten degrees above and below the occlusal plane. Multiple laser surface scans of soft tissue were taken from seven different perspectives and stitched together to create a more complete 3D model. The resulting digital soft tissue surfaces were comprised of roughly sixty thousand data points and manipulated with a variety of software packages to produce a full motion human smile animation and resulting film exhibiting the model from several different perspectives (Murray, 2004).

2.4.6 Thesis Project of Dr. Wilbert Saavedra

Orthodontic Resident Wilbert Saavedra of Temple University applied the use of the Craniofacial Imaging and Animation system to simulate the effects of extractions in conjunction with treatment projections generated by Invisalign and OrthoCAD. Digital dental models exported from Invisalign's Treat III software were used to simulate tooth movement, while smile animations were constructed with models exported from OrthoCAD. A series of fourteen animated films were generated through rendering 360 frames in 3DStudioMAX and MAYA to simulate the effects of premolar extraction and space closure (Saavedra, 2004).

2.4.7 Thesis Project of Dr. Ched Smaha

Orthodontic resident Ched Smaha performed the final study using the Craniofacial Imaging and Animation system at Temple University. Smaha's study was an attempt to simplify and automate previous image capture and data processing methods to allow generation of high quality 3D treatment simulations and smile animations by auxiliary staff with minimal training. Although a detailed set of reproducible instructions was created and system automation greatly decreased the time required to generate animations, it was determined that the Craniofacial Imaging and Animation system was not cost effective for routine use (Smaha, 2006).


2.4.8 Thesis Project of Dr. Bryan Hiller

Orthodontic resident Bryan Hiller explored practical uses of CBCT in the creation of physical models from digital scan information. In his study, clear plastic aligners were fabricated from stereolithography models generated from CBCT data, and their fit was compared to aligners created from polyvinyl siloxane (PVS) impressions and from CT scans of PVS impressions. It was determined that PVS impressions and models created from their CT scans were highly accurate, but CBCT study models were unable to capture dental anatomy with enough detail to be useful. It was concluded that impressionless systems based on CBCT technology were feasible, but not practical at that time (Hiller, 2008).

2.4.9 Thesis Project of Dr. Jacob Orozco

Orthodontic resident Jacob Orozco continued 3D imaging research at Temple University through a qualitative study assessing the consistency of diagnosis and treatment planning decisions based on traditional records versus 3D CBCT. Pretreatment records of twenty-one Class I division 1 patients were evaluated by eight orthodontists. Each orthodontist was asked to diagnose and treatment plan seven cases using three different types of diagnostic record sets: traditional study models with panoramic and lateral cephalometric radiographs, CBCT scans, and intra/extraoral photographs. The study indicated that Angle classification, crowding/spacing and overjet were diagnosed consistently between record sets, but CBCT was slightly less useful in diagnosing excessive overjet. Treatment plans generated with CBCT data were more likely to favor extraction when compared to traditional and photographic record groups (Orozco, 2009).


2.5 Recent Works in 3D Morphometrics

Cevidanes et al. (2005) described a method of superimposing serial CBCT volumes to evaluate hard-tissue changes in ten orthognathic surgery patients. CBCT scans were taken before and after undergoing maxillary surgery to correct various malocclusions. 3D models were created from the CBCT volumetric data using semi-automatic segmentation and manual editing with ITK-SNAP (National Library of Medicine and National Institutes of Health, Bethesda, Md; freely available). CBCT volumes were registered using the maximization of mutual information (MI) method via the MIRIT software package (Gerig et al., 2001; Maes et al., 1997). VALMET software was used to assess inter- and intra-observer differences in segmentations as well as to calculate differences between registered pre- and post-treatment CBCT volumes (Gerig et al., 2001). Color maps were applied for qualitative analysis, and mean surface distance was calculated to quantify the overall difference between the two surfaces. Interobserver errors were shown to average 0.02mm (SD = 0.01mm), suggesting the reliability of the technique. While the study detailed a robust method for CBCT superimposition, the MIRIT and VALMET software is no longer available. Furthermore, the use of multiple customized software packages combined with a high level of support from software and development teams places the technique outside the scope of routine clinical use.

Lee et al. (2006) described a method of automatic segmentation and registration of CBCT volumes to measure hard and soft tissue changes after mandibular setback surgery. Pre- and post-surgery CBCT scans were taken on ten patients. Hard tissues were segmented from the CBCT volumes using a gray value thresholding and edge detection technique based on specific Hounsfield units (HU) as in medical CT (Figure 2.4). Chamfer maps were applied to the segmented surfaces to minimize surface irregularities (Figure 2.5). Resulting surfaces were then registered using an adaptive optimization algorithm whereby average surface distance was calculated and minimized to generate rigid transformation parameters. Soft tissue changes were assessed using cephalometric landmarks applied to a surface grid. The method showed a statistically significant difference between adaptive optimization and manual registration techniques. While Lee and associates showed impressive results, the software package used in the study was not disclosed to allow validation. Unfortunately, it has been shown that HU image densities vary greatly between CBCT scans, and between CBCT and medical CT scans as well (Armstrong, 2006). HU derivation coefficients for CBCT scanners were not computed until several years after this study (Mah et al., 2010). Furthermore, registration of the segmented surfaces relies upon accuracy of the segmentation process, which can incorporate significant error (Yushkevich et al., 2006).

Figure 2.4. Thresholding and edge detection. Automatic segmentation is accomplished through Boolean operations of the original image (a), a threshold image containing the anatomy of interest (b) and an edge image defining contours of the region of interest (Lee et al., 2006).


Figure 2.5. Chamfer map generation. Region of interest boundaries are determined by calculating the edges of highest levels of intensity though a chessboard transformation (a). The resulting chamfer map (b) shows the calculated boundaries where the darkest pixels are the furthest from the region of interest, and a smooth contour line (Lee et al., 2006).

Mah et al. (2010) discussed another method of CBCT volume superimposition based on maximization of mutual information. Volumes from two adult orthognathic surgery patients and one mixed dentition patient were obtained. Registration areas of the cranial base were selected in multiplanar slice views, and superimposed with OnDemand3D (Cybermed, Inc., Seoul, Korea). The authors showed impressive results, and the ability of the software to superimpose volumes obtained from different CT modalities, such as medical CT with CBCT, or between different scanners with different settings, all within a single software package. The authors also described capabilities such as DICOM export of superimposed volumes, and Boolean operations for comparison. Despite the impressive resulting images, no segmentation capabilities for advanced surface analysis were mentioned (Choi & Mah, 2010).

Cevidanes et al. (2010) continued their previous work, using similar techniques with slight variations in software packages. A method was described for superimposing serial CBCT volumes on the cranial base to evaluate soft tissue changes in 3D. Serial CBCT images were captured and formatted to a voxel size of 0.5mm and cropped. Surface 3D models were constructed through segmentation of craniofacial hard and soft tissues with ITK-SNAP. Images were registered on the anterior cranial base using a fully-automated voxel-wise rigid technique based on maximization of mutual information with the open source IMAGINE software package (National Institutes of Health, Bethesda, Md; freely available). Soft tissue and hard tissue changes were evaluated via calculation of Euclidean surface distances and application of color maps using the CMF software package (Maurice Müller Institute, Bern, Switzerland). Despite the discussion of technique modifications for growing patients, no specifics regarding patient selection or repeatability of the technique were provided. While effective, their processing pipeline consisting of multiple open-source software packages is unwieldy and poorly documented, placing it outside the scope of routine clinical use.

Recent publications have shown impressive image superimposition and segmentation techniques in addition to effective data processing software pipelines, yet they fall outside the realm of clinical use due to issues of practicality and efficiency. The software packages listed occupy two ends of the spectrum: they either have the necessary capabilities but require specialized training, staff and hardware, or they offer high image quality with limited analytical power. Thus, a streamlined, all-inclusive method of CBCT volumetric data processing within a single software package is the next logical step in creating a true 3D surface analysis.


CHAPTER THREE

AIM

The aims of the investigation are as follows:

1. To develop an easily reproducible method of 3D surface model construction of the posterior cranial base from CBCT volumes.

2. To develop a reproducible method of registration and segmentation of sequential CBCT volumes.

3. To provide foundational knowledge for future craniofacial analyses and growth studies.


CHAPTER FOUR

MATERIALS AND METHODS

4.1 Subject Selection

Existing anonymized CBCT scans were obtained from the database of the Kornberg School of Dentistry Department of Radiology and analyzed retrospectively. An exempt status was granted for the experimental protocol approved by the Institutional Review Board. Eleven patients having more than one scan taken in the School of Dentistry were identified. Growing patients were eliminated from the data set on the basis of age information included in DICOM file headers, as were scans collimated to exclude the anatomy of the cranial base. Scans with patient positioning errors, poor contrast, or differing exposure times and voxel sizes were also eliminated. The resulting data set consisted of three patients with two sequential scans each. The average patient age at the time of the first scan was 62 years, 3 months. The average time elapsed between scans was 6.4 months. Subject information is provided in Table 4.1.

Subject   Sex   Ethnicity          Age at first scan   Time elapsed between scans
1         F     Caucasian          48y 9mo             9 mo
2         F     Caucasian          62y 2mo             9 mo
3         M     African American   75y 6mo             0 mo

Table 4.1. Subjects included in study.


4.2 Image Acquisition

Images in the Temple University database were captured using the iCAT, a CBCT scanner optimized for maxillofacial imaging (Imaging Sciences International, 1910 North Penn Road, Hatfield, PA, 19440). The iCAT possesses a 17x23 cm field of view and a 20x25 cm flat panel sensor. The scanner generates a cylindrical volume and is capable of capturing data at resolutions up to 0.125mm voxel size, with an effective radiation dose of 36 - 74 µSv. Scan times range from 4 seconds to 26.9 seconds. Scans were saved as a series of 2D slices and encoded in Digital Imaging and Communications in Medicine (DICOM) format.

All images were captured with the same machine in a 4 second scan time at 120 kVp and 0.4mm voxel size, resulting in 400 axial and 400 sagittal slices each.
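For readers without access to AMIRA, the slice-loading step documented in Appendix A.1 can be approximated with the open-source SimpleITK library. The sketch below is a hedged illustration of reading a DICOM slice series into a single volume; the directory path is hypothetical.

```python
# Sketch of loading a CBCT scan stored as a DICOM slice series into one volume.
# Uses the open-source SimpleITK library, not AMIRA; the path is hypothetical.
import SimpleITK as sitk

reader = sitk.ImageSeriesReader()
slice_files = reader.GetGDCMSeriesFileNames("/data/subject1/scan1")
reader.SetFileNames(slice_files)
volume = reader.Execute()

print("Voxel size (mm):", volume.GetSpacing())   # e.g. (0.4, 0.4, 0.4)
print("Dimensions:", volume.GetSize())           # e.g. (400, 400, 400)
```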

4.3 Superimposition of Cranial Base Models

Cranial base models were superimposed using a rigid registration technique. Rough surface approximation was performed through manual rotation and translation of volume-rendered DICOM data sets imported into AMIRA. Final alignment was achieved through voxel-based automatic registration with the AffineRegistration module in AMIRA (Figure 4.1). The AffineRegistration module is a comprehensive rigid registration tool employing sum-of-squared-differences and mutual information measures.
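The study itself used AMIRA's AffineRegistration module; as an open-source analogue, the hedged sketch below shows the same two-stage idea (rough initialization followed by mutual-information-driven rigid optimization) using SimpleITK. File names and optimizer parameters are illustrative, not the settings used in this study.

```python
# Sketch of voxel-based rigid registration driven by mutual information,
# using SimpleITK as an open-source stand-in for the AMIRA workflow.
# File names and parameter values are illustrative assumptions.
import SimpleITK as sitk

fixed = sitk.ReadImage("scan1.nrrd", sitk.sitkFloat32)
moving = sitk.ReadImage("scan2.nrrd", sitk.sitkFloat32)

# Rough alignment: center the moving volume on the fixed volume.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

# Final alignment: maximize a mutual information metric over rigid parameters.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(initial, inPlace=False)

final_transform = reg.Execute(fixed, moving)
aligned = sitk.Resample(moving, fixed, final_transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(aligned, "scan2_registered.nrrd")
```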


Figure 4.1. Superimposition of CBCT Volumes. Models are approximated in 3D space (a). Final alignment is achieved by the AffineRegistration module (b).

4.4 3D Surface Model Construction

A 3D surface mesh of the cranial base was created for analysis of registration and superimposition of each CBCT volume using commercial software (AMIRA, version 5.3.3, Visage Imaging, Inc., San Diego, CA).

4.4.1 Segmentation

CBCT volumes were imported into AMIRA via sequential 2D images encoded in DICOM files. In order to minimize the amount of data processing required, a region of interest (ROI) was specified to initially isolate the posterior cranial base from the remainder of the volume. A detailed description of the ROI boundaries can be found in Table 4.2.


Boundary    Description
Anterior    Most posterior aspect of the ethmoid sinus at its junction with the anterior aspect of the sphenoid sinus, to include the anterior clinoid process in its entirety.
Posterior   Most anterior extent of the foramen magnum (Basion), to include the dorsum sellae and clivus in their entirety.
Lateral     Most lateral aspects of the posterior clinoid process.
Superior    Most superior extent of the posterior clinoid process (Clinoidale).
Inferior    Most anterior extent of the foramen magnum (Basion).

Table 4.2. Region of interest boundaries.

For each subject, segmentation was performed by an orthodontic resident and verified by an oral and maxillofacial radiologist using a semi-automatic technique. Voxels with gray values corresponding to cortical bone were identified in each scan and added to each label within the region of interest (Figure 4.2a). Relevant voxels were then isolated from each slice by hand to match corresponding CBCT data using a gray value threshold of -3000 to 1000 HU (Figure 4.2b). Segmented voxel information was then extracted from CBCT data and saved (Figure 4.2c). Mesh smoothing operations were then automatically performed by Amira (Figure 4.3).
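The thresholding that seeds the semi-automatic segmentation can be expressed as a simple mask operation. The NumPy sketch below mirrors the gray value range quoted above; the input file is hypothetical, and the manual slice editing performed in AMIRA is not reproduced here.

```python
# Sketch of the gray value thresholding step that seeds the segmentation label.
# The input array is hypothetical; the threshold range mirrors the one above.
import numpy as np

roi = np.load("cropped_roi.npy")       # hypothetical cropped region of interest
lower, upper = -3000, 1000             # gray value threshold range used above

label = (roi >= lower) & (roi <= upper)   # boolean mask of candidate voxels
print("Labeled voxels:", int(label.sum()))

# Individual slices (label[z]) would then be hand-edited to remove noise and
# non-cranial-base anatomy before surface generation and smoothing.
```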


Figure 4.2. The segmentation process exhibiting gray value thresholding (a), manual editing (b) and resulting voxel information (c).


Figure 4.3. Segmented voxel data of the posterior cranial base extending from Basion (left) to the anterior clinoid process (right) rendered as a smoothed triangular mesh.

4.4.2 Segmentation Reliability

Reliability of segmentation was assessed by comparison of serial segmentations performed by two individuals proficient in cranial base anatomy, and verified by an oral and maxillofacial radiologist prior to final registration and superimposition.

4.5 Analysis of Registration Accuracy

Accuracy of superimposition was analyzed using the Amira SurfaceDistance module. SurfaceDistance computes differences between two triangulated surfaces. For each vertex on one surface, the closest point on the other surface was calculated, and a histogram of values was generated. Six measurements were taken from this histogram: mean distance, standard deviation from the mean distance, root mean square distance, maximum distance, median distance, and the percentage of surface area deviating more than 0.5 voxel (0.25mm), 1 voxel (0.5mm) and 2 voxels (1mm). The resulting data were then visualized through the application of a color map (Figure 4.4).
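The per-vertex closest-point computation and summary statistics can be approximated outside AMIRA as well. The sketch below uses SciPy's cKDTree to find, for each vertex of one mesh, the nearest vertex on the other mesh, then reports statistics analogous to those listed above; vertex-to-vertex distance and the fraction of vertices are only approximations of the module's point-to-surface distance and %area figures, and the file names are hypothetical.

```python
# Sketch of a surface distance computation analogous to the SurfaceDistance
# module: nearest-neighbor distances between two vertex sets plus summary stats.
import numpy as np
from scipy.spatial import cKDTree

verts_a = np.load("surface_scan1_vertices.npy")   # hypothetical (N, 3) arrays
verts_b = np.load("surface_scan2_vertices.npy")

distances, _ = cKDTree(verts_b).query(verts_a)    # closest vertex on the other mesh

voxel = 0.5  # mm, resampled voxel size
stats = {
    "mean": distances.mean(),
    "sd": distances.std(),
    "rms": np.sqrt(np.mean(distances ** 2)),
    "max": distances.max(),
    "median": np.median(distances),
    # Fraction of vertices is used here as a rough proxy for percent surface area.
    ">0.25mm": 100 * np.mean(distances > 0.5 * voxel),
    ">0.5mm": 100 * np.mean(distances > 1.0 * voxel),
    ">1mm": 100 * np.mean(distances > 2.0 * voxel),
}
print(stats)
```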

Figure 4.4. Surface distance visualization. A color map was applied to the resulting surface distance data, ranging from 0mm (blue) to 0.5mm (red).


CHAPTER FIVE

RESULTS

5.1 Segmentation Reliability

Sequential CBCT volumes were registered using the AffineRegistration module. Volume renderings were generated and cropped to isolate the region of interest (Figure 5.1). Segmentations were performed twice on each CBCT volume to determine intra-rater reliability in generation of 3D surfaces from voxel data. Surface distance was then calculated, statistics collected and color maps generated for qualitative analysis (Table 5.1, Figure 5.2). Subjects 1 and 3 exhibited maximum errors of 0.5mm or less, indicating that the segmentations did not contain errors in surface generation greater than one voxel. Mean surface differences in all repeated segmentations were less than 0.1mm. Subject 2 exhibited the highest maximum surface distance at 1.6mm between serial segmentations of the first volume.

Volume   Mean (mm)   SD (mm)   RMS (mm)   Max (mm)   Median (mm)   >0.25mm (%area)   >0.5mm (%area)   >1mm (%area)
1-1      0.05        0.05      0.075      0.5        0.03          1.65              0.005            0
1-2      0.05        0.08      0.1        0.5        0.0015        3.9               0.03             0
2-1      0.042       0.06      0.075      1.6        0.03          1.9               0.005            0
2-2      0.023       0.045     0.05       0.45       0.0015        0.9               0                0
3-1      0.065       0.08      0.1        0.07       0.025         3.7               0.16             0
3-2      0.05        0.07      0.085      0.425      0.0025        2.6               0                0

Table 5.1. Surface distance error data for repeated segmentations.


Figure 5.1. Superimposed and cropped CBCT volume renderings. (a) Subject 1, left superior oblique view. (b) Subject 1, right superior oblique view. (c) Subject 2, left superior oblique view. (d) Subject 2, right superior oblique view. (e) Subject 3, left superior oblique view. (f) Subject 3, right superior oblique view.


Figure 5.2. Intra-rater comparison of sequential CBCT volume segmentations. (a) Subject 1, first volume. (b) Subject 1, second volume. (c) Subject 2, first volume. (d) Subject 2, second volume. (e) Subject 3, first volume. (f) Subject 3, second volume.


5.2 Superimposition Analysis

The posterior cranial base was segmented from the superimposed serial CBCT volumes of three subjects after rigid registration with the AffineRegistration module. Surface distance was then calculated, statistics collected, and color maps generated for qualitative assessment (Table 5.2, Figures 5.3-5.5). All three subjects showed mean surface distances of less than 0.15mm, RMS of less than 0.25mm, and standard deviations of less than 0.2mm. Subjects 1 and 2 exhibited deviations greater than 0.5mm in less than 1 percent of the surface area, while subject 3 exhibited greater variation in surface distance between CBCT volumes. Qualitative assessment of the color maps indicates deviations localized to the region of the posterior clinoid process (Figure 5.4).

Subject   Mean (mm)   SD (mm)   RMS (mm)   Max (mm)   Median (mm)   >0.25mm (%area)   >0.5mm (%area)   >1mm (%area)
1         0.085       0.07      0.11       0.6        0.65          3.2               0.05             0
2         0.11        0.095     0.15       1.2        0.085         7.6               1.0              0.035
3         0.14        0.18      0.23       1.5        0.09          11                4.7              2.4
Average   0.11        0.12      0.16       1.1        0.28          7.27              1.92             0.81

Table 5.2. Surface distance data for superimposed volumes.


Figure 5.3. Topographical changes between sequential CBCT volumes of subject 1.


Figure 5.4. Topographical changes between sequential CBCT volumes of subject 2.


Figure 5.5. Topographical changes between sequential CBCT volumes of subject 3.


CHAPTER SIX

DISCUSSION

The popularity of CBCT imaging in the field of orthodontics cannot be ignored. Analytical methods for interpretation of the resulting data sets, however, have not kept pace with the technological advancements that have generated them. The Temple University Department of Orthodontics has recognized this disparity, and has continued to advance the understanding of three-dimensional imaging through transformation of theoretical knowledge into practical techniques.

6.1 Data Collection

Despite the continually increasing number of CBCT machines in use, procurement of serial volumes with adequate field of view, resolution and contrast proved difficult. An initial series of CBCT volumes was obtained from the Department of Orthodontics at the University of Nevada Las Vegas. These volumes were captured with a NewTom CBCT system as part of standard initial and final records protocol for orthodontic treatment. Unfortunately, CBCT volumes were not categorized or stored in any sort of database allowing efficient search and retrieval methods. Multiple CBCT volumes meeting inclusion criteria were retrieved, but lacked sufficient image contrast for segmentation of the posterior cranial base. The superimposition protocol allowed seemingly successful rigid registration of sequential volumes, but the results could not be analyzed without the ability to extract surface data.

A second set of CBCT volumes was obtained from the Temple University

Kornberg School of Dentistry Department of Radiology. Of the myriad CBCT volumes contained in the departmental database, only eleven serial CBCT volumes were located.


Although the scans captured with the iCAT machine exhibited consistently high image quality, contrast, and resolution, only three sets of serial CBCT volumes contained adequate posterior cranial base anatomy in the field of view.

Capture of CBCT volumes of adequate resolution and contrast currently involves significant radiation exposure, and the diagnostic value of CBCT volumes relative to exposure levels for routine use in orthodontics is debatable. As a result, CBCT images are primarily taken for indications other than orthodontic treatment. These indications rarely involve a follow-up scan, and those that do are often collimated to a limited field of view to minimize radiation exposure. Thus, sources of sequential full-field-of-view CBCT volumes are rare. As CBCT technology improves, radiation exposure levels may decrease to the point where 3D imaging becomes the norm in documentation of orthodontic treatment, and sources of research data will abound.

6.2 Data Processing

Recent advances in computer processing power have greatly improved rendering capabilities of most common desktop workstations. As a result, 3D imaging software can perform adequately on such machines when limited to pure visualization.

Efficient and timely analysis of 3D volumetric data, however, still requires the use of high-performance graphics workstations. In this study, DICOM volumes were resampled from a 0.4mm voxel size to 0.5mm not only to decrease the computer processing requirements, but also to simplify surface distance conversions and calculations in AMIRA. As processing power and scan resolution increase, analysis of volumetric data at native voxel size will become practical, thereby minimizing the effects of partial volume averaging and chamfer mapping on surface data.
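As a rough illustration (not a figure reported in this study), resampling from 0.4mm to 0.5mm isotropic voxels reduces the voxel count by a factor of (0.5/0.4)^3 ≈ 1.95, so the resampled volume occupies roughly half the memory of the original at the cost of some spatial detail.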

6.3 Image Segmentation

Extraction of surface data from CBCT volumes continues to present significant challenges, particularly in areas of minimal cortical bone thickness. The posterior cranial base provided a largely regular, smooth surface, greatly simplifying the segmentation task. In most cases, segmentation was accomplished efficiently through gray value thresholding, with minimal slice editing required. Subject 3 presented the greatest segmentation difficulty, as poor bone density combined with high levels of metallic artifact obscured the posterior cranial base anatomy. Selection of younger subjects with fewer metallic dental restorations could easily address those particular issues. Subjects 1 and 2 exhibited better definition of posterior cranial base anatomy, yet still required significant slice editing to eliminate image noise from the region of interest. Smooth surfaces exhibited fewer segmentation discrepancies, while the borders of foramina and surface irregularities displayed greater discrepancies. Fully automated segmentation methods could greatly improve the efficiency of the superimposition protocol, but are not currently integrated into the AMIRA software.

6.4 System Accuracy

The superimposition protocol consistently delivered sub-voxel accuracy in surface distances between registered sequential CBCT volumes, as shown in Table 5.2. Segmentation accuracy remains somewhat questionable, as the requirement for human intervention in the process introduces error into the system. This proved significant in areas of poor bone density and inadequate image contrast, such as in subject 3 (Figure 5.1e, Figure 5.1f, Figure 5.5). For the purpose of this study, given the 0.5mm voxel size, the level of error was nominal. Future studies of craniofacial growth, bone remodeling, or tooth movement will most likely require scans of higher resolution for adequate surface data capture. Higher resolutions will minimize the effect of undersampling and produce larger, more true-to-life image data sets. Again, efficient, fully automated image segmentation would be essential to the practicality of the system for routine clinical use.
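For readers who wish to reproduce the summary statistics of Table 5.2 outside of AMIRA, the following sketch (illustrative only; the per-vertex distance values and variable names are hypothetical, and exact "% area" figures would require weighting each vertex by its associated surface area) shows how the reported quantities can be derived once per-vertex surface distances in millimeters are available:

    import numpy as np

    # d: per-vertex surface distances in millimeters (hypothetical example values).
    d = np.abs(np.array([0.02, 0.11, 0.35, 0.08, 0.61, 0.05]))

    mean_mm = d.mean()                 # mean surface distance
    sd_mm = d.std(ddof=1)              # standard deviation
    rms_mm = np.sqrt(np.mean(d ** 2))  # root mean square distance
    max_mm = d.max()                   # maximum distance
    median_mm = np.median(d)           # median distance

    # Fraction of vertices exceeding each threshold (approximates the "% area"
    # columns only if vertices sample the surface roughly uniformly).
    for t in (0.25, 0.5, 1.0):
        pct = 100.0 * np.count_nonzero(d > t) / d.size
        print(f">{t} mm: {pct:.2f}% of vertices")

    print(f"mean={mean_mm:.3f} sd={sd_mm:.3f} rms={rms_mm:.3f} "
          f"max={max_mm:.3f} median={median_mm:.3f} (mm)")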

6.5 System Deficiencies

The rigid registration protocol employing the AffineRegistration module was reproducible, accurate, and able to compensate for most discrepancies in patient positioning and other sources of error. Rigid registration has been shown to work well in non-growing individuals, where growth of the spheno-occipital synchondrosis does not affect the overall dimensions of the posterior cranial base. Future growth studies based on superimposition of CBCT volumes will require elastic registration techniques and additional data processing steps to account for major changes in surface morphology. Elastic registration algorithms are computationally intensive and currently lie outside the realm of practicality for clinical use.

6.6 Future Areas for Research

Mutual information registration and superimposition techniques can easily be applied to other regions of the craniofacial complex. Further validation studies could be conducted on other smooth, regular surfaces, such as the zygomatic arch, mandible, or maxilla. The accuracy of the superimposition could readily accommodate higher resolution scans to study tooth movement or bone remodeling in limited field of view CBCT volumes.

As automated segmentation methods are integrated into AMIRA software, the efficiency of the system can be greatly improved and the protocol applied to large-scale studies.


CHAPTER SEVEN

CONCLUSION

The focus of this thesis was to create a simplified method of CBCT superimposition and surface analysis in a single, well-supported commercial software package for use in future studies. The technique was documented in a step-by-step fashion, thereby creating a user-friendly manual for the creation and analysis of surface models generated from CBCT data sets. While the process remains time-consuming, it consistently generated high-quality results. The following objectives were achieved:

1. A reproducible method of superimposition and 3D surface model construction of the posterior cranial base from CBCT volumes was adapted from existing techniques, simplified, and documented in an easy-to-follow manual.

2. Sub-voxel accuracy in superimposition and surface generation was consistently demonstrated.

3. Segmentation of anatomical structures remains a rate- and accuracy-limiting factor. Use of fully automated segmentation techniques could significantly improve image processing efficiency and accuracy, but is not currently integrated into the AMIRA software.

4. Foundational knowledge was gained in the form of encounters with limitations of existing CBCT data sets and imaging software, allowing improvement of data collection methods for future studies.


REFERENCES CITED

Arai, Y., Tammisalo, E., Iwai, K., Hashimoto, K., & Shinoda, K. (1999). Development of a compact computed tomographic apparatus for dental use. Dento Maxillo Facial Radiology, 28(4), 245-248. doi:10.1038/sj/dmfr/4600448

Arat, Z. M., Rubenduz, M., & Akgul, A. A. (2003). The displacement of craniofacial reference landmarks during puberty: A comparison of three superimposition methods. The Angle Orthodontist, 73(4), 374-380.

Armstrong, R. T. (2006). Acceptability of cone beam CT vs. multi-detector CT for 3D anatomic model construction. AAOMS, 64(37).

Baume, L. J. (1957). A biologist looks at sella point. Transactions of the European Orthodontic Society, 150-159.

Bergerson, E. O. (1961). A comparative study of cephalometric superimposition. Angle Orthodontist, 31, 216-229.

Björk, A. (1955). Cranial base development. American Journal of Orthodontics, 41, 198- 255.

Björk, A. (1968). The use of metallic implants in the study of facial growth in children: Method and application. American Journal of Physical Anthropology, 29, 243–254.

Björk, A. (1969). Prediction of mandibular growth rotation. American Journal of Orthodontics, 55(6), 585-599.

Bookstein, F. L. (1996). Applying landmark methods to biological outline data. Image Fusion and Shape Variability, 59-70.

Bookstein, F. L. (1997). Landmark methods for forms without landmarks: Localizing group differences in outline shape. Medical Image Analysis, 1, 225-243.

Broadbent, B. H. (1931). A new x-ray technique and its application to orthodontia. The Angle Orthodontist, 1(2), 45-66.

Broadbent, B. H., Broadbent, B. H., & Golden, W. H. (1975). Bolton standards of dentofacial developmental growth,. Saint Louis: Mosby.

Cevidanes, L. H., Bailey, L. J., Tucker, G. R.,Jr, Styner, M. A., Mol, A., Phillips, C. L., et al. (2005). Superimposition of 3D cone-beam CT models of orthognathic surgery patients. Dento Maxillo Facial Radiology, 34(6), 369-375. doi:10.1259/dmfr/17102411


Cevidanes, L. H., Franco, A. A., Gerig, G., Proffit, W. R., Slice, D. E., Enlow, D. H., et al. (2005). Assessment of mandibular growth and response to orthopedic treatment with 3-dimensional magnetic resonance images. American Journal of Orthodontics and Dentofacial Orthopedics, 128(1), 16-26. doi:10.1016/j.ajodo.2004.03.032

Cevidanes, L. H., Heymann, G., Cornelis, M. A., DeClerck, H. J., & Tulloch, J. F. (2009). Superimposition of 3-dimensional cone-beam computed tomography models of growing patients. American Journal of Orthodontics and Dentofacial Orthopedics, 136(1), 94-99. doi:10.1016/j.ajodo.2009.01.018

Cevidanes, L. H., Motta, A. T., Proffit, W. R., Ackerman, J. L., & Styner, M. A. (2010). Cranial base superimposition for 3-dimensional evaluation of soft-tissue changes. American Journal of Orthodontics and Dentofacial Orthopedics, 137(4 Suppl), S120-9. doi:10.1016/j.ajodo.2009.04.021

Cevidanes, L. H., Styner, M. A., & Proffit, W. R. (2006). Image analysis and superimposition of 3-dimensional cone-beam computed tomography models. American Journal of Orthodontics and Dentofacial Orthopedics, 129(5), 611-618. doi:10.1016/j.ajodo.2005.12.008

Cevidanes, L. H., Styner, M. A., & Proffit, W. R. (2009). Three-dimensional superimposition of the skull base for the longitudinal evaluation of the effects of growth and of treatment. [Superposition tridimensionnelle (3-D) sur la base du crane pour l'evaluation longitudinale des effets de la croissance et du traitement] L' Orthodontie Francaise, 80(4), 347-357. doi:10.1051/orthodfr/2009021

Choi, J. H., & Mah, J. (2010). A new method for superimposition of CBCT volumes. Journal of Clinical Orthodontics : JCO, 44(5), 303-312.

Coben, S. E. (1955). The integration of facial skeletal variants: A serial cephalometric roentgenographic analysis of craniofacial form and growth. American Journal of Orthodontics, 41(6), 407-434.

Cutting, C., Bookstein, F. L., Grayson, B., Fellingham, L., & McCarthy, J. G. (1986). Three-dimensional computer-assisted design of craniofacial surgical procedures: Optimization and interaction with cephalometric and CT-based models. Plastic and Reconstructive Surgery, 77(6), 877-887.

Davies, R. H., Twining, C. J., Cootes, T. F., Waterton, J. C., & Taylor, C. J. (2002). A minimum description length approach to statistical shape modeling. IEEE Transactions on Medical Imaging, 21(5), 525-537. doi:10.1109/TMI.2002.1009388

Downs, W. B. (1956). Analysis of the dentofacial profile. Angle Orthodontist, 26(4), 191- 212.


Ford, E. H. (1958). Growth of the human cranial base. American Journal of Orthodontics, 44, 498-506.

Gan, R., Chung, A. C., & Liao, S. (2008). Maximum distance-gradient for robust image registration. Medical Image Analysis, 12(4), 452-468. doi:10.1016/j.media.2008.01.004

Gerig, G., Jomier, M., & Chakos, M. (2001). Valmet: A new validation tool for assessing and improving 3D object segmentation. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2001), Utrecht, Netherlands. 516-528.

Ghafari, J., Engel, F. E., & Laster, L. L. (1987). Cephalometric superimposition on the cranial base: A review and a comparison of four methods. American Journal of Orthodontics and Dentofacial Orthopedics, 91(5), 403-413.

Goshtaby, A. (2005). 2-D and 3-D image registration for medical, remote sensing, and industrial applications. Hoboken, NJ: J. Wiley & Sons.

Grauer, D., Cevidanes, L. H., Styner, M. A., Heulfe, I., Harmon, E. T., Zhu, H., et al. (2010). Accuracy and landmark error calculation using cone-beam computed tomography-generated cephalograms. The Angle Orthodontist, 80(2), 286-294. doi:10.2319/030909-135.1

Hiller, B. C. (2008). Qualitative examination of the accuracy of a three-dimensional digital study model isolated from a cone beam computed tomography scan. Temple University).

Joshi, S., Pizer, S., Fletcher, P. T., Yushkevich, P., Thall, A., & Marron, J. S. (2002). Multiscale deformable model segmentation and statistical shape analysis using medial descriptions. IEEE Transactions on Medical Imaging, 21(5), 538-550. doi:10.1109/TMI.2002.1009389

Lee, D. P. (2004). Integration of invisalign clincheck models with the three dimensional facial image and animation of the human smile. Temple University).

Lerner, J., & Tirole, J. (2002). Some simple economics of open source. The Journal of Industrial Economics, 50(2), 197-234.

Maes, F., Collignon, A., Vandermeulen, D., Marchal, G., & Suetens, P. (1997). Multimodality image registration by maximization of mutual information. IEEE Transactions on Medical Imaging, 16(2), 187-198. doi:10.1109/42.563664

Mah, J., & Hatcher, D. C. (2004). Three-dimensional craniofacial imaging. American Journal of Orthodontics and Dentofacial Orthopedics, 126(3), 308-309. doi:10.1016/S0889540604005621


Mah, J., Huang, J. C., & Choo, H. (2010). Practical applications of cone-beam computed tomography in orthodontics. Journal of the American Dental Association (1939), 141 Suppl 3, 7S-13S.

Mah, P., Reeves, T. E., & McDavid, W. D. (2010). Deriving hounsfield units using grey levels in cone beam computed tomography. Dento Maxillo Facial Radiology, 39(6), 323-335. doi:10.1259/dmfr/19603304

Moorrees, C. F. A. (2006). Twenty centuries of cephalometry. In A. Jacobson (Ed.), Radiographic cephalometry (pp. 13-32). Carol Stream, IL: Quintessence.

Moss, M. L., & Greenberg, S. N. (1955). Postnatal growth of the human skull base. Angle Orthodontist, 25, 77-84.

Moss, M. L., & Salentijn, L. (1969). The capsular matrix. American Journal of Orthodontics, 56, 474-90.

Moss, J. P. (2000). 2D or not 2D? that is the question. American Journal of Orthodontics and Dentofacial Orthopedics, 117(5), 580-581.

Moyers, R. E., & Bookstein, F. L. (1979). The inappropriateness of conventional cephalometrics. American Journal of Orthodontics, 75(6), 599-617.

Murray, R. J. (2004). Animation of the human smile with the craniofacial imaging and animation system. Temple University).

Nguyen, C. (1998). 3D image construction of the craniofacial complex. Temple University).

Nuveen, M. J. (1996). Development of a low-cost three-dimensional imaging system for orthodontic diagnosis and treatment planning. Temple University:

Orozco, J. (2009). A study of treatment consistency in the class II patient: Conventional diagnostic records V. cone beam computed tomography. Temple University).

Proffit, W. R., Fields, H. W., & Sarver, D. M. (2007). Contemporary orthodontics (4th ed.). St. Louis, Mo.: Mosby Elsevier.

Quintero, J. C., Trosien, A., Hatcher, D. C., & Kapila, S. (1999). Craniofacial imaging in orthodontics: Historical perspective, current status, and future developments. The Angle Orthodontist, 69(6), 491-506.

Ren, H. P., Yang, H., Ma, B. R., Chen, S. Z., & Wu, W. K. (2003). A novel magnetic resonance-positive emission image registration based on morphology. Bio-Medical Materials and Engineering, 13(3), 187-196.


Ricketts, R. M. (1960). Cephalometric synthesis: An exercise in stating objectives and planning treatment with tracings of the head roentgenogram. American Journal of Orthodontics, 46(9), 647-673.

Rosenfeld, A. (1979). Digital topology. The American Mathematical Monthly, 86(8), pp. 621-630. Retrieved from http://www.jstor.org/stable/2321290

Saavedra, W. (2004). Computer animation of the smile and the peri-oral soft tissue with the craniofacial imaging and animation system. Temple University).

Sassouni, V. (1955). A roentgenographic cephalometric analysis of cephalo-facio-dental relationships. American Journal of Orthodontics, 41(10), 735-64.

Scarfe, W. C., & Farman, A. G. (2008). What is cone-beam CT and how does it work? Dental Clinics of North America, 52(4), 707-730. doi:10.1016/j.cden.2008.05.005

Scarfe, W. C., Farman, A. G., & Sukovic, P. (2006). Clinical applications of cone-beam computed tomography in dental practice. Journal (Canadian Dental Association), 72(1), 75-80.

Scott, J. H. (1967). Dento-facial development and growth. Oxford: Pergamon Press.

Sicher, H. (Ed.). (1970). Oral anatomy (5th ed.). St. Louis: The C. V. Mosby Company.

Silva, M. A., Wolf, U., Heinicke, F., Bumann, A., Visser, H., & Hirsch, E. (2008). Cone- beam computed tomography for routine orthodontic treatment planning: A radiation dose evaluation. American Journal of Orthodontics and Dentofacial Orthopedics, 133(5), 640.e1-640.e5. doi:10.1016/j.ajodo.2007.11.019

Slattery, J. (2001). Development of the laser scan craniofacial imaging and animation system. Temple University).

Smaha, C. E. (2006). Method of image capture to create three-dimensional virtual patient and human smile animation for orthodontic diagnosis and treatment planning. Temple University).

Steiner, C. C. (1953). Cephalometrics for you and me. American Journal of Orthodontics and Dentofacial Orthopedics, 39, 729-755.

Steuer, I. (1972). The cranial base for superimposition of lateral cephalometric radiographs. American Journal of Orthodontics, 61(5), 493-500.

Subsol, G., Thirion, J. P., & Ayache, N. (1998). A scheme for automatically building three-dimensional morphometric anatomical atlases: Application to a skull atlas. Medical Image Analysis, 2(1), 37-60.


Swennen, G. R., Mollemans, W., De Clercq, C., Abeloos, J., Lamoral, P., Lippens, F., et al. (2009). A cone-beam computed tomography triple scan procedure to obtain a three-dimensional augmented virtual skull model appropriate for orthognathic surgery planning. The Journal of Craniofacial Surgery, 20(2), 297-307. doi:10.1097/SCS.0b013e3181996803

Swennen, G. R., Mollemans, W., & Schutyser, F. (2009). Three-dimensional treatment planning of orthognathic surgery in the era of virtual imaging. Journal of Oral and Maxillofacial Surgery, 67(10), 2080-2092. doi:10.1016/j.joms.2009.06.007

Thompson, D. W. (1979; 1917). On growth and form. London; New York: Cambridge University Press.

van Vlijmen, O. J., Maal, T. J., Berge, S. J., Bronkhorst, E. M., Katsaros, C., & Kuijpers-Jagtman, A. M. (2009). A comparison between two-dimensional and three-dimensional cephalometry on frontal radiographs and on cone beam computed tomography scans of human skulls. European Journal of Oral Sciences, 117(3), 300-305. doi:10.1111/j.1600-0722.2009.00633.x

Wahl, N. (2006). Orthodontics in 3 millennia. chapter 7: Facial analysis before the advent of the cephalometer. American Journal of Orthodontics and Dentofacial Orthopedics, 129(2), 293-298. doi:10.1016/j.ajodo.2005.12.011

Wahl, N. (2006). Orthodontics in 3 millennia. chapter 8: The cephalometer takes its place in the orthodontic armamentarium. American Journal of Orthodontics and Dentofacial Orthopedics, 129(4), 574-580. doi:10.1016/j.ajodo.2006.01.013

Weisstein, E. W. Affine transformation. Retrieved Feb 2, 2011, from http://mathworld.wolfram.com/AffineTransformation.html

Wells, W. M., Grimson, W. L., Kikinis, R., & Jolesz, F. A. (1996). Adaptive segmentation of MRI data. IEEE Transactions on Medical Imaging, 15(4), 429-442. doi:10.1109/42.511747

Yushkevich, P. A., Piven, J., Hazlett, H. C., Smith, R. G., Ho, S., Gee, J. C., et al. (2006). User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability. NeuroImage, 31(3), 1116-1128. doi:10.1016/j.neuroimage.2006.01.015

Zhang, Y., Chu, J. C., Hsi, W., Khan, A. J., Mehta, P. S., Bernard, D. B., et al. (2009). Evaluation of four volume-based image registration algorithms. Medical Dosimetry, 34(4), 317-322. doi:10.1016/j.meddos.2008.12.004


APPENDIX A: CBCT Superimposition Protocol

A.1 Initial Data Loading and Processing

A.1.1 Loading CBCT Data

Open Amira. Import CBCT data by selecting the “Open Data” button and browsing to the directory containing pertinent DICOM files.

Select all files in the pertinent directory and click “Load”.


DICOM volumes with voxel sizes smaller than 0.4mm (i.e., higher resolution) can cause the system to issue an "Out-of-Core Data" warning. Select "Read complete volume into memory" and "OK". Volumes with voxel sizes of 0.4mm or larger should load without a warning.

The DICOM loader window will appear, giving a detailed description of images to be stacked and processed. Click “OK”

The DICOM volume will now appear as a green object in the Object pool
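For reference, an equivalent DICOM series import can be sketched with the open-source SimpleITK library (not the software used in this protocol; the directory path is hypothetical):

    import SimpleITK as sitk

    # Read all DICOM slices in a directory and stack them into a single 3D volume.
    reader = sitk.ImageSeriesReader()
    dicom_dir = "/path/to/scan1_dicom"  # hypothetical directory
    series_ids = reader.GetGDCMSeriesIDs(dicom_dir)
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir, series_ids[0]))
    scan1 = reader.Execute()  # 3D image, analogous to the "Scan1" object

    print("Voxel size (mm):", scan1.GetSpacing())
    print("Dimensions (voxels):", scan1.GetSize())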


Hit “F2” and rename the object to “Scan1”.

Repeat for the second DICOM volume. The following object pool should result:

A.1.2 DICOM Volume Resampling

DICOM volume resampling to lower resolution preserves system memory and minimizes processing time. Resample DICOM volumes to 0.5mm by selecting the Scan1 object. Right-click on the object. Select “Compute” and “Resample”. A red “Resample” object will appear in the Object pool.

Select the Resample object. In the Resample Properties panel, select the Lanczos filter and the voxel size mode, then type 0.5 for the voxel size in the x, y, and z boxes. Click "Apply".


The following Object pool will result:

Select the Scan1 object. Click the white triangle icon, followed by “Remove Object”. All operations will be performed on resampled data to maintain integrity of original DICOM volumes. The following Object pool should result

Select the Scan1.Resampled object. Right-click and select “Save as”. Click “OK”. The asterisk should disappear from the Scan1.Resampled object, denoting that the object has been saved.

Repeat the resampling operation for the Scan2 object. The following Object pool should result, with only the objects Scan1.Resampled and Scan2.Resampled.
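The same resampling step can be sketched outside of AMIRA with SimpleITK (illustrative only; the input file name is hypothetical):

    import SimpleITK as sitk

    def resample_isotropic(image, new_spacing_mm=0.5):
        """Resample a volume to isotropic voxels with Lanczos (windowed-sinc) interpolation."""
        old_spacing = image.GetSpacing()
        old_size = image.GetSize()
        # Recompute the grid size so the physical extent of the volume is preserved.
        new_size = [int(round(sz * sp / new_spacing_mm))
                    for sz, sp in zip(old_size, old_spacing)]
        return sitk.Resample(image, new_size, sitk.Transform(),
                             sitk.sitkLanczosWindowedSinc,
                             image.GetOrigin(), (new_spacing_mm,) * 3,
                             image.GetDirection(), 0, image.GetPixelID())

    scan1 = sitk.ReadImage("scan1.nrrd")  # hypothetical path (e.g., an exported volume)
    scan1_resampled = resample_isotropic(scan1, 0.5)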


A.2 Volume Rendering

Select the Scan1.Resampled object followed by “Volren” to generate volume rendering.

The following Object pool will result:

The appropriate volume rendering will appear in the viewfinder:


The volume view can be changed via selecting the “Trackball” and “Translate” icons in the viewport. Use of these icons will not affect volume data or spatial position.

Repeat for the second volume. The following Object pool should result:

A.2.1. Application of Color Maps to Volume Rendering

Red volume rendering colormaps are applied by default (volrenRed.col). Apply a green color map to the second volume to improve alignment visualization by selecting the object “Volren2”. Select “Edit” and “volrenGreen.col”.


The two volume renderings should appear in the viewfinder with contrasting color maps


A.3 Object Pool Setup

Select the Scan2.Resampled object. This is the target image.

Right-click on the object. Select “Compute” and “AffineRegistration”. A red “AffineRegistration” object will appear in the Object pool connected to the Scan2.Resampled object.

Click the white box on the AffineRegistration object and select “Reference”

Connect the blue line to the node at the right of the first volume object. The following Object pool should result:


A.4 Volume Registration

A.4.1 Rough Alignment of Volumes

Select the Scan2.Resampled object. In the Properties panel, select the Transform Editor to roughly align the target volume with the reference.

Select the “Interact” button in the main viewport


The selected volume will appear in the viewport with the transformer manipulator. Green spheres can be selected to rotate the volume about specific axes. The volume can be clicked anywhere to translate along multiple axes. Care must be taken not to select the gray icons at the edges of the bounding box to ensure no changes in scale are made. Rotate and translate the target volume until it more closely approximates the reference. Select the Trackball or Translate icons to adjust the volume angle of view.


When rough alignment is achieved, deselect the transform editor icon in the properties window.


A.4.2. Rigid Registration

Select the AffineRegistration object.

In the properties window, select the “Normalized Mutual Information” metric and “Rigid” transform only. Then select “Register”


The target volume will then be registered to the reference. Aligned volume renderings should result in the viewport.
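Conceptually, this step searches over the six rigid transform parameters (three rotations, three translations) for the transform that maximizes a mutual information metric between the two volumes; normalized mutual information is commonly defined as NMI(A, B) = (H(A) + H(B)) / H(A, B), where H denotes entropy. A minimal rigid registration sketch using SimpleITK is shown below (it is not the AMIRA implementation, uses Mattes mutual information rather than the normalized variant, and the file names are hypothetical):

    import SimpleITK as sitk

    fixed = sitk.ReadImage("scan1_resampled.nrrd", sitk.sitkFloat32)   # reference volume
    moving = sitk.ReadImage("scan2_resampled.nrrd", sitk.sitkFloat32)  # target volume

    # Rough alignment, analogous to the manual Transform Editor step in A.4.1.
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=64)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(initial, inPlace=False)

    rigid = reg.Execute(fixed, moving)  # optimized 6-parameter rigid transform
    moving_registered = sitk.Resample(moving, fixed, rigid, sitk.sitkLinear, 0.0)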


A.4.3. Volume Duplication

In order to preserve the original registered volumes, segmentation operations are performed on duplicates. Select the reference volume, right click and select “Duplicate Object”.

Repeat for the target volume. The following Object pool will result.

Deselect the orange boxes in the initial volumes to disable them in the viewport. Apply volume renderings to the duplicated volumes per step A.2. Apply the green color map to the second duplicated volume per step A.2.1. The following Object pool should result.


A.5 Data Isolation

A.5.1 OrthoSlice Orientation

Click the white triangle icon of the Scan1.Resampled2 object. Select “Display” and “OrthoSlice.” Repeat two more times to create three orthogonal slices. Click the orange boxes in the Volume Rendering objects. The boxes will turn grey to signify they are disabled in the main viewport display. The following Object pool will result.

Select the first OrthoSlice object. In the Properties panel, ensure the orientation is set to "xy". The Data Window should be set to -2000 to 1000 to display the proper Hounsfield Unit contrast for anatomical identification.

Select the OrthoSlice2 object. In the Properties panel, set the orientation to "xz" and the Data Window to -2000 to 1000. Select the OrthoSlice3 object. In the Properties panel, set the orientation to "yz" and the Data Window to -2000 to 1000. The following image should result in the viewport:


Use the “trackball” tool to obtain an axial view of the volume.

Select the OrthoSlice3 (yz orientation) module. In the Properties panel, manipulate the Slice Number slider until the orange line denoting the slice position is aligned with the midsagittal plane of the volume.

Use the "trackball" tool to obtain a sagittal view of the volume.

Select the first OrthoSlice (xy orientation) module. In the Properties panel, manipulate the Slice Number slider until the most superior aspect of the posterior cranial base is displayed.


The following view will result:

Select the OrthoSlice2 (xz orientation) module. In the Properties panel, manipulate the Slice Number slider until the slice is aligned with dorsum sellae.


The following view should result

Using the "trackball", ensure that the anteroposterior, bilateral, and axial extents of the posterior cranial base are contained within the slices.


A.5.2 Volume Cropping

Select the Scan1.Resampled2 object and click the "Crop Editor" icon in the Properties panel.

In the Resolution section of the Crop Editor dialog box, select "Bounding Box".


The bounding box tool will appear in the view finder.

Select the “Interact” tool and manipulate the green bounding box borders. Narrow the crop area to include the following regions:

Boundary        Description
Anterior (A)    Most posterior aspect of the ethmoid sinus at its junction with the anterior aspect of the sphenoid sinus, to include the anterior clinoid process in its entirety.
Posterior (P)   Most anterior extent of the foramen magnum (Basion), to include the dorsum sellae and clivus in their entirety.
Lateral (L)     Most lateral aspects of the posterior clinoid process.
Superior (S)    Most superior extent of the posterior clinoid process (Clinoidale).
Inferior (I)    Most anterior extent of the foramen magnum (Basion).
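Outside of AMIRA, the same crop amounts to extracting a sub-volume over fixed index ranges once the anatomical boundaries above have been identified; a SimpleITK sketch follows (the start index and size are hypothetical example values):

    import SimpleITK as sitk

    volume = sitk.ReadImage("scan1_resampled.nrrd")  # hypothetical path

    # Start index (x, y, z) and size in voxels of the posterior cranial base region,
    # chosen by inspecting the slices against the boundaries listed above.
    start = (180, 150, 210)   # example values
    size = (90, 70, 60)       # example values
    cropped = sitk.RegionOfInterest(volume, size, start)

    sitk.WriteImage(cropped, "scan1_cropped.nrrd")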


[Screenshots: the crop bounding box shown in sagittal, coronal, and axial views, and in multiplanar reconstruction.]


Click “ok”. A properly cropped volume should appear as follows:

[Screenshots: the cropped volume shown in sagittal, coronal, and axial views.]


Volume rendering can be enabled to visualize the cropped volume.

The cropped volume can be viewed from any angle via the “Trackball” tool.


Repeat stages A.5.1 and A.5.2 for the Scan2.Resampled2 object. The following Object pool should result.

Both cropped volumes can be examined in the viewport

Select the Scan1.Resampled2 object. Hit F2 and rename it "Scan1.Cropped". Right-click on the renamed object, select "Save Data As", and save. Select the Scan2.Resampled2 object. Hit F2 and rename it "Scan2.Cropped". Right-click on the renamed object, select "Save Data As", and save.


A.6 Segmentation

A.6.1 Thresholding

Select the Scan1.Cropped object followed by the Segmentation Editor icon.

Select the Scan1.Cropped object under Image Data and create new Label Data.

Double-click on the “Inside” material. Rename to CranialBase1 and select “3D”


Set Zoom and Data window contrast -2000 to 1000.

Select Magic Wand tool with “All slices” option enabled.

Use the View YZ, View XY and View XZ buttons to select the appropriate view.

The mouse wheel can be used to scroll through volume slices.


Click on a region of cortical bone in the viewer. By default, all voxels of similar gray value will be selected.

The Display and Masking section of the Segmentation Editor will display a histogram of gray values contained in the DICOM image, with a red line denoting the specific value of the selected voxel. The text boxes on each side of the section denote the low and high extent of the gray value threshold of similar voxels.


Adjust the Display and Masking threshold to approximate contrast levels of cortical bone.

Add the selected voxels to the CranialBase1 volume label by selecting the “+” icon. The voxel selection can be cleared by using the “eraser” icon.

The voxel selection can also be seen in the 3D viewport.
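The gray-value selection performed here with the Magic Wand can be approximated programmatically; the SimpleITK sketch below (with hypothetical threshold values standing in for the cortical bone range read from the histogram) produces a binary label analogous to CranialBase1:

    import SimpleITK as sitk

    cropped = sitk.ReadImage("scan1_cropped.nrrd")  # hypothetical path

    # Keep voxels whose gray values fall inside the cortical bone range.
    # The bounds are examples; in practice they correspond to the values chosen
    # in the Display and Masking histogram described above.
    label = sitk.BinaryThreshold(cropped, lowerThreshold=600, upperThreshold=3000,
                                 insideValue=1, outsideValue=0)

    # A light morphological closing fills small gaps and medullary spaces,
    # loosely analogous to the manual slice editing step that follows.
    label = sitk.BinaryMorphologicalClosing(label, (2, 2, 2))

    sitk.WriteImage(label, "cranialbase1_label.nrrd")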


Label data should be saved at this point in the Object Pool window by clicking on the white triangle of the Scan1.Cropped.Labels module.

A.6.2 Slice Editing

Segmentation must be verified slice by slice to ensure extraneous voxel data is not included in the volume label. Medullary spaces must also be eliminated from the label data to simplify surface analysis.

Segmented slices are easily adjusted with the Lasso tool. Auto trace and Trace edges should be enabled. In the Selection field, “Current slice” should be enabled.


Selected voxels are added to the volume label.


Voxels can also be added to and subtracted from the label.


A.6.3 3D Volume Editing

Extraneous voxels can be added to or subtracted from the label with the Lasso tool in the 3D viewport.


Return to the Object pool view by clicking on the Object pool icon

Label data should be saved at this point in the Object Pool window by clicking on the white triangle of the Scan1.Cropped.Labels module.


A.6.4 Surface Generation

Surfaces are generated from the segmented volume labels. Select the Scan1.Cropped.Labels object. Click on the white triangle and select “SurfaceGen”

In the SurfaceGen Properties panel, select "Unconstrained smoothing", "Add Border", and "adjust coords" with a Minimal edge length of "0". Click "Apply".

The Scan1.Cropped.Labels.surf surface object will appear in the object pool. Select the object. Click on the white triangle and select “SurfaceView”.


The SurfaceView object will appear in the object pool.

The surface generated from the volume label will be visible in the viewport.
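Surface generation from a binary label is, in essence, an isosurface extraction; a marching-cubes sketch using scikit-image is shown below (it is not the AMIRA SurfaceGen module, and the file name is hypothetical):

    import numpy as np
    import SimpleITK as sitk
    from skimage import measure

    label_img = sitk.ReadImage("cranialbase1_label.nrrd")  # hypothetical path
    label = sitk.GetArrayFromImage(label_img)               # array ordered (z, y, x)
    spacing = label_img.GetSpacing()[::-1]                  # reorder spacing to (z, y, x)

    # Extract a triangle mesh at the 0.5 iso-level of the binary label.
    # Vertex coordinates are returned in millimeters because spacing is supplied.
    verts, faces, normals, values = measure.marching_cubes(
        label.astype(np.float32), level=0.5, spacing=spacing)

    print(f"{len(verts)} vertices, {len(faces)} triangles")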

Repeat stages A.6.1 through A.6.4 for the Scan2.Cropped object, with the following exceptions in stage A.6.1:

o In the Segmentation Editor, select Scan2.Cropped as the image source under Image Data.
o Rename the "Inside" material "CranialBase2" instead of "CranialBase1".
o Select the colored icon to the left of the CranialBase2 material name and select yellow for the volume label color.

The Segmentation Editor panel will appear as follows:


The following object pool will result:

Both cranial base surfaces will be visible in the viewport.


A.7 Surface Analysis

A.7.1 Region of Interest Definition

Select the Scan1.Cropped.Labels.surf object. Click the white triangle icon. Select “Display” and “SelectRoi”

A bounding box will appear in the viewport.

Select the "Interact" icon and adjust the bounding box to contain the region as defined in section A.5.2, ensuring that only overlapping regions are included.


[Screenshots: the region of interest shown in sagittal, superior, posterior, and oblique views.]


A.7.2 SurfaceDistance Computation

Select the Scan1.Cropped.Labels.surf object. Click the white triangle icon, followed by “Compute” and “SurfaceDistance”. Select the resulting SurfaceDistance object. Click the white square icon. Select “Surface2”. Connect the blue line to the node at the right of the Scan2.Cropped.surf object. Click the white square on the SurfaceDistance object again. Select “ROI”. Connect the blue line to the SelectROI object. The following object pool will result:

Select the SurfaceDistance object. In the Properties panel, set the following:

o Direction: "Surface 1 -> 2"
o Maximal distance: "1e+10"
o Above threshold: 0.5
o Output: Vectors and Distance

Click "Apply".

Displacement and distance objects will be generated and the following object pool will result.
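Conceptually, the SurfaceDistance computation finds, for each vertex of surface 1 within the region of interest, the distance to the closest point on surface 2. A simplified sketch using SciPy is shown below (it uses point-to-point rather than point-to-triangle distances, so it only approximates the module's output; the vertex arrays are hypothetical):

    import numpy as np
    from scipy.spatial import cKDTree

    # verts1, verts2: (N, 3) arrays of surface vertex coordinates in millimeters,
    # e.g. from a surface extraction such as the sketch in section A.6.4.
    verts1 = np.random.rand(5000, 3) * 40.0
    verts2 = verts1 + np.random.normal(scale=0.1, size=verts1.shape)

    distances, _ = cKDTree(verts2).query(verts1)  # closest surface-2 vertex per surface-1 vertex

    print(f"mean={distances.mean():.3f} mm, "
          f"rms={np.sqrt(np.mean(distances ** 2)):.3f} mm, "
          f"max={distances.max():.3f} mm, "
          f">0.5 mm: {100.0 * np.mean(distances > 0.5):.2f}% of vertices")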


Check the object pool and ensure the orange "view enable" icon is selected on only the following modules:

o Scan1.Cropped.Labels.surf
o SelectRoi
o Displacements
o Distance

The object pool will appear as below:


Select the distance module. Click the white triangle icon and select "SurfaceView". The generated distance map surface will appear in the viewport.

Select the SurfaceView3 module. In the Colormap option field of the properties panel, select “Edit” and “Physics.icol”.

Set the Colormap range from 0 to 1:


The colormapped surface distance object will appear in the viewfinder:

[Screenshots: the colormapped surface distance shown in sagittal, superior, posterior, and oblique views.]


Statistical information regarding the mean, standard deviation, root mean square, and maximum distance between the two surfaces can be viewed in the SurfaceDistance Info field. Distances are reported in voxel units rather than linear measurement. Values can be converted to linear values through the following formula:

L = M × V

where L = linear value in millimeters, M = voxel size in millimeters, and V = value in voxel units.
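For example, with the 0.5mm voxel size used in this protocol, a reported value of 1.3 voxel units (an illustrative number, not a result of this study) converts to L = 0.5 × 1.3 = 0.65mm.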

Median and percentage above threshold can be viewed in the SurfaceDistance Info2 field

A.7.3 Vector Map Computation

Select the displacements module. Click the white triangle icon and select “vectors”.

Ensure the Vectors object is the only yellow object with the orange “view enable” box selected. The object pool will appear as below:


Select the Vectors object. In the Properties panel, set the following options:

o Colormap: physics.icol
o Scale maximum: 1.00548
o Options: arrows

The vector map will appear in the viewport and can be manipulated with the trackball icon to view the displacement vectors produced by the surface distance computation.
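The displacement vectors visualized here are simply the offsets from each surface-1 vertex to its closest point on surface 2. Continuing the SciPy sketch from section A.7.2 (hypothetical vertex arrays):

    import numpy as np
    from scipy.spatial import cKDTree

    # verts1, verts2: (N, 3) surface vertex arrays in millimeters (hypothetical data).
    verts1 = np.random.rand(5000, 3) * 40.0
    verts2 = verts1 + np.random.normal(scale=0.1, size=verts1.shape)

    _, nearest = cKDTree(verts2).query(verts1)
    vectors = verts2[nearest] - verts1  # one displacement vector per surface-1 vertex

    print("Longest displacement vector: "
          f"{np.linalg.norm(vectors, axis=1).max():.3f} mm")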

