
INTERACTIVE VOLUME RENDERING AND DEFORMATION FOR SURGERY SIMULATION

DISSERTATION

Presented in Partial Fulfillment of the Requirements for

the Degree Doctor of Philosophy in the Graduate

School of The Ohio State University

By

David M. Reed, B.A., M.S.

*****

The Ohio State University 1997

Dissertation Committee:

Dr. Roni Yagel, Adviser

Dr. Richard Parent

Dr. Wayne Carlson

Approved by

Adviser
Department of Computer and Information Science

© Copyright by David M. Reed 1997

All rights reserved.

ABSTRACT

In the past few years, interest in volumetric graphics techniques for many applications has increased. Until recently, computers were not powerful enough to support applications using volumetric data. Advances in computer hardware and new volumetric algorithms have made volumetric approaches attractive for many applications. Medical applications and scientific visualization are two areas that commonly use volumetric data, because the data in these applications is more naturally represented using volumes and because volumes support features that are not adequately handled by surface representations.

This dissertation extends the fields of volume rendering and deformation to provide a framework for surgical simulation. A volumetric rendering algorithm that overcomes the depth sorting problem of irregular grids and takes advantage of polygon rendering hardware to produce images faster than existing algorithms is presented. A physically-based simulation method is presented that uses volumetric data to avoid the problems caused by surface representations. The method is fast, numerically stable, and supports varying material properties. The renderer and simulation are integrated to provide a system for surgical simulation. As the processing power of computers continues to increase, the methods presented in this dissertation, along with haptic feedback devices, will provide usable surgery simulators within ten years.

Dedicated to my wife, Sherri.

ACKNOWLEDGMENTS

First, I would like to thank my adviser, Dr. Roni Yagel, for providing the encouragement, motivation, and countless hours of discussion that helped me complete this dissertation. Without his encouragement and help, this dissertation would never have been completed. I would also like to thank the following who contributed directly to a portion of this work: Asish Law, Po-Wen Shih, and Naeem Shareef. Asish provided many of the ideas for the early version of the renderer and always pushed us to try one more technique to improve the algorithm. Po-Wen and Naeem provided the scientific data sets in a usable form for our algorithm. Don Stredney and Dennis Sessanna provided and segmented the medical data sets used for testing the simulation algorithm.

I would like to thank Dr. Wayne Carlson, Steve May, and Pete Carswell, who provided me the opportunity to work at ACCAD on a variety of graphics-related projects.

These experiences and the environment at ACCAD provided me with valuable experience.

My initial interest in graphics was sparked by Dr. Richard Parent and Dr. Kikuo Fujimura.

I would like to thank Rick for originally serving as my advisor and continuing to support me even when my work steered me away from his specialty areas. During my years at OSU, I had numerous discussions about various graphics-related and software engineering topics with a number of people. These discussions often sparked ideas and new levels of understanding. At the risk of leaving someone out, I would like to thank: Wayne Carlson, Pete Carswell, Roger Crawfis, Kikuo Fujimura, Meg Geroch, Scott King, Yair Kurzion, Asish Law, Matt Lewis, Nathan Loofbourrow, Steve May, Raghu Machiraju, Torsten Möller, Klaus Mueller, Rick Parent, Kevin Rodgers, Ferdi Scheepers, Dennis Sessanna, Naeem Shareef, Po-Wen Shih, Karan Singh, Steve Spencer, Ken Supowit, Ed Swan, Lawson Wade, Raphael Wenger, and of course, Roni Yagel.

Barb Helfer and Dennis Sessanna deserve special thanks for helping me generate the slides, images, and videos I needed for papers and conference presentations.

Finally, I would like to thank my wife Sherri and my parents Allan and Rose Reed for their constant encouragement. Their encouragement enabled me to continue working and see the light at the end of the tunnel.

VITA

May 11, 1969 ...... Born - Columbus, OH

1991 ...... B.A.(Math, CS), Wittenberg University

1993 ...... M.S. (CIS), The Ohio State University

1991-present...... Graduate Teaching and Research Associate, The Ohio State University

PUBLICATIONS

[1] Larry A. Viterna, Robert D. Green, and David M. Reed, “RSM 1.0 User’s Guide: A Resupply Scheduler Using Integer Optimization,” NASA Technical Memorandum 104380, May 1991.

[2] David M. Reed, Lawson Wade, Peter G. Carswell, and Wayne E. Carlson, “Particle tracing in curvilinear grids,” in Visual Data Exploration and Analysis II, Georges G. Grinstein, Robert F. Erbacher, Editors, Proc. SPIE 2410, pp. 120-128 (1995).

[3] Roni Yagel, David M. Reed, Asish Law, Po-Wen Shih, and Naeem Shareef, “Hardware Assisted Volume Rendering of Unstructured Grids,” Proceedings 1996 Symposium on Volume Visualization, San Francisco, CA, October 1996, pp. 55-62.

[4] John S. McDonald, M.D., Roni Yagel, Ph.D., Petra Schmalbrock, Ph.D., Don Stredney, David M. Reed, and Dennis Sessanna, “Visualization of Compression Neuropathies Through Volume Deformation,” Proceedings Medicine Meets Virtual Reality 1997 (MMVR’97).

FIELDS OF STUDY

Major Field: Computer and Information Science (Computer Graphics)

Minor Field: Algorithms

Minor Field: Parallel Computing

TABLE OF CONTENTS

Page

Abstract ...... ii

Dedication...... iv

Acknowledgments...... v

Vita ...... vii

List of Tables ...... xii

List of Figures ...... xiii

Chapters:

1. Introduction ...... 1

1.1 Volume Rendering ...... 4

1.2 Volume Deformation ...... 6

1.3 Existing Methods ...... 7

1.4 New Solutions ...... 11

1.5 Overview of Dissertation ...... 12

2. Rendering and Deformation of Volumetric Data ...... 13

2.1 Volume Rendering ...... 14

2.1.1 Grid Taxonomy ...... 17

2.1.2 Voxel Space Methods ...... 20

2.1.3 Pixel Space Methods ...... 22

2.1.4 Hybrid Methods ...... 24

2.2 Acceleration Techniques ...... 25

2.2.1 Regular Grid Voxel Space Techniques ...... 26

2.2.2 Regular Grid Image Space Techniques ...... 30

2.2.3 Irregular Grid Object Space Techniques ...... 38

2.2.4 Irregular Grid Image Space Techniques ...... 43

2.2.5 Summary ...... 44

2.3 Deformation ...... 44

2.3.1 Physically-Based Deformation ...... 48

2.4 Deformation For Medical Applications ...... 50

2.4.1 Surgery Simulation ...... 52

2.4.2 Surgical Planning ...... 56

2.4.3 Summary ...... 58

3. Slice Based Volume Rendering ...... 59

3.1 Overview of Slice Based Method ...... 61

3.2 Initialization Stage ...... 64

3.3 Edge and Polyhedron Slicing ...... 67

3.4 Polygon Forming and Rendering ...... 70

3.4.1 Clipping Problem ...... 71

3.4.2 Rendering Different Data Types ...... 73

3.5 Polygon Forming Details ...... 76

3.5.1 Convex Cells ...... 76

3.5.2 Concave Cells...... 78

3.6 Rendering Deforming Data ...... 87

3.7 Extensions ...... 88

3.7.1 Adaptive Slicing...... 89

3.7.2 Progressive Slicing...... 91

3.7.3 Stored Slices...... 91

3.8 Advantages and Disadvantages of Method...... 93

4. Fast Physically-Based Volume Deformation ...... 96

4.1 Overview of Physically-Based Simulation ...... 98

4.2 Simulation Grid Setup ...... 100

4.3 Simulation Algorithm ...... 102

4.4 Incisions ...... 112

4.4.1 Incision Method 1 ...... 114

4.4.2 Incision Method 2 ...... 116

4.5 Simulation Issues Related to the Renderer ...... 119

4.6 Advantages and Disadvantages of Method ...... 121

5. Results ...... 123

5.1 Rendering Scientific Data Sets ...... 124

5.2 Deformation of Volumetric Data ...... 129

6. Conclusions ...... 135

6.1 Contributions ...... 135

6.2 Future Work ...... 137

List of References...... 140

LIST OF TABLES

Table Page

Table 5.1 Timings in seconds for 640x480 images of blunt fin ...... 126

LIST OF FIGURES

Figure Page

Figure 2.1 Taxonomy of grids used in medical and scientific applications ...... 18

Figure 2.2 Pseudo code for voxel space algorithm ...... 21

Figure 2.3 Pseudo code for ray casting algorithm ...... 23

Figure 3.1 Relation between ray casting and slice based method ...... 63

Figure 3.2 Example of placing edges in buckets ...... 67

Figure 3.3 Pseudo code for rendering algorithm ...... 72

Figure 3.4 Topologically unique cases of a plane-tetrahedron intersection ...... 77

Figure 3.5 Topologically unique cases of a plane-convex hexahedron intersection .... 78

Figure 3.6 Polyhedron that yields a polygon with a hole when sliced ...... 79

Figure 3.7 Different slices through cell imply different points for the faces ...... 81

Figure 3.8 Slice through concave polyhedra ...... 82

Figure 3.9 Triangulation imposed on face of cell ...... 83

Figure 3.10 Exploded view of cells matching up ...... 84

Figure 3.11 Cells and polygon slices matching up ...... 85

Figure 3.12 Warping an image by stretching the rectangular polygon ...... 88

Figure 4.1 Pseudo code for basic Euler method for a mass spring system ...... 103

Figure 4.2 Large force applied to point a causes the grid to lose its shape and stability ...... 106

Figure 4.3 Safe regions in which each point can move ...... 108

Figure 4.4 Pseudo code for simulation algorithm ...... 113

Figure 4.5 Incision at a vertex in 2D ...... 115

Figure 4.6 Incisions at an arbitrary location on an edge in 2D ...... 116

Figure 4.7 Springs added to stabilize cut cell ...... 117

Figure 4.8 3D example of second incision method ...... 118

Figure 5.1 Blunt fin images ...... 127

Figure 5.2 Slice through original and deformed images ...... 130

Figure 5.3 Deformation caused by force applied to skin on right side of image ...... 131

Figure 5.4 Two images from simulation of an incision ...... 132

CHAPTER 1

INTRODUCTION

The field of computer graphics continues to become more important as the field

expands to many application areas such as scientific visualization and biomedical

visualization. Originally, computer graphics mainly used 2D surface manifold

representations (polygonal or parametric surfaces) to model data. For some applications, it

is much more natural to represent the data as 3D volumes. Volumetric data is represented

as scalar or vector data at a set of points in space. Examples of this include data from

computational fluid dynamics (CFD) simulations and medical data such as computed

tomography (CT) and magnetic resonance imaging (MRI). The points may be organized

in a regular lattice, as is common in CT or MRI data, or they may be irregularly spaced, as is

common in CFD simulations. Volume rendering refers to the field of generating two

dimensional images from volumetric data.

In the past few years, interest in the use of volumetric graphics techniques for many applications has increased. This is mainly due to the rapid increases in the technological capabilities of computers. Until recently, computers did not have the computational power to adequately support many of these applications. Computers have been used for years to generate static images of medical data from computed tomography

(CT) and magnetic resonance imaging (MRI). As the power of computers has increased,

interacting and deforming these data sets has become feasible. This has led to interest in

many applications that require manipulating volumetric data sets. These applications

include volumetric modeling and surgical simulation. Because of the many potential

benefits, there is significant interest in surgery simulation from both the medical community and the computer graphics community.

Currently, surgical training involves using cadavers and watching and assisting experienced surgeons. There are several problems with using cadavers. Because of the changes that occur to the body after death, performing an operation on a cadaver does not accurately simulate performing the operation on a live human. It is expensive to prepare and store the number of cadavers needed to train many students. There are not enough cadavers donated to medical schools to allow unlimited practice by students. Watching and assisting an experienced surgeon on a real operation is a valuable and necessary part of training, but also has limitations. A mistake by the trainee could result in catastrophic problems for the patient. Also, the trainee may not see many anomalies or complications that do not occur frequently. Surgery training on a computer can help solve these problems. Once the surgery simulation system is built, the expense involved with a trainee using it is minimal. There is no risk of a trainee harming a real patient while practicing operations on the simulator. The simulator can be set up to allow a user to practice an operation with a rare complication many times. Of course, in order for the surgery simulation to be useful, it must be realistic and also the simulation must be interactive

enough to be useful; however, it does not have to be completely accurate. As long as the

simulation is visually realistic, the actual accuracy of every facet of the simulation does

not matter.

Another related application is surgical planning. The goals and requirements of a

surgical planning system are similar to a surgical simulation system, but there are several

important differences. The goal of surgery planning is to try to find the best way to

perform the operation so that the best possible results are achieved. Because of this, exact

accuracy is important; however, interactive speeds are not as important. It may be acceptable to set up a simulation and wait for an hour to determine the results; however, if

the simulation takes too long, it may not be a useful tool for trying different variations of the operation.

Surgical simulation and planning require two underlying technologies: rendering and deformation. Deformation is needed to represent the changes that occur to the anatomy during the simulation. Rendering methods are used to generate the set of images representing the surgery from the deforming anatomical representations created by the simulation. Section 1.1 introduces the area of rendering, specifically volume rendering, because it is more suitable for surgical applications. Section 1.2 introduces the area of volume deformation. Section 1.3 briefly describes existing methods for volumetric rendering and deformation. Section 1.4 introduces the new solutions to these problems

presented in this dissertation and Section 1.5 provides an outline of the dissertation.

1.1 Volume Rendering

Originally, the field of computer graphics represented objects using surface

representations (polygons or parametric surfaces). Surface representations have the

advantages of a compact representation and they can be rendered quickly; however, they

also have a number of limitations. Many kinds of data are more naturally represented as

sampled data (e.g., CT, MRI, and CFD). CT and MRI data are usually stored as data

values at regularly spaced intervals in 3D space. The locations of these values are

connected in a regular cubical fashion to form a volumetric grid. CFD data is often stored

as data values at locations that are irregularly spaced. Connecting these locations requires

an irregularly shaped grid. To render sampled data using polygonal or parametric surface

methods, a surface representation must first be extracted from the volume data. Typically,

the surface representing a specific data value or set of data values is extracted.

Extracting isosurfaces is a reasonable approach for some applications, but for

many applications it results in the loss of too much information. Extracting an isosurface

ignores areas of the sampled data that do not have this value. The main problem for many

applications is that information about the interior of an object is lost, since only the outer shell of the object is represented by the isosurface. For biomedical applications such as surgery simulation, information about the data below the surface is at least as important, if not more important, than the surface data. Volume rendering is much more suitable for applications that involve visualizing the interior of objects, especially when the data is originally represented as sampled data.

The approach of volume rendering is to directly render the sampled data without first generating an intermediate surface representation. The sampled data values are mapped to colors and opacities to produce a rendered image that represents the entire volume. Details of existing volume rendering methods are presented in Chapter 2. The disadvantages of volumetric representations are that they generally require more memory than surface representations and rendering volumetric data is more computationally expensive than rendering surface data. Until recently, these problems made it impossible to produce images at interactive rates; however, advances in the speed and memory capabilities of computer hardware, along with new algorithms for volume rendering, have made volume rendering more attractive for many applications. The larger memory requirement is becoming less of a problem as the amount of memory in most current computers is sufficient to hold reasonably sized volumetric data, but the computational requirements for fast volume rendering, especially for irregularly shaped volumetric grids, are still a problem on most computers.

Some work has been done to develop specialized graphics hardware for rendering volumetric data that is organized as a regular grid. Specialized hardware can speed up the rendering process significantly, but has a high cost. In general, it is not cost effective to develop customized hardware for most applications. Additionally, hardware for rendering

irregular grids does not currently exist. Specialized hardware for rendering polygons exists

and is becoming more common. Computer manufacturers such as Silicon Graphics Incorporated (SGI) manufacture a number of workstations that provide hardware support

for polygon rendering. Also, a number of companies have begun making fairly

inexpensive video boards for personal computers that provide hardware support for

polygon rendering. This dissertation presents a volume rendering algorithm that can take

advantage of polygonal rendering hardware.

1.2 Volume Deformation

The deformation of volumetric data is useful in many applications such as modeling and surgical simulation. For modeling, the goal is to provide a deformation method that allows the most flexibility with the least amount of user specification. There are a number of different deformation techniques that are appropriate for modeling, including free-form deformations and physically-based modeling. The different deformation approaches are described in Chapter 2. For applications such as surgery simulation, the goal is to develop a system that realistically simulates the deformations that tissue undergoes during surgical operations. This requires a deformation method that reacts to user-specified deformations such as probing and cutting. Not only does the tissue deform while there is user interaction, but in many cases, the tissue continues to deform after the user interaction ceases. A simple example is pressing the tissue and releasing.

Once the user stops pressing, the tissue typically responds by restoring to its natural resting state. Also, when making an incision, the tissue often continues to spread apart for a short time period after the cut is made.

Deformation of data usually requires changing the representation of the data.

There are some deformation methods that change the way an object is rendered rather than directly changing the object representation; however, these methods are much less common. Continuous deformations such as stretching and bending are the simplest type of deformations. For polygonal and volumetric data, these types of deformations involve changing the vertex locations. Deformations such as cutting or tearing an object are more difficult to implement since they involve changing the topology of the data. Topology changing deformations require modifying both the vertex locations and the connections between the vertices. In many cases, it may require adding vertices or deleting vertices.

For surgical applications, a deformation method that continuously deforms according to the state of the object and responds in a realistic manner is required.

Physically-based deformation methods meet these criteria and are discussed in detail, along with specific methods for surgery simulation, in Chapter 2.

1.3 Existing Methods

The field of volume rendering of regular grids is much more mature than that of irregular grids. The simple structure of regular grids allows faster algorithms due to the coherency of the data. Irregular grids destroy much of the coherency and require more complicated rendering algorithms that are slower.

Volumetric rendering algorithms are classified into image order (also referred to as ray casting) and object order algorithms. Details of these two classifications are provided in Chapter 2. Irregular grids can be rendered using a ray casting method but this is more expensive than using an object order method. One of the major problems with rendering irregular grids using object order methods is sorting the grid cells in a depth order. The depth sort is required so that occlusion resolution and the compositing of partially transparent cells is performed correctly. Depth sorting requires O(n log n) time for n cells.

For large grids, this can consume a lot of time; however, an even more serious problem is that many irregular grids cannot be depth sorted and require splitting some cells to create a correct depth ordering. Currently, only heuristic methods for efficiently splitting cells exist and they are not adequate in all cases. Splitting cells can drastically increase the number of cells and thus, increase the rendering times and memory requirements. Once a depth order is achieved, the cells are mapped in a back-to-front or front-to-back order to the screen pixels. Details of existing volumetric rendering methods for both regular and irregular grids are discussed in Chapter 2.

Surgical simulation systems use deformation techniques that are a member of a class called finite element methods. Finite element methods model continuous objects by defining a discrete set of nodes in the object. A grid is formed by connecting nodes in the set. The simulation method then calculates values for the nodes, and values between the

nodes are determined by interpolation. Finite element methods are commonly used in

CFD simulations. For example, to study the air flow over a wing, a set of grid points is defined around the wing and connections are made between the grid points. Setting up the

finite element grid and defining the initial conditions requires a significant amount of expertise. Defining the grid requires knowledge of the simulation method to create a grid that produces a numerically stable and accurate simulation. The CFD simulation uses initial conditions and physical equations to calculate values such as temperature, pressure, and air flow velocity at the grid points at various time increments. Using finite element methods for deformation requires specifying a set of nodes and a simulation method that calculates how the nodes move based on the material properties of the nodes and user specified interactions.
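
Chapter 4 develops the dissertation's actual simulation algorithm (Figure 4.1 there gives the basic Euler method for a mass spring system). Purely as a generic illustration of the node-and-connection idea just described, the following C++ sketch takes one explicit Euler step for a set of nodes joined by springs; the structure layout, stiffness, and damping parameters are illustrative assumptions, not the dissertation's implementation.

#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
static double length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

struct Node {            // one simulation node
    Vec3 pos, vel;       // current position and velocity
    double mass;         // assigned from the material at this location
};

struct Spring {          // connection between two nodes
    int a, b;            // node indices
    double rest, k;      // rest length and stiffness (material property)
};

// One explicit Euler time step: accumulate spring forces, then integrate.
// The damping factor (< 1.0) crudely models energy loss in the tissue.
void eulerStep(std::vector<Node>& nodes, const std::vector<Spring>& springs,
               double dt, double damping) {
    std::vector<Vec3> force(nodes.size(), Vec3{0.0, 0.0, 0.0});
    for (const Spring& s : springs) {
        Vec3 d = sub(nodes[s.b].pos, nodes[s.a].pos);
        double len = length(d);           // assumes len > 0 (no collapsed edges)
        // Hooke's law: force proportional to extension, along the spring.
        Vec3 f = scale(s.k * (len - s.rest) / len, d);
        force[s.a] = add(force[s.a], f);
        force[s.b] = sub(force[s.b], f);
    }
    for (std::size_t i = 0; i < nodes.size(); ++i) {
        Vec3 accel = scale(1.0 / nodes[i].mass, force[i]);
        nodes[i].vel = scale(damping, add(nodes[i].vel, scale(dt, accel)));
        nodes[i].pos = add(nodes[i].pos, scale(dt, nodes[i].vel));
    }
}

Explicit Euler integration of this kind is simple and fast but only conditionally stable, which is why Chapter 4 modifies the basic integration to keep the time step and grid well behaved.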

Existing surgical simulations use polygonal models to represent a subset of the bones, organs, and other tissues. Some methods use predefined artificial models of the anatomy that are designed using a modeling tool. The finite element model is set up using the polygonal surface vertices or a subset of the vertices along with other points to create the simulation grid. Knowledge of the underlying simulation method is required to define the grid and connections that will allow the simulation method to be numerically stable and produce accurate results. In the case where the same anatomical model is always used, this preprocessing step can be performed carefully by someone knowledgeable of grid generation techniques; however, this has the disadvantage of requiring an expert in grid generation to perform this time consuming task whenever the model is changed. In many

cases, it is desirable to simulate a surgery using the anatomical data for a specific patient.

It is not feasible to require an expert user to regenerate the simulation grid for many

different anatomical data sets.

The problem with using surface-based representations of the anatomical data is that

it does not easily support deformations such as incisions. When a cut is made in a

polygonal model, there is no interior data. It is possible to define a set of polygonal

surfaces to represent various interior surfaces, but it still cannot represent a continuous

interior as volumetric models can. To make the simulation look realistic when a cut is

made, models of the tissues below the surface must also be included. Because the

polygonal models represent a discrete set of surfaces, it is difficult, if not impossible, to

include all the tissues below the surface. Existing methods model the set of tissues they classify as important but leave out many of the tissues. This results in images that do not

look realistic. Generally, these methods only model the bones and a subset of the organs.

This results in images that render the inside of the human body as mostly hollow. This is because the fatty tissue and many muscles are usually not modeled.

The other problem with existing methods is that they are computationally expensive. Therefore, they are not feasible for applications such as surgery simulation that require interactive visualization of the deformations. Details of existing deformation methods are presented in Chapter 2.

1.4 New Solutions

This dissertation extends the field of volume rendering and the field of volume

deformation to provide a framework for surgical simulation. A new volumetric rendering

algorithm for irregular grids is presented that overcomes the problems discussed in

Section 1.3 to produce good quality images faster than existing methods. The method

avoids the problem of depth sorting the cells, which cannot always be done, and replaces

the O(n log n) sort with an O(n) operation that produces a correct depth order. The method

can also take advantage of specialized graphics hardware, specifically polygonal rendering

hardware which is becoming more common in current computers, to render irregular

grids.
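
The dissertation's O(n) ordering is developed in detail in Chapter 3. Purely as an illustration of why a linear-time ordering is plausible, the C++ sketch below bins cells into slice buckets by an integer slice index in a single pass, instead of comparison-sorting them; the Cell record and slice index are hypothetical placeholders, not the actual data structures of Chapter 3.

#include <vector>

// Hypothetical cell record: only the slice index matters for ordering.
struct Cell { int firstSlice; /* ... geometry and data values ... */ };

// Distribute n cells into numSlices buckets in O(n) time. Visiting the
// buckets in slice order then yields an ordering without an O(n log n)
// comparison sort.
std::vector<std::vector<int>> bucketBySlice(const std::vector<Cell>& cells,
                                            int numSlices) {
    std::vector<std::vector<int>> buckets(numSlices);
    for (int i = 0; i < static_cast<int>(cells.size()); ++i)
        buckets[cells[i].firstSlice].push_back(i);
    return buckets;
}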

A new physically-based simulation method that uses volumetric models to avoid

the problems caused by a lack of complete data below the surfaces is presented. The design objectives of the method are that it be fast, numerically stable, and simple enough for naive users to set up. The simulation algorithm is a basic integration method with a few modifications to make it less computationally expensive and more numerically stable.

These modifications imply that the method is not completely physically-based, but for many types of deformations, it produces realistic looking images. The simulation method avoids the problem of requiring a sophisticated user to set up the simulation grid by starting with a regular grid. The material properties of the volumetric data are used to assign values to the simulation grid nodes via an automatic process.

The simulation method supports simple continuous deformations such as stretching and squashing along with two methods for introducing incisions. The method is integrated with the new volume rendering algorithm to provide fast manipulation of reasonably sized volumetric data sets.

1.5 Overview of Dissertation

This dissertation presents background material and a survey of existing methods for volume rendering and volumetric deformation for surgical applications in Chapter 2.

Chapter 3 presents the new algorithm for volume rendering of irregular grids. Chapter 4 describes the physically-based simulation method and how it is integrated with the renderer. Chapter 5 presents results for the renderer and simulation method with real-world data sets used as input. Chapter 6 summarizes the contributions of this dissertation and suggests areas for future work.

CHAPTER 2

RENDERING AND DEFORMATION OF

VOLUMETRIC DATA

In order to achieve interactive volumetric deformation for applications such as surgery simulation and volumetric modeling, both the rendering of the volumetric data set and the calculation of the deformation must be computed at interactive rates. This chapter presents previous work in the areas of volumetric rendering and physically-based deformation that are related to the techniques described in this dissertation.

A brief introduction to volume rendering is presented in Section 2.1. This includes the various types of grids commonly found in medical and scientific applications and the basic algorithms for rendering these grids. Section 2.2 describes acceleration techniques and fast algorithms for rendering both regular and irregular grids. In Section 2.3 a description of deformation techniques is presented. The section focuses on physically-based methods, since the method presented in this dissertation fits into this category.

Section 2.4 presents related work in the area of surgery simulation and surgery planning.

2.1 Volume Rendering

The interest in volumetric graphics, a subfield of computer graphics, has

significantly grown in the last decade. Traditional computer graphics algorithms used

surfaces defined by polygons or parametric equations to represent objects. Volumetric

graphics represents data as a set of values at various locations in space. The data values are

commonly referred to as voxels (volume elements). Often, these data values are organized

as a regular three dimensional lattice. Some common examples of these include data

obtained from computed tomography (CT) and magnetic resonance imaging (MRI). For

other applications such as computational fluid dynamics (CFD), the data is usually

organized in an irregular fashion. A detailed description of different grid organizations is

provided in Section 2.1.1. The process of generating two dimensional images of

volumetric data is referred to as volume rendering. Kaufman [37] provides an excellent

introduction to the field.

There are a number of trade-offs between algorithms that rely on models using

surfaces and models using volumes. Surface models can generally be represented more

compactly and with fewer aliasing effects since the surface can be represented as continuous

polygons. Because of the compact representation, transforming the data can be performed

more quickly than transforming volumetric data. The major advantage of volumetric graphics is that it provides a defined interior. Other advantages are that it is a more natural way to represent sampled data and that it is less sensitive to scene complexity. This is important in many applications. With surface graphics, there is generally not a model of

the object beneath the surface. For many medical applications the information below the surface is at least as important, if not more important, than the surface information.

Because of this major advantage, volumetric graphics is becoming more popular even though in general, rendering and other operations on volumetric data sets are more computationally expensive than similar operations on surface models.

Drebin et al. [21] describe the basic steps of volume rendering. First an optional segmentation process is applied to the raw data values to classify different material properties. For example, in medical data, the segmentation process attempts to classify voxels as either skin, bone, muscle, fat, or other tissues based on the density value and the values of surrounding voxels. At this time, segmentation is usually a semi-automatic process, requiring some human input to achieve accurate results. A transfer function is applied to voxel values to map each raw voxel value or the segmented value to an RGBa

(red, green, and blue color channels and opacity) value. If available, the segmentation information may be used to apply different color values to the different material properties. The color values for each voxel are then modified by a process termed shading.

Shading modifies the color values based on the viewpoint using a lighting model. Shading typically uses the gradient value of the voxel to determine surface normals and boundaries between different materials in the data. The voxels are usually shaded to emphasize these boundaries. Finally, these shaded color values are used to generate a two dimensional image of the volume for a specific view point.
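
The classification and shading steps just described can be made concrete with a small C++ sketch. The code below maps a raw 8-bit voxel value through a lookup-table transfer function, estimates the gradient by central differences, and applies a simple diffuse shading term; the table contents, volume layout, and lighting model are assumptions for illustration, not the specific choices of Drebin et al.

#include <cmath>
#include <cstdint>

struct RGBA { float r, g, b, a; };   // color channels and opacity

// Classification: the transfer function, here a 256-entry lookup table
// indexed by the raw voxel value (or a segmentation label).
RGBA classify(std::uint8_t value, const RGBA table[256]) {
    return table[value];
}

// Gradient estimation by central differences; the gradient approximates
// the surface normal at boundaries between materials. The volume is a
// dim*dim*dim array and (x, y, z) is assumed to be an interior voxel.
void gradient(const std::uint8_t* volume, int dim,
              int x, int y, int z, float g[3]) {
    auto at = [&](int i, int j, int k) {
        return static_cast<float>(volume[(k * dim + j) * dim + i]);
    };
    g[0] = 0.5f * (at(x + 1, y, z) - at(x - 1, y, z));
    g[1] = 0.5f * (at(x, y + 1, z) - at(x, y - 1, z));
    g[2] = 0.5f * (at(x, y, z + 1) - at(x, y, z - 1));
}

// Shading: modulate the classified color by a diffuse term so that
// material boundaries facing the light are emphasized. The light
// direction is assumed to be normalized.
RGBA shade(RGBA c, const float g[3], const float light[3]) {
    float len = std::sqrt(g[0] * g[0] + g[1] * g[1] + g[2] * g[2]);
    if (len > 0.0f) {
        float diffuse = std::fabs(g[0] * light[0] + g[1] * light[1] +
                                  g[2] * light[2]) / len;
        c.r *= diffuse; c.g *= diffuse; c.b *= diffuse;
    }
    return c;
}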

As in traditional surface-based computer graphics, there are two basic categories of

algorithms for generating a two dimensional image from the three dimensional volumetric

data. These categories are object space and image space. Object space methods traverse

the objects in the scene and determine which pixels each object affects. Image space

methods traverse each pixel of the image and determine which objects affect each pixel.

Voxel space (the counterpart to object space for surface models) algorithms process each

voxel and determine the color contribution for the pixels of the image that it affects. In

order to perform compositing correctly, the voxels must be processed in a correct depth

order. There are many different techniques for determining the contribution for a voxel:

generally, there is a trade-off between accuracy and speed. More details of the issues

involving voxel space methods are provided in Section 2.1.2.

Image space methods process each pixel and determine the set of voxels that

contribute to that pixel. For each voxel along the sight ray of the pixel, a value must be

sampled, interpolated, and composited with the current value for the pixel. Image space

algorithms require determining the first cell that the ray for each pixel intersects. From this

cell, the ray is traversed through neighboring cells until it exits the grid. Because these

operations are expensive, image space methods are typically slower than voxel space

methods. More details of the issues involving image space algorithms are provided in

Section 2.1.3. There are also some approaches that combine the two traversal schemes and are referred to as hybrid methods. In hybrid methods, the voxels are traversed and mapped to the screen in object order while the exact contribution of the voxel to the screen is done

by traversing the pixels in the area covered by the voxel. Section 2.1.4 presents details of a hybrid approach.

The volumetric rendering algorithm presented in this dissertation can be applied to many different grid types. A taxonomy of the different types of grids commonly found in scientific applications is presented in the next section.

2.1.1 Grid Taxonomy

The field of grid generation produces various classes of grids as shown in Figure

2.1. This figure shows the grids in 2D so it is easier to discern the differences. The rendering algorithm presented in this dissertation assumes that the grid is composed of cells that are bounded by a set of general simple polygons (i.e., non-intersecting, without holes, potentially concave, and possibly non-planar). Grids are used in a wide variety of applications. Because the literature for each of these applications was originally developed independently, the taxonomy of grid types varies in different disciplines. The taxonomy presented here is from Speray and Kennon [79]. Rectilinear grids are composed of a set of connected cells of rectangular prism (brick) shape (Figure 2.1 (a-c)). The set of cells completely tessellates a rectangular cartesian sub-space. Restricting the cells to be homogeneous rectangular prisms yields the regular grids which are very common in biomedical applications when scanning resolutions are unequal in all dimensions (Figure

2.1(b)). Imposing a further restriction on the cells to be cubical yields the cartesian grids

(Figure 2.1(a)). Structured grids, commonly found in various simulation applications such

as computational fluid dynamics (CFD), result from applying a non-linear transformation to a rectilinear grid, yielding a grid composed of hexahedral cells (Figure 2.1(d)). A structured grid is not necessarily convex (Figure 2.1(d)) and may have holes, but it is connected. All the above grid types maintain an implicit neighborhood connectivity. A grid that does not have such implied connectivity is called an unstructured grid. In the general case, each cell can be an arbitrary polyhedron (Figure 2.1(e)); however, in some applications the unstructured grid is composed of only tetrahedral cells (Figure 2.1(f)).

(a) cartesian (b) regular (c) rectilinear

(d) structured (e) unstructured (f) unstructured tetrahedral

Figure 2.1 Taxonomy of grids used in medical and scientific applications

Two major advantages of tetrahedral grids are that the faces of cells are simple,

convex, and planar polygons (triangles) and that the basic cell is always convex.

Rendering tetrahedra is generally simpler than rendering hexahedra because of the

tetrahedron’s simple shape. Arbitrary polyhedra are even more difficult to render because

of the many possible different shapes.

Another major advantage of a tetrahedral grid is that other types of grids can be

converted to it. Structured grids can very easily be converted into tetrahedral grids where

each hexahedral cell yields five tetrahedra [75]. Unstructured grids can also be converted

into tetrahedral grids by applying a tetrahedration algorithm to the set of grid points (e.g.,

a Delaunay tetrahedration). The disadvantage of subdividing a grid into tetrahedral grids is

that more cells are created. The approach presented in this dissertation can be used to

render any polyhedral grid and does not assume any specific cell type or connectivity.
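
As a concrete illustration of the structured-to-tetrahedral conversion mentioned above, the C++ sketch below splits one hexahedral cell into five tetrahedra using the common decomposition into one central and four corner tetrahedra. The vertex numbering convention is an assumption here (bit 0 of the array index is x, bit 1 is y, bit 2 is z), and this is the standard textbook split rather than necessarily the exact scheme of [75].

// Split a hexahedral cell into five tetrahedra. v[0..7] holds the cell's
// vertex indices, ordered so that index bit 0 is x, bit 1 is y, bit 2 is z.
// Writes 5 tetrahedra, each as 4 vertex indices.
void hexToFiveTets(const int v[8], int tets[5][4]) {
    // Central tetrahedron formed by four mutually non-adjacent corners.
    const int t0[4] = {v[0], v[3], v[5], v[6]};
    // Four corner tetrahedra, each joining one remaining corner to its
    // three neighbors among the central vertices.
    const int t1[4] = {v[1], v[0], v[3], v[5]};
    const int t2[4] = {v[2], v[0], v[3], v[6]};
    const int t3[4] = {v[4], v[0], v[5], v[6]};
    const int t4[4] = {v[7], v[3], v[5], v[6]};
    const int* all[5] = {t0, t1, t2, t3, t4};
    for (int t = 0; t < 5; ++t)
        for (int i = 0; i < 4; ++i) tets[t][i] = all[t][i];
}

Note that neighboring hexahedra must use mirrored decompositions so that the triangulations of shared faces match; otherwise cracks appear along cell boundaries.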

The most obvious way to render an irregular grid is to resample it into a regular grid and then render it with available methods. When resampling regular grids, the

Shannon sampling theorem can be used for the development of theoretically sound procedures [65]. While the corresponding theory for irregular data is still being investigated [22], several practical solutions have been developed. One technique for resampling is to traverse the volume with rays and store the samples at regularly spaced intervals in a 3D buffer. Another possibility is to intersect the irregular grid with boxes that

comprise a regular grid. In each box, the size of the volumes (partially) residing inside the

box is computed, and a weighted sum of their contribution is composed.

Some structured grids are the result of applying a deformation to a rectilinear grid.

For these types of grids, it is also possible to trace the rays in the regular grid and map the

sample locations into the deformed grid [25]. These variations on the resampling approach

have several difficulties. First, the resampling process has to be performed very carefully

in order to maintain data integrity and quality. The resampling is an expensive operation

and thus is not a reasonable approach to use to handle grids that are dynamically

deforming. Another difficulty stems from the fact that cells may vary in their size. In order

to maintain the resolution of the smallest cells, the regular grid may need to be created at

an impractical resolution. Finally, this approach (except as described in [25]) calls for two

resampling operations, once when mapping into the regular grid and once when the

regular grid is rendered. The alternative is to directly render the irregular volume. The

approach presented in this dissertation belongs to the hybrid category. In Section 2.1.2,

Section 2.1.3, and Section 2.1.4 the basic issues associated with using these three

techniques for rendering irregular grids are discussed.

2.1.2 Voxel Space Methods

Voxel-space methods are also called projective or feed-forward methods. The viewing transformation matrix is applied to all voxels (enumerated in some order), thus

providing an intermediate volume that is then projected onto a 2D screen. The basic

pseudo code for voxel space algorithms is listed in Figure 2.2.

1. Transform the voxel vertices from object space to image space.
2. Sort the transformed voxels in depth order.
3. Render each voxel in a back-to-front or front-to-back order.

Figure 2.2 Pseudo code for voxel space algorithm

The implementation of step 1 is straightforward, but step 2 poses some major

difficulties. In general, it is not always possible to sort a collection of tetrahedral cells.

Arbitrary (possibly concave) polyhedra present even more problems with regards to

sorting. Only acyclic meshes and polyhedral meshes generated by Delaunay tetrahedration

can be depth sorted. Max et al. [58] have presented a topological sort for acyclic grids

composed of convex polyhedra with planar faces, and a similar algorithm is described in

[97]. Various approaches for handling cells that cannot be depth sorted have been presented; however, none of them are completely adequate since they rely on heuristic

methods and often generate many more cells since they require splitting a cell into multiple cells. Efficient depth sorting of arbitrary grids is still an open question.

Step 3 can also be implemented in various ways, such as rendering faces, projection, and splatting. Detailed descriptions of these approaches and variations on them

are presented in Section 2.2.1. The major advantage of the voxel order approach is that

many operations can be performed by available graphics hardware. Vertex transformation

(step 1), as well as some rendering operations (step 3), can be performed by graphics

hardware; however, the voxel space approach suffers from a few disadvantages. First is

that the whole process has to be repeated when the viewpoint changes, since the sorting

(step 2) is view dependent. A more serious difficulty is that the sort operation can be

applied only to limited types of grids as mentioned above. In this dissertation a hybrid

approach that overcomes these difficulties is presented.

2.1.3 Pixel Space Methods

Pixel-space methods are also called backward-feed methods or ray casting. Kajiya

[36] and Sabella [72] present some of the early work in this area. The algorithm casts ray(s) from the eye through each screen pixel. Pseudo code for a ray casting algorithm is listed in Figure 2.3.

for each pixel:
1. Find the first cell the ray intersects.
2. Search the cell’s faces to find the exit point and determine the next cell.
3. Between the entry and exit points, sample and interpolate values, and composite them with the cumulative ray value.
4. If there are neighboring cells, use the exit point for this cell as the entry point to the next cell and return to step 2. Otherwise, if there are non-connected cells farther along the ray, return to step 1 and use the exit point of this cell as the starting point for the ray.

Figure 2.3 Pseudo code for ray casting algorithm

For regular grids, the ray casting approach is relatively simple. The regular structure allows the use of coherence and also makes the sampling along the ray simple; however, as Garrity [27] points out when presenting a ray casting algorithm for irregular grids, this approach suffers from several difficulties. First, the calculation of step 1, as well as the determination of whether a cell is the last cell intersected by a ray (step 4) in the case of non-convex grids, can be very difficult and time consuming. A possible solution is to embed the boundary cells in a regular space-subdivision grid [27]. Another major difficulty is that, in order to perform step 2 efficiently, neighborhood information is required. Therefore, for

unstructured grids, the algorithm must be preceded by a process that calculates a list of all

the neighbors for each cell. A number of acceleration techniques for ray casting regular

grids have been presented. A review of these approaches and whether or not they can be

applied to irregular grids is presented in Section 2.2.2. In Section 2.2.4 image space

methods designed specifically for irregular grids are discussed.

The main problem with ray casting of irregular grids is that the quality of images is

low due to point sampling in both image space and object space. In fact, ray casting is

equivalent to resampling into a regular grid. Cells that fit between rays may be completely

missed and not included in the contribution of any pixel. This is even more of a problem in perspective viewing since the rays diverge as they move farther away from the eye point. A

possible direction for research is to explore techniques for rendering cylinder-shaped (or cone-shaped in perspective) rays rather than rays that represent zero width lines. An equivalent technique is to use a larger reconstruction filter when determining the value for a sample. Finally, pixel-space rendering is expected to be slow, especially when step 2 involves sampling arbitrarily shaped cells.

2.1.4 Hybrid Methods

Upson and Keeler [90] present a method for volume rendering that is a hybrid approach that combines both object and image space techniques. In this method, the cells are processed in a depth order. The depth order algorithm of Frieder et al. [24] (discussed in Section 2.2.1) can be used to generate the depth order for regular grids. Each cell is processed using an image space method. The set of scan lines that the voxel contributes to is determined. For each scan line the voxel contributes to, the voxel is intersected with the plane corresponding to that scan line (i.e., the plane perpendicular to the view plane that contains the pixels of the scan line). The intersection produces a convex polygon (since the voxel is a regular convex polyhedron). The pixels along the scan line are grouped into spans defined by pixels that have the same front edge and back edge from the polygon. For regular grids, there will be at most five spans along a scan line for a cell. The values for these pixels are then integrated from front to back for each span of the polygon.

The hybrid algorithm of Upson and Keeler has been extended to irregular grids by Giertsen [28]. Giertsen’s implementation assumes the cells are convex hexahedral cells, so the polygons generated by the intersection of the scan line plane and the cells are convex. This allows the same (as for regular grids) simple division into at most five spans for each cell. As with Giertsen’s method, the approach presented in the next chapter is also based on incremental slicing; however, unlike Giertsen’s method, the algorithm presented can employ available rendering hardware to achieve interactive rendering speeds, is not as sensitive to image resolution, and supports adaptive and progressive rendering.

2.2 Acceleration Techniques

Common volumetric data sets range in size from 64³ to 512³ voxels. Processing the large number of voxels is typically a very time consuming operation. In order to achieve reasonable rendering times for volumetric data sets, a number of acceleration techniques have been proposed in the literature. Many of these methods are software techniques that limit the number of voxels that are processed or take advantage of coherency while processing the voxels, while other approaches take advantage of available graphics hardware to reduce the rendering times. Some methods trade image quality for rendering speed. In Section 2.2.1 and Section 2.2.2, acceleration techniques for regular grids are reviewed and the possibility of extending each technique to irregular grids is discussed. Section 2.2.3 and Section 2.2.4 present approaches designed specifically for irregular grids.

2.2.1 Regular Grid Voxel Space Techniques

Frieder et al. [24] point out that for parallel projections of regular grids it is not

necessary to sort the voxels in a depth order. A correct back-to-front or front-to-back depth

ordering can be achieved by processing the voxels in a regular fashion. A nested set of

three loops is used. The three loops correspond to the x, y and z dimensions of the grid.

Each loop is either processed in an increasing or decreasing order depending on the

viewpoint. The order for each of the loops is determined once per viewpoint by comparing

whether the first voxel in a row or the last voxel in a row is closer to the viewpoint. Swan

[81] has developed a fast, straightforward but slightly more complicated algorithm that generates a depth ordering without an explicit sort for perspective projections of regular grids. Neither of these approaches can be extended to irregular grids since it is the regular structure of the grid that allows the simple technique for determining a depth order to be correct.
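
A minimal C++ sketch of the Frieder et al. style traversal for a parallel projection follows: the sign of each view-direction component selects whether that axis is walked up or down, so the voxels come out in back-to-front order with no explicit sort. The per-voxel rendering call is a placeholder.

// Back-to-front traversal of an nx x ny x nz regular grid for a parallel
// projection. viewDir points from the eye into the scene; traversal
// starts at the corner farthest along viewDir. renderVoxel stands in for
// the per-voxel projection and compositing work.
void backToFront(int nx, int ny, int nz, const double viewDir[3],
                 void (*renderVoxel)(int x, int y, int z)) {
    const int count[3] = {nx, ny, nz};
    int start[3], step[3];
    for (int axis = 0; axis < 3; ++axis) {
        if (viewDir[axis] >= 0.0) {        // far end is the high index
            start[axis] = count[axis] - 1; step[axis] = -1;
        } else {                           // far end is the low index
            start[axis] = 0;               step[axis] = 1;
        }
    }
    for (int k = 0, z = start[2]; k < nz; ++k, z += step[2])
        for (int j = 0, y = start[1]; j < ny; ++j, y += step[1])
            for (int i = 0, x = start[0]; i < nx; ++i, x += step[0])
                renderVoxel(x, y, z);
}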

Westover [93] developed an efficient approach for mapping each voxel to a set of pixels. This method is referred to as splatting. The usual means of describing this approach is to relate it to throwing a snowball against a wall. The snowball “splats” with a larger

“thickness” in the middle and it falls off to the edge. The “footprint” of a splat is precomputed and used for each voxel. Larger footprints result in higher quality images, but require more computation to render each voxel. This technique is used by Yagel et al.

[105] in conjunction with hardware polygon rendering and texture mapping. This technique can be extended to irregular grids. The main difficulty is that it requires different

size and shape footprints for the cells. It also requires an explicit depth sort of the splats [62].
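
A sketch of the footprint idea, assuming a radially symmetric Gaussian kernel tabulated once and reused for every voxel; the kernel width, table size, and the simple back-to-front “over” blend are all illustrative choices rather than Westover’s exact formulation.

#include <cmath>

const int FP = 9;                    // footprint table is FP x FP pixels
static float footprint[FP][FP];      // precomputed kernel weights

// Precompute the footprint once: a truncated Gaussian whose weight is
// largest in the middle and falls off toward the edge, as in the
// snowball analogy.
void buildFootprint(float sigma) {
    const int c = FP / 2;
    for (int j = 0; j < FP; ++j)
        for (int i = 0; i < FP; ++i) {
            float dx = float(i - c), dy = float(j - c);
            footprint[j][i] = std::exp(-(dx * dx + dy * dy) /
                                       (2.0f * sigma * sigma));
        }
}

// Composite one voxel's shaded color into the image around its projected
// pixel position (px, py), back to front. image is a width x height
// buffer with 4 floats (RGBA) per pixel.
void splat(float* image, int width, int height, int px, int py,
           const float rgba[4]) {
    const int c = FP / 2;
    for (int j = 0; j < FP; ++j)
        for (int i = 0; i < FP; ++i) {
            int x = px + i - c, y = py + j - c;
            if (x < 0 || x >= width || y < 0 || y >= height) continue;
            float a = rgba[3] * footprint[j][i];   // weighted opacity
            float* p = image + 4 * (y * width + x);
            for (int ch = 0; ch < 3; ++ch)         // "over" the old value
                p[ch] = rgba[ch] * a + p[ch] * (1.0f - a);
            p[3] = a + p[3] * (1.0f - a);
        }
}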

Laur and Hanrahan [48] present an extension to splatting which allows faster

image generation at reduced quality. Their approach is to construct a complete octree of

the volume. For each cell in the octree a splat is created that represents the average RGBa

value, along with an error estimate, for all the voxels in the octree cell. This essentially represents a three dimensional mip-map [96] of the volume. The structure of the complete octree allows an easy traversal of the grid in depth order. Based on the error limit specified by the user, the corresponding level of each octree cell is processed to produce an image within the error limit. Extending this approach to irregular grids is problematic. The first problem is that the cells of the irregular grid may not fit exactly into one cell of the regular octree; many of the cells of the irregular grid may overlap cells of the regular octree. It would be difficult to split the cell’s contribution accurately between the octree cells it occupies. The second problem is that traversing the octree would then result in an incorrect depth ordering, since the contributions of some grid cells are in multiple octree cells.

Machiraju and Yagel [56] present a technique for using incremental calculations to transform the voxel coordinates using parallel projections. For regularly spaced grid coordinates, they show that the transformed coordinate for a voxel can be calculated based on the transformed coordinate of its neighbor using only three additions. The transformation of the first coordinate is done by a matrix vector calculation which requires

16 multiplications and 12 additions. Using the naive approach of transforming every point with a matrix vector calculation requires 16 multiplications and 12 additions for every point, compared to the 3 additions required using the incremental method. They describe how this algorithm works well on pipelined or vector parallel machines. This technique cannot be applied to irregular grids since it is the regular structure that allows the incremental calculation.
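
A sketch of the incremental calculation, assuming a 4x4 matrix and a parallel projection (so no per-vertex divide): only the grid origin is transformed with the full matrix product, and every other point is obtained by adding precomputed per-axis deltas, three additions per point.

// Incremental transformation of an nx x ny x nz regular grid by the
// 4x4 matrix M under parallel projection. The origin is M * (0,0,0,1);
// stepping one voxel along an axis adds a constant delta, so each
// further point costs only three additions. emit receives each
// transformed point.
void transformGrid(const double M[4][4], int nx, int ny, int nz,
                   void (*emit)(double x, double y, double z)) {
    double origin[3], dx[3], dy[3], dz[3];
    for (int r = 0; r < 3; ++r) {
        origin[r] = M[r][3];   // translation column: M * (0,0,0,1)
        dx[r] = M[r][0];       // delta for one voxel step in x
        dy[r] = M[r][1];       // delta for one voxel step in y
        dz[r] = M[r][2];       // delta for one voxel step in z
    }
    double pz[3] = {origin[0], origin[1], origin[2]};
    for (int k = 0; k < nz; ++k) {
        double py[3] = {pz[0], pz[1], pz[2]};
        for (int j = 0; j < ny; ++j) {
            double px[3] = {py[0], py[1], py[2]};
            for (int i = 0; i < nx; ++i) {
                emit(px[0], px[1], px[2]);
                px[0] += dx[0]; px[1] += dx[1]; px[2] += dx[2];
            }
            py[0] += dy[0]; py[1] += dy[1]; py[2] += dy[2];
        }
        pz[0] += dz[0]; pz[1] += dz[1]; pz[2] += dz[2];
    }
}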

Sobierajski et al. [77] present an algorithm for reducing the number of voxels that are actually processed during the rendering stage. They generate a “trimmed voxel list” that represents the voxels on surface boundaries. During a preprocessing step, this list of visible voxels is extracted from the original volume; gradient values can be used to determine whether or not a voxel represents a surface boundary. A color and normal are stored for each boundary voxel. During the rendering stage only these boundary voxels are processed and rendered. Each boundary voxel is rendered by the hardware as a point. They point out the problem of holes appearing in the image when rotating the volume because adjacent voxels do not transform to adjacent pixels. They resolve this problem by rendering additional points for some voxels depending on the viewing direction. They note that points consisting of multiple pixels could be used but since the hardware at that time did not support it, they did not implement it. They also use the normal value for each boundary voxel to cull back-facing voxels to further reduce the number of voxels that are actually rendered. This approach takes advantage of the fact that an explicit sort is not

required of regular grids in order to get a correct depth order. Because of this, extending it to irregular grids would require a depth sort to be performed on the extracted voxels.

Yagel et al. [105] describe a number of renderers including one that is an extension of Sobierajski's approach. By this time, available graphics hardware could quickly render points that are larger than one pixel and also had the capability to render textured polygons. Yagel et al. used this capability to render each boundary voxel as either a “fat” point or a textured splat [93]. The size of the points is adjusted to prevent holes in the image and the textured splats allow a higher quality image to be generated. Crawfis and

Max [17] also present a textured splatting method. They developed an ideal reconstruction filter that is used for the splats. As with Westover [93], these approaches can be extended to irregular grids, but require varying the splats’ shapes and sizes and require an explicit depth sort of the splats.

Lacroute and Levoy [46] present a fast volume rendering algorithm using a shear warp factorization. The basic idea is that each slice of the volume can be traversed in order if a shear is applied to the slices based on the viewpoint. For perspective transformations, it is also necessary to apply a scale to each slice. The slices can then be resampled and composited together. Finally, a warp operation based on the viewpoint needs to be applied to the composited image to generate the final image. This algorithm cannot be extended to irregular grids because it is the regular structure of the grids that allows the slices to be processed in order after the shear operation.

2.2.2 Regular Grid Image Space Techniques

Levoy [53] presents several optimizations to accelerate ray casting of regular grids. The first technique is to impose an octree on the volumetric data set. The traditional octree approach only subdivides in areas of high detail; however, Levoy uses a complete octree (the entire octree is subdivided to the same level). The basic ray casting approach is modified to advance the ray quickly through empty areas of the volume based on the data in the octree. Each cell in the octree indicates whether all of its children are empty or whether at least one contains a non-empty voxel. For each ray, the top level cell in the octree that it intersects is computed. If it is empty, the ray moves to the next cell on the same level. If the next cell has a different parent, the new parent cell is used instead. If this parent cell is empty, the ray can be advanced farther in the grid. If an octree cell is non-empty, the ray is passed to the child cell that contains the current location of the ray. As long as the child cell is non-empty, this process continues. If the cell is at the lowest level, the voxel values are sampled and a color and opacity are accumulated. For data sets with large empty areas, this will provide lower rendering times. This approach can be applied to irregular grids, although the implementation for irregular grids is more complicated since the number of voxels in each octree cell will vary. Also, determining which cell of the irregular grid to sample when the lowest level of the octree is reached is more difficult due to the irregular structure of the grid. Because of these difficulties and the fact that irregular grids are usually organized so that there is only a high level of detail in non-empty areas, this approach will most likely not reduce rendering times as much as it does for regular grids.
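A minimal sketch of the skip computation, assuming a complete octree whose nodes store only an emptiness flag (the structure and names are illustrative, not Levoy's exact data layout):

    #include <algorithm>

    struct OctNode {
        bool empty;            // true if no descendant voxel is non-empty
        OctNode* child[8];     // all null at the lowest level
    };

    // Distance along the ray (position p, unit direction d) to the exit
    // face of the axis-aligned box [lo, hi] -- the exit half of a slab test.
    static double exitDist(const double p[3], const double d[3],
                           const double lo[3], const double hi[3]) {
        double t = 1e30;
        for (int i = 0; i < 3; ++i) {
            if (d[i] > 1e-12)       t = std::min(t, (hi[i] - p[i]) / d[i]);
            else if (d[i] < -1e-12) t = std::min(t, (lo[i] - p[i]) / d[i]);
        }
        return t;
    }

    // Returns how far the ray may safely advance without sampling; 0 means
    // the leaf containing p is non-empty and must be sampled normally.
    // lo/hi are shrunk in place as the descent narrows to one octant.
    double skipDist(const OctNode* n, double lo[3], double hi[3],
                    const double p[3], const double d[3]) {
        if (n->empty)
            return exitDist(p, d, lo, hi) + 1e-6;  // epsilon crosses the face
        if (!n->child[0])
            return 0.0;                            // non-empty leaf: sample
        int idx = 0;
        for (int i = 0; i < 3; ++i) {              // pick the octant holding p
            double mid = 0.5 * (lo[i] + hi[i]);
            if (p[i] >= mid) { idx |= 1 << i; lo[i] = mid; }
            else             { hi[i] = mid; }
        }
        return skipDist(n->child[idx], lo, hi, p, d);
    }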

The other optimization discussed by Levoy [53] is early termination of ray casting. The idea is that after a ray is traced to the point where it reaches full opacity, the color is not going to change by continuing to sample and accumulate values. Once this point is found, the ray is terminated and the color obtained by tracing the ray to that point is used. Essentially, once the opacity accumulated along the ray nears 1.0, the ray can be terminated, since the color will not change significantly after this point. This technique can easily be applied to all types of grids.
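A minimal front-to-back compositing loop showing the termination test; the sampling function and the 0.99 cutoff are assumptions for illustration:

    struct Sample { float r, g, b, a; };
    Sample sampleVolume(float t);  // assumed: interpolate color/opacity at t

    void castRay(float tNear, float tFar, float dt,
                 float& R, float& G, float& B) {
        float alpha = 0.0f;
        for (float t = tNear; t < tFar && alpha < 0.99f; t += dt) {
            Sample s = sampleVolume(t);
            float w = (1.0f - alpha) * s.a;  // "over" weight of this sample
            R += w * s.r;  G += w * s.g;  B += w * s.b;
            alpha += w;                      // accumulated opacity
        }                                    // loop exits once alpha nears 1.0
    }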

Levoy and Whitaker [52] present a technique to reduce the computation of ray tracing. Their method takes advantage of the fact that the visual acuity of the human eye is highest in the center of the fovea and falls off toward the periphery of the visual field. Thus, they propose calculating more accurate pixels in the areas of the screen the user is looking at directly. Based on the user's gaze direction, they modify the number of rays per unit area and the number of samples per ray to achieve a higher quality image in the area the user is looking at directly. A more quickly rendered, lower quality image is displayed in areas in the user's peripheral vision. They pre-compute a three dimensional mip-map (a complete octree with color and opacity values for each cell of the octree) of the volume and store only the view independent shading information. As the viewpoint and gaze of the user change, the mip-map is used to generate the new image. For areas of the image the user is looking at directly, values in the high resolution part of the mip-map are used to generate those pixels. For areas away from the user's direct view, values in a lower resolution area of the mip-map are used. This technique is intended for applications where many different view points will be chosen so that the cost of precomputing the mip-map is amortized over the entire sequence of images.

This approach suffers from the same problems as the hierarchical splatting by Laur and Hanrahan [48], discussed in Section 2.2.1, when attempting to extend it to irregular grids. The first problem is that the cells of the irregular grid may not fit exactly into one cell in the regular octree; most likely the cells of the irregular grid would overlap cells of the regular octree, so generating an accurate mip-map will be difficult. The second problem is that sampling the mip-map at different levels will result in an incorrect depth ordering because the contributions of some grid cells are in multiple octree cells. Because of these difficulties and the fact that irregular grids are usually organized so that there is only a high level of detail in non-empty areas, this technique will most likely not reduce rendering times for irregular grids as much as it does for regular grids.

Danskin and Hanrahan [20] present an acceleration technique for ray tracing that also uses a three dimensional mip-map structure. Their technique is based on the idea of importance sampling: more accurate samples should be calculated in areas that contribute the most to the final value of the pixel. Low opacity values for a sample imply that the sample does not contribute much to the final value of the pixel. If the maximum opacity value obtained from the mip-map for a given region is less than a specified threshold, the region is approximated with a single sample. More samples are taken in areas of higher opacity. They point out two problems: the method can take a single sample across a very large area if that area has a low opacity, and it ignores the accumulated opacity. Using a large cell from the mip-map in an area of low opacity that has many different colors introduces an error into the image; however, using a large cell from the mip-map for an area that is relatively homogeneous does not introduce an error regardless of the opacity. They implement this method to reduce the number of samples that are required. There is no theoretical problem with implementing this approach for irregular grids; however, because the grids are usually organized so there are more small cells in areas of high variation and fewer large cells in areas of low variation, it will not produce large speedups.

Levoy [51] presents an algorithm that reduces computation by limiting the number of rays that are cast. In this method, a subset of the pixels is calculated as in the traditional ray casting approach. Values for the pixels between these samples are interpolated from the surrounding pixels that were calculated. In areas of high gradient, additional rays are cast instead of using interpolated pixel values. This technique is also used as a way of producing a sequence of incrementally refined images. For the first image, only a few of the pixels are calculated by casting rays and the other pixels are calculated by interpolation. The image is refined by casting rays for pixels that were previously calculated by interpolation. This incrementally improves the quality of the rendered image. This approach can be directly applied to irregular grids. Used in conjunction with information about the sizes of the voxels, this could greatly limit the number of rays that are cast.
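A sketch of the idea, assuming a grayscale image, a fixed coarse-grid spacing, and a simple min/max refinement test (Levoy's actual criterion is more elaborate); castRayAt is an assumed full ray cast for one pixel:

    #include <algorithm>
    #include <vector>

    float castRayAt(int x, int y);  // assumed: trace a full ray for pixel (x,y)

    void adaptiveImage(std::vector<float>& img, int w, int h, int step,
                       float threshold) {
        for (int y = 0; y < h; y += step)          // coarse sampling pass
            for (int x = 0; x < w; x += step)
                img[y * w + x] = castRayAt(x, y);
        for (int y = 0; y + step < h; y += step)
            for (int x = 0; x + step < w; x += step) {
                float c00 = img[y*w + x],        c10 = img[y*w + x+step];
                float c01 = img[(y+step)*w + x], c11 = img[(y+step)*w + x+step];
                float lo = std::min(std::min(c00, c10), std::min(c01, c11));
                float hi = std::max(std::max(c00, c10), std::max(c01, c11));
                for (int j = 0; j <= step; ++j)
                    for (int i = 0; i <= step; ++i) {
                        float u = (float)i / step, v = (float)j / step;
                        // refine with a real ray where corners disagree,
                        // otherwise fill by bilinear interpolation
                        img[(y+j)*w + (x+i)] = (hi - lo > threshold)
                            ? castRayAt(x+i, y+j)
                            : (1-v)*((1-u)*c00 + u*c10) + v*((1-u)*c01 + u*c11);
                    }
            }
    }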

Yagel and Kaufman [102] present a technique to speed up ray casting when a parallel (orthographic) projection is used. They note that for parallel projections, each ray follows the same type of discrete path through the voxels. For a given view, the path can be computed once and then, for each ray, the same increments can be used. They point out that because of rounding errors when mapping the ray path to the discrete voxel space, the template path can result in uneven sampling of the grid depending on the view point. They fix this problem by calculating the template for the path based on a base plane that is parallel to one of the major axes of the grid. Using the precalculated path for each ray reduces the computation required to move from one voxel to the next along the ray. They report speedups of about 1.9 for grids of size 256³. This approach cannot be directly applied to irregular grids because it is the regular structure of the voxels that allows the precomputation of the path.

Cohen and Shefer [11] present a technique for skipping over empty cells using a technique they call proximity clouds. The idea is to include the distance to the nearest non­ empty voxel in each empty voxel. When an empty cell is encountered the ray can be incremented by the distance stored in the empty cell without missing any data. This method can be applied to irregular grids. The problem is that in general, skipping ahead a certain number of cells requires traversing through each cell in order to find the next cell the ray enters. Because of this, the technique produces the same results for irregular grids as noting which voxels are empty and not performing any sampling and accumulation calculations for empty voxels. Yagel and Shi [104] extend this idea to render multiple

34 images of an animation by storing the first non-empty voxel that a ray intersects for each

pixel. With their method, the information is transformed accordingly when the viewpoint

is changed and an update is performed to ensure that the information is correct.
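The traversal loop for proximity clouds is compact; in this sketch all helpers and the precomputed distance volume are assumptions (per empty voxel, dist holds the distance in voxels to the nearest non-empty voxel, with 0 marking a data voxel):

    bool insideVolume(const double p[3]);      // assumed bounds test
    int  voxelIndex(const double p[3]);        // assumed voxel addressing
    void sampleAndComposite(const double p[3]);
    extern int dist[];                         // assumed distance-transform volume

    void traceWithProximityClouds(double p[3], const double rayStep[3]) {
        while (insideVolume(p)) {
            int d = dist[voxelIndex(p)];
            if (d == 0) { sampleAndComposite(p); d = 1; }
            for (int i = 0; i < 3; ++i)        // jump d voxels in one move
                p[i] += d * rayStep[i];
        }
    }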

Avila et al. [2] present a technique for accelerating ray tracing that uses a polygonal approximation of the non-empty voxels to assist in skipping empty voxels. They refer to the method as polygon assisted ray casting (PARC). A subvolume that contains the non-empty cells is precomputed. Based on a specified view point and image resolution, the front faces of the subvolume are projected into a near z-buffer and the back faces of the subvolume are projected into a far z-buffer. The values in the z-buffers can be used to determine the rays that only intersect empty cells and also to determine approximately where the first and last intersection for a given ray will occur. This reduces the amount of unnecessary computation that occurs when tracing through empty cells. For each pixel, the near z-buffer is used to determine if any non-empty voxels are intersected and, if so, the approximate location of the first intersection can be found in the near z-buffer. The far z-buffer specifies the last non-empty voxel the ray intersects and is used to terminate the ray at that point.

Because the subvolumes are an approximation of the surface, some empty cells may still be processed. Hardware z-buffering can be used to calculate the near and far z-buffers, and this can be done quickly since no shading values need to be calculated. The authors note that the accelerations achieved by this algorithm are dependent on how tightly the subvolume fits around the non-empty voxels. There is a trade-off between the resolution of the subvolumes and the number of empty cells that are traversed. A higher resolution for the subvolumes reduces the number of empty voxels that are processed, but increases the preprocessing time to calculate the z-buffer values. This algorithm will work best when the non-empty voxels can be approximated closely with a small number of faces for the subvolumes. This approach can be applied directly to irregular grids in a straightforward manner. For regular grids, axially aligned faces are used for the subvolume, so three of the faces are front facing and three are back facing. This makes it simple to determine into which z-buffer to render the faces. For irregular grids, choosing the correct z-buffer for a face is slightly more complicated if the faces of the irregular grid are used. The same method can be used if the subvolumes use axially aligned faces instead of faces of the irregular grid, but this complicates the preprocessing stage since the cells intersect the axially aligned faces.

Sobierajski et al. [78] extend the PARC algorithm to skip over the cells which do not contribute to the image but are between the first and last non-empty cells the ray intersects. They note that the PARC algorithm works well for applications where full opacity is reached at the first voxel intersected (e.g., when searching for an isovalue); however, in many cases, full opacity is not reached until farther along the ray. In these cases, some of the voxels between the first and last non-empty voxels do not contribute to the color and opacity. They may be empty or be a portion of the volume which the user does not desire to visualize. Sobierajski et al. [78] present a technique for using the color buffer to generate a list of voxels along a ray that do contribute to the color and opacity for that ray. Cells are assigned a bit in the color value according to their distance along the major viewing axis. Initially, the color buffer is assigned a value of zero for each pixel. For each cell that contributes, a bitwise OR operation of the current pixel color and the bit corresponding to that cell is performed to produce a value representing the set of cells that do contribute. The resolution of the color buffer limits the resolution of the subvolume grid that can be used. Most hardware color buffers are 24 bits, so at most 24 subdivisions of the volume can be used, or partial values must be captured from the hardware and accumulated in software. The authors point out that the first plane that contains a non-empty data value can be assigned the first bit in the color buffer so that bits are not wasted on empty cells in the boundary of the volume. The authors report that the PARC algorithm achieves a speedup of approximately two over naive ray casting and that the color buffer PARC algorithm achieves another speedup of approximately two. The performance of these algorithms is dependent on the actual data. This approach can also be applied to irregular grids but may require more color bits because of the structure of an irregular grid. The number of cells along a ray may vary greatly depending on the orientation of the ray. For some ray orientations, more color bits may be required because many cells are intersected. For other ray orientations, only a small number of cells may be intersected.
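A sketch of the bit encoding; names are assumptions, and the real system produces the OR through hardware rendering of cell faces rather than by writing the buffer directly:

    #include <cstdint>

    // Each contributing cell maps to one bit by its slab index along the
    // major viewing axis; reading the buffer back gives, per pixel, the set
    // of slabs the ray must actually sample.
    void markCell(std::uint32_t* colorBuffer, int pixel, int slabIndex) {
        colorBuffer[pixel] |= 1u << slabIndex;  // bitwise OR, as in [78]
    }

    void sampleSlab(int s);  // assumed: sample/composite within slab s only

    void traceFlaggedSlabs(std::uint32_t mask, int numSlabs) {
        for (int s = 0; s < numSlabs; ++s)      // numSlabs <= 24 for a
            if (mask & (1u << s))               // 24-bit hardware buffer
                sampleSlab(s);                  // untouched slabs are skipped
    }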

Cross [18] presents a method similar to the color buffer PARC algorithm described above. His method is described as a ray tracing technique for all types of data (not just volumetric data). He mentions using the z-buffer to determine the distance along the ray of the first object the ray intersects and also using an item buffer to contain an identifier indicating which object the ray first intersects. The paper also describes an opacity grid organized as a regular grid that is used to indicate the probability that a ray passing through a cell is occluded. This probability is used to determine if a light source reaches the point or is blocked by other objects. For each cell, a number of samples are tested and the percentage of these that are occluded is assigned to the cell. The paper does not specify if the probability value is used exclusively to determine whether or not a point is in shadow.

2.2.3 Irregular Grid Object Space Techniques

The splatting technique, introduced by Westover [93], has been extended to curvilinear grids by Mao et al. [57], but requires different footprints for each voxel because the voxels are not a constant size. They use a stochastic sampling method to essentially resample the grid in order to generate a set of points whose energy supports are spheres and ellipsoids. This allows a fast footprint evaluation. The main problem with this algorithm is that because of the stochastic sampling process, there is no guarantee of the quality of the image.

Shirley and Tuchman [75] present a voxel space algorithm that converts each voxel to a set of polygons and then renders and composites these polygons. The approach described requires tetrahedra, so for regular grids, they subdivide each voxel into five tetrahedra. Each tetrahedron is projected onto a plane parallel to the viewing plane and produces one to four triangles. Since the maximum color and opacity for the tetrahedron occur at the thickest point of the tetrahedron, the maximum thickness value is computed along with the color and opacity at that point. They assume that the brightness and opacity vary linearly across each of the triangles. This assumption reduces image quality, but they point out that for tetrahedra with low opacity it will not cause major problems. Because this algorithm renders polygons, it is best suited for machines with hardware polygon rendering capabilities. As with most of the techniques discussed in this section, the problem is that depth sorting of irregular grids is not always possible due to overlapping cells. Williams [98] implements this method in combination with a visibility ordering algorithm he developed for meshed polyhedra [97].

Stein et al. [80] extend the approach of Shirley and Tuchman [75] described above to take advantage of the texture mapping capabilities of graphics hardware and prevent the artifacts caused by the linear approximation of the opacity. The opacity should vary as an exponential function. The values based on the exponential function are stored in a texture table. When rendering the triangles, the texture coordinates at the vertices of the triangle are set according to the thickness of the tetrahedron at that point. Because most graphics hardware linearly interpolates the values in between, there are still some inaccuracies, but the artifacts are greatly reduced. In this paper, they also introduce a sorting algorithm for arbitrary convex polyhedra that is the three dimensional extension of the two dimensional painter's algorithm sort developed by Newell, Newell, and Sancha [23]. Max et al. [58] introduce an accurate projection algorithm based on the Shirley and Tuchman [75] approach. They accurately compute the depth of a cell across the projected polygon. The depth values are used to properly weight the opacity for each pixel as the polygon is scan converted.
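The exponential opacity table of Stein et al. is small enough to sketch directly; the table size, extinction coefficient tau, and maximum thickness below are assumptions:

    #include <cmath>

    // Opacity over a ray segment of thickness l is alpha = 1 - exp(-tau*l).
    // Storing this curve in a 1D texture lets the hardware evaluate it per
    // pixel; the texture coordinate at each triangle vertex is set to the
    // tetrahedron's thickness there, normalized by maxThickness.
    void buildOpacityTable(float* table, int n,
                           float tau, float maxThickness) {
        for (int i = 0; i < n; ++i) {
            float l = maxThickness * i / (n - 1);  // thickness for entry i
            table[i] = 1.0f - std::exp(-tau * l);
        }
    }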

Williams [97] presents an algorithm for determining the visibility order for irregular grids. The idea is to construct the adjacency graph for the polyhedral cells. Each cell is connected by an edge to cells that share a face with it. Based on a given viewpoint, the edges are assigned a direction from the cell that is behind to the cell that is in front. A topological sort (which can be calculated in O(n) time for n cells) of this graph results in a correct depth ordering; however, this approach cannot always be used. For a grid with concave cells, one cell may not be completely behind or in front of an adjacent cell for a given viewpoint. In this case, a direction cannot be assigned to the edge between the two cells. Fortunately, this situation is not common for grids in scientific applications. A more common problem is that the directed adjacency graph may have cycles and thus cannot be topologically sorted. Even a grid composed of convex cells may have cycles in its adjacency graph. To properly handle this, some of the cells need to be split into multiple cells to eliminate the cycles. This can result in a significantly larger number of cells. Williams presents some heuristic methods to handle these problems, but notes they increase the asymptotic running time of the algorithm. Van Gelder and Wilhelms [91] present a variation on this approach that they claim is simpler, but it also does not guarantee a correct depth order for nonconvex volumes; however, they report it has never failed in practice.
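The sort itself is standard; the following sketch uses Kahn's algorithm on an assumed input of (behind, inFront) cell pairs already directed for the current viewpoint. Cells with no predecessors are the backmost, so the output is a back-to-front order; a short result signals a cycle, for which Williams' splitting heuristics would be needed:

    #include <queue>
    #include <utility>
    #include <vector>

    std::vector<int> visibilityOrder(int numCells,
            const std::vector<std::pair<int,int>>& edges) {
        std::vector<std::vector<int>> succ(numCells);
        std::vector<int> indeg(numCells, 0);
        for (const auto& e : edges) {
            succ[e.first].push_back(e.second);  // behind -> inFront
            ++indeg[e.second];
        }
        std::queue<int> ready;
        for (int c = 0; c < numCells; ++c)
            if (indeg[c] == 0) ready.push(c);   // nothing behind these cells
        std::vector<int> order;
        while (!ready.empty()) {
            int c = ready.front(); ready.pop();
            order.push_back(c);
            for (int f : succ[c])
                if (--indeg[f] == 0) ready.push(f);
        }
        return order;  // order.size() < numCells means the graph has a cycle
    }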

Van Gelder and Wilhelms [91] present three projection methods that can take advantage of polygon rendering hardware. They use a variation on Williams' [97] technique for depth ordering a meshed grid. Their first approach is to simply render the faces of each cell in depth order. They note this algorithm is fast, but produces artifacts because the distance between cell faces is not considered. Their second and third methods use the approach of Shirley and Tuchman [75] and Max et al. [58]. For their second algorithm, they project the vertices of the cell onto the viewing plane and calculate the convex hull of those vertices. The data value for each vertex is weighted by the depth of the cell at that vertex. The convex hull polygon is then rendered as a Gouraud shaded polygon with these weighted data values. They note that this algorithm produces higher quality images than the first algorithm, but may still have artifacts. The third approach projects all the vertices and then separates the projection into polygonal regions that share the same front and back face from the cell. This allows the depth-weighted data value to be interpolated more accurately across the polygon when using Gouraud shading. This algorithm is the slowest of the three, but produces the best results. An initial implementation of the third algorithm is reported in [94], but only rectilinear cells were handled. The algorithm presented in [91] handles arbitrary hexahedra.

Haimes [29] describes a method for extracting isosurfaces using a slicing technique. His algorithm requires an initial sorting by minimum and maximum z values for all the cells. This requires O(n log n) time for every view change, and an extra O(log n) time is spent finding the z-limits in the sorted arrays for each slice. The algorithm described in this dissertation also generates slices but, in contrast to Haimes' method, the algorithm described in Chapter 3 uses O(n) time to place the cells in buckets, and a total of only O(n) time for all the slices to maintain the active cell list, irrespective of the number of slices.

Koyamada et al. [41] present a rendering algorithm for irregular grids that is also based on slicing. They generate a set of slices through the irregular grid and then render these slices. For perspective viewing, a set of concentric spherical slices is generated, and for parallel projections, the slices are planar. They assume that the grid is composed of tetrahedra and represent each spherical slice as a set of triangles. Once the slices are extracted, they scan convert the triangles to generate a rendering of the image. The algorithm described in Chapter 3 also generates a set of slices and renders them; however, there are a number of differences between the algorithms. The main difference is the approach used to generate the slices. Koyamada et al. [41] use an isosurface extraction method to generate the slices. Their first method is relatively slow, and Koyamada and Ito present a modified technique [42]. This modified technique still relies on generating an isosurface by starting with seed cells and tracing the surface from cell to neighboring cells. In Chapter 3, a more efficient method of generating the slices is described. Also, the algorithm in Chapter 3 can handle different types of cells (e.g., hexahedra) more easily than their method (which, as described, only supports tetrahedra). This is because their algorithm generates an isosurface, while the algorithm presented in Chapter 3 treats each cell individually without concern for its neighbors.

2.2.4 Irregular Grid Image Space Techniques

Unfortunately, there are not many techniques for accelerating ray casting of irregular grids, as can be observed from the small number of papers in the literature on this topic. A few of the regular grid acceleration techniques can be applied to irregular grids (early ray termination [53], casting fewer rays in areas of low gradient [51], polygon assisted ray casting [2][78]), as noted in Section 2.2.2. Garrity [27] presents the basic algorithm for ray casting irregular grids and points out that a regular spatial subdivision can be used to accelerate the determination of the first cell a ray intersects. Garrity also notes that adjacency information for the faces of the cells can be used to determine the next cell a ray intersects based on the face of the cell through which the ray exits. Frühauf [26] notes that the hardware z-buffer and color buffers can be used to find the first cell a ray intersects and to determine if an exited cell is the last cell along the ray. This is performed by rendering the exterior faces of the grid and encoding each face with a different color.

Frühauf [25][26] presents a slightly different approach for rendering structured grids such as curvilinear grids. Instead of casting the rays through the curvilinear grid, the rays are cast through the regular computational space grid and warped according to the warping function that transforms the regular computational space grid into the curvilinear grid. Frühauf notes the warping function can be precomputed and used for multiple viewing directions. This technique cannot be easily applied to arbitrary grids because computing the warping function for arbitrary irregular grids is not easily performed.

2.2.5 Summary

In general, object space algorithms are faster than image space algorithms. The rendering algorithm presented in this dissertation fits into the hybrid method category. It transforms the grid vertices to image space and then calculates the contributions of the cells in depth order in image space. This algorithm avoids the explicit depth sort required by object order algorithms. Depth sorting requires O(n log n) time for n cells and is not always possible for polyhedral grids. The technique presented here requires O(n) time to achieve the depth order and works on any grid composed of polyhedral cells. The algorithm can also take advantage of polygon rendering hardware to achieve fast rendering times.

2.3 Deformation

Deformation was first introduced to computer graphics as a modeling tool by Burtnyk and Wein [5]. Its use in modeling is to create more complex shapes by applying a deformation to a simpler shape. Usually, the model itself is deformed; however, Kurzion and Yagel [44][45] have introduced a technique for rendering deformations by deforming the rays used to render the model. Sederberg and Parry [74] introduced a method for deforming the points of an object by adjusting a smaller set of control points that intuitively influence the shape of the original object; a number of extensions and improvements have been made to the original paper [12][13][47]. These approaches work reasonably well as modeling tools, but they have limitations for producing a sequence of shapes for animation. The main problem is generating the sequence of control points for each frame of the animation. Generating the location of each control point for every frame by hand is tedious. Key-framing techniques can be used, but this presents the problem of calculating a good set of in-between points. To alleviate these problems, physically-based deformation techniques have been used.

The approach of physically-based methods is to try to model the actual physics that define the behavior of objects. It is extremely difficult to exactly model every physical force that is applied to an object, and most models use a number of simplifying assumptions to make the simulations computationally feasible. The most common approach is to model deformable objects with springs and use numerical integration to solve the resulting equations. The major difficulty is that many of the differential equations are numerically unstable and may require extremely small time steps or more computationally expensive integration algorithms to solve accurately. Because of this, it is often not possible to perform these calculations interactively. Physically-based techniques have been applied to many areas of computer graphics, including both rigid body motion and deformable models. In this dissertation, only those involving deformation are discussed.

A basic introduction to springs and particle systems can be found in most introductory physics books (e.g., [30]). Hooke's law states that the magnitude of the force exerted by a spring is proportional to the amount the spring is compressed or stretched from its relaxed state. Equation 2.1 is the common form of Hooke's law, where F is the force, k is the constant representing the strength of the spring, and x is the amount the spring is compressed or stretched from its rest length.

F(x) = -kx    (2.1)

If the spring is stretched, the direction of the force is inwards along the spring, and if the spring is compressed, the direction of the force is outwards along the spring.

Flexible objects are commonly modeled as a set of particles connected by springs. Hooke's law provides the force that each spring exerts on a particle. The net force on a particle is computed by summing the forces applied by all the springs connected to it. To generate the motion, the force must be converted to a positional update. The first step is to calculate the acceleration using Newton's law (Equation 2.2, where F is the force, m is the mass of the particle, and a is the acceleration).

F = ma    (2.2)

Once the acceleration is known, one integration step produces the change in velocity and another integration step produces the change in position. Calculating the simulation involves subdividing time into discrete time steps and performing the integration numerically. A simple Euler integration method or a more robust method such as Runge-Kutta can be used (see any numerical analysis book such as [8] for a detailed description of these algorithms). By adjusting the spring strengths and the masses of the particles, the degree of flexibility in the object can be varied; however, the numerical methods can be unstable, especially for the large spring strengths that are required to simulate relatively rigid objects. Extremely small time steps may be necessary to accurately perform the integration.
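A minimal sketch of one explicit Euler step for such a system (the data layout is an assumption): each spring applies Hooke's force (Equation 2.1) along its axis, Newton's law (Equation 2.2) converts force to acceleration, and two integrations update velocity and position. The small time step dt is exactly the stability limitation discussed above.

    #include <array>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Particle { double pos[3], vel[3], mass; };
    struct Spring   { int a, b; double k, restLength; };

    void eulerStep(std::vector<Particle>& P,
                   const std::vector<Spring>& S, double dt) {
        std::vector<std::array<double,3>> F(P.size(), {0.0, 0.0, 0.0});
        for (const Spring& s : S) {
            double d[3], len = 0.0;
            for (int i = 0; i < 3; ++i) {
                d[i] = P[s.b].pos[i] - P[s.a].pos[i];
                len += d[i] * d[i];
            }
            len = std::sqrt(len);
            double f = s.k * (len - s.restLength);  // Hooke: F = -kx, along spring
            for (int i = 0; i < 3; ++i) {
                double fi = f * d[i] / len;
                F[s.a][i] += fi;   // pulls a toward b when stretched
                F[s.b][i] -= fi;   // equal and opposite on b
            }
        }
        for (std::size_t p = 0; p < P.size(); ++p)
            for (int i = 0; i < 3; ++i) {
                double acc = F[p][i] / P[p].mass;   // a = F/m (Equation 2.2)
                P[p].vel[i] += acc * dt;            // first integration: velocity
                P[p].pos[i] += P[p].vel[i] * dt;    // second integration: position
            }
    }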

These integration techniques also suffer from several other problems. One is that they do not elegantly allow constraints to be placed on the motion of the particles. If a particle violates a constraint, the simulation must be backed up to a previous time step and advanced with a smaller time step to the point where the constraint would be violated so that it can be handled at that point. The other problem is that it takes a number of time steps for a force to propagate through a system of springs and masses. When a force is applied to a particle, the particles directly connected to it are affected at the next time step. Two time steps later, the particles that are two springs away from the original particle are affected, and so on. Essentially, it takes n time steps for a particle n springs away from the applied force to react. Because of this, the positions of the particles cannot be accurately simulated unless a small time step is used. And as mentioned before, a small time step increases the amount of computation required to simulate the system.

Witkin et al. present a practical and more detailed discussion of these problems in [100]. They suggest using a different approach for solving the equations. They use an implicit formulation that requires the solution of a linear system of equations. For regular lattices, the resulting system of equations has special properties that allow it to be solved faster than the usual O(n³) for a system of n equations. These algorithms also allow constraints to be included in the equations. For a more detailed description of these techniques, see [100].

2.3.1 Physically-Based Deformation

Terzopoulos is one of the earliest and most prolific contributors to the area of physically-based deformation. Terzopoulos et al. first introduced physically-based simulation of elastic models in [82]. A more complete theoretical introduction to this area can be found in [85]. This work describes elastic deformations and some inelastic deformations such as viscoelasticity and fracture (also in [83]). With elastic deformations, the deformed object returns to its original shape when all the forces causing the deformation are removed. These types of deformations are easily modeled using a Hooke's spring model since a spring attempts to return to its resting length. Terzopoulos [85] describes viscoelasticity as a combination of elasticity and viscosity; the shape an object undergoing a viscoelastic deformation returns to depends on the entire history of the deformation, not just the original shape as in an elastic deformation. Fracture deformations involve the introduction of a discontinuity into the model. Terzopoulos describes the differential equations that are used to simulate these deformation methods and discusses different methods for solving these sets of equations. Terzopoulos [84][85] also provides techniques for a hybrid model combining rigid body dynamics and deformable models. Metaxas and Terzopoulos [60] extend the deformation techniques to handle constraints on interconnected deformable objects. This requires additional terms and equations in the set of differential equations that model the system. They note that "stabilization" terms need to be added to the equations and that there are trade-offs between solution time and accuracy of the constraints.

Terzopoulos and Waters [86] introduce a physically-based approach for modeling facial expressions. They use a layered approach to model the skin, the muscles, and attachments to the bone. A regular grid is used to connect the skin, muscles, and bone. Springs connect the grid points, and a physically-based simulation algorithm is used to calculate new positions for the grid points when forces are applied. The skin vertex locations change because they are connected to the grid. The face is rendered with the changing skin vertices to produce the animated facial expressions. Lee, Terzopoulos, and Waters extend this work in [49]. They introduce more layers to more accurately model the tissues beneath the skin and a more complete model for the tissue deformations, including volume preservation. This is accomplished by adding additional terms to the differential equations.

Haumann [32] presents a test-bed for experimenting with physically-based methods by representing objects as systems of particles. The system represents objects as a system of "actors" such as masses, springs, gravity, hinges, aerodynamic drag, wind, etc. The idea is to represent an environment by a set of these actors. The physically-based simulation determines the effects these actors have on each other to modify the objects in the environment. Using this system, a number of animations were created and are described in [32].

Chadwick et al. [10] use free-form deformations [74] in combination with a mass spring system to produce deformations. They connect the vertices of the FFD lattice with springs and use a physically-based simulation to update the lattice. A model is then deformed based on the FFD lattice. This technique works well when exact control of the deformations is not necessary. They use it to control the deformation of fatty tissues.

Witkin and Welch [99] present a fast method for computing physically-based deformations. Their approach is based on global deformations with only a few degrees of freedom. They use algorithms discussed in [100] for solving these equations. The main problem with using this technique for many applications is that it produces global, rather than local, deformations.

2.4 Deformation For Medical Applications

Some of the most active areas in deformation research, specifically physically-based deformation, are medical and surgical applications. Computed tomography (CT) and magnetic resonance imaging (MRI) generate volumetric data sets. These imaging techniques can be used as diagnostic tools to visualize the interior of the body without cutting it open. They are used to visualize fractures, broken bones, tumors, and other abnormalities. They can also be used to generate anatomic atlases for medical education. Until recently, only static images of these data sets were viewed. As computer technology has advanced, there is growing interest in deforming these anatomical data sets for surgical planning and surgical simulation.

Another application of volume deformation is motion tracking and fitting. The work in this area is mainly in the computer vision literature rather than the computer graphics literature. This work attempts to match observed or measured deformations and simulate them. As with surgical planning, the main distinction between this work and surgical simulation is that interactive speeds are not as important; however, exact accuracy is important. Some of these methods use physically-based deformation techniques while others use free-form deformations. Most of these techniques involve mathematics not directly related to the work in this dissertation. Terzopoulos and Metaxas have published a number of articles in this area that use their work in physically-based deformation discussed in Section 2.3.1. In [87], they describe a physically-based approach for computing deformations based on superquadrics. The superquadrics allow both global and local deformations. They place an initial superquadric on top of the model they are fitting. In areas where the superquadric does not fit well, forces are applied based on the fit, and a physically-based simulation deforms the superquadric to achieve a better fit. In [59], they extend this technique to track dynamic objects that are deforming.

In Section 2.4.1, a brief description of surgical simulation systems related to the work presented in this dissertation is given. Section 2.4.2 presents work in the area of surgical planning involving deformations.

2.4.1 Surgery Simulation

The earliest work involving surgery in the computer graphics literature appears to be from 1977 [33]. There are very few references involving surgical applications before the 1990s, mainly because computers were not capable of performing the necessary calculations fast enough to be useful. In the last few years, this has become a hot research topic as the computational speed of computers continues to increase.

Most systems for surgical simulation are designed for a specific part of the body and are not general purpose. The main reason the simulations are limited to specific parts of the body is that it is much easier to develop a model tailored for only a portion of the body. Cover et al. [16], for example, have developed a system that supports probing of a gall bladder. Most existing systems also perform the simulation on polygonal data; however, most medical data is from MRI or CT scanners and is thus stored as a volumetric data set. Systems that operate on polygonal data must first create polygonal surfaces from the volumetric data using methods such as the well-known marching cubes algorithm [54]. The problem with performing the simulation on surfaces is that information about the data inside the surfaces is lost. Höhne et al. [34][68] list a number of the requirements for surgery simulation and planning. For realistic dissection, information about the interior of the object and free-form cutting are necessary. They point out that these requirements imply the need for volumetric models. They represent the anatomical data as voxels, but do not currently have any method for deforming the data (they state their models are static [34, p. 30]).

Cover et al. [16] have developed a system for simulating laparoscopic gall bladder surgery. The anatomical models for their system are created using a modeling package. The initial model is defined as a set of surface patches and then polygonized. A set of points on the surface for calculating the simulation is distributed based on the curvature of the surface so that there are more points in areas of high curvature. Virtual springs connect each simulation point to its neighbors. The force at each simulation control point is calculated by summing the forces applied by the springs connected to the control point. A force is also applied based on the distance to the initial location of the point. The force at each point, along with the force at a random nearby point, is used to determine the direction the point should move to minimize the energy of the system. The locations of the points are updated and the next image is rendered according to the new control points.

This system has a number of limitations. The system is tailored for the gall bladder so a new polygonal model and set of control points must be defined for each different part of the body that is to be simulated. This is especially problematic if one wishes to rehearse a surgery on anatomical data for a specific person obtained from MRI or CT data. In their implementation, only the gall bladder deforms; the organs around it are stationary. This is not a general problem with their approach; however, more computation would be required to deform the other organs. A large number of control points are needed due to the curvature of the gall bladder; their model of the gall bladder and a few organs around it requires 15,000 polygons. And as mentioned previously, simulating and visualizing the deformations below the polygonal surface is impossible because of the surface model.

Sagar et al. [73] present a model of the eye and an environment for simulating a few deformations to the eye that can be used in a surgery simulator. Their model uses a polygonal surface representation for the eye in combination with texture maps to reduce the resolution of the polygonal model needed to achieve a realistic appearance. They point out that the cornea is anisotropic and can have large nonlinear deformations. They use a finite element model capable of modeling these nonlinear deformations. No details of the relation between the geometry of the eye model and the finite element model are provided. The only deformation shown is a simple depression of the cornea.

Cotin et al. [15] report the initial development of a surgery simulator based on volumetric models. They use a finite element method composed of tetrahedral elements. The data they are deforming is polygonally based. Forces are applied in a non-linear manner based on the displacements. A limit on the displacements is specified; for displacements smaller than the limit, one spring strength is applied, and for displacements larger than the limit, a different spring strength is applied. The example provided in the paper is a deformation of a polygonal model of a liver.

The advantage of this technique is the use of a volumetric finite element method; however, their models are still polygonal. Details on how the polygonal model relates to the volumetric finite element model are not provided. The authors report some simplifications to their finite element solver to make it faster, but do not provide any timing data. The size of the finite element grid for the liver model, the number of polygons in the model, and the details of setting up the finite element model with the polygonal model are not provided. The use of polygonal models limits the approach to surface deformations. In [15], Cotin et al. discuss that the density of the tetrahedral simulation grid is defined by the resolution of the boundary triangulation. They indicate they would like to write a tetrahedralization algorithm that will allow fewer tetrahedra in the middle of the object than on the surface. This solution seems to imply that they are only interested in surface deformations and do not consider interior deformations important.

Bro-Nielsen and Cotin [6] (also Bro-Nielsen [7]) present an approach similar to that described by Cotin [15] discussed in the preceding paragraphs. The main extension is the use of condensation to reduce the number of nodes in the finite element grid. The condensation process removes the interior nodes of the grid; only the nodes on the surface are retained. They report performance for their system on a Silicon Graphics ONYX with four MIPS R4400 processors using the SGI Performer library for rendering. An update rate of twenty frames per second for a finite element model with 250 nodes is reported; the number of polygons in the model is not reported.

They state that the goals of the work are to calculate the deformation in the smallest amount of time possible and to produce models that are "visually convincing", not necessarily completely physically correct [6]. They also point out that eventually they would like to be able to make cuts in the model, which will require changing the topology of the model (since they use surface models). They note that volume models are necessary to provide defined interiors.

Kuhn et al. [43] briefly describe a system for surgical training without specific details for most of the system. They use surface models (polygonal and NURBS) along with a mass spring system for deforming the data. Details of how the simulation and surface models are connected are not provided. From one of the figures, it appears that stretch, squash, and cutting deformations are supported, but no details are given.

2.4.2 Surgical Planning

There is a large body of literature in the area of surgical planning. Many of these papers do not include the simulation of deformations. In the remainder of this section, only work that involves simulation of deformations for surgical planning is discussed. Work by two groups whose systems do not currently contain any deformation simulation is mentioned here because they support complex general anatomical models and plan to include deformation [34][68][70]. The main distinction between surgical planning and surgical simulation is that interactive speeds are not as important for planning; however, exact accuracy can be more important.

Girod et al. have published a number of papers in the area of craniofacial surgery simulation [38][39]. The purpose of their simulator is to allow multiple surgeries to be simulated so that the operation that most improves the patient's jaw functionality and appearance can be found. Their approach involves polygonal models and a physically-based simulation. Keeve and Girod have applied the layered mesh technique used by Terzopoulos and Waters [86] (discussed in Section 2.3.1) to their surgical planning system. First, a polygonal mesh of the bone is extracted from a CT scan, and a polygonal mesh of the skin is obtained using a Cyberware 3D laser scanner. They apply a mesh reduction algorithm to reduce the number of points in the mesh. A mesh representing the muscles is added between the skin and bone, and these three meshes are connected together. The system allows the bone to be manipulated as it would be in surgery, and then the physically-based simulation is applied to determine the resting position the modified bone and tissue will reach. They have both a basic mass spring system and a more sophisticated finite element solver. They report timings of about one minute for the mass spring system and about ten minutes for the finite element method to reach an equilibrium point for 3,080 tissue elements on an SGI High Impact.

Koch et al. [40] use a technique very similar to that of Keeve discussed in the previous paragraph. They use a mesh for the skin and a mesh for the skull connected by springs, and do not include the muscle mesh layer as Keeve does. Koch et al. use a finite element method to calculate the results of the simulated facial surgery and report timings of about 28 minutes for a skin mesh of approximately 3100 triangles on an SGI Indigo2 with a 200 MHz R4400 processor.

2.4.3 Summary

Because of the requirements of surgery simulators, physically-based methods are most appropriate. This dissertation presents a physically-based deformation method that solves many of the problems existing physically-based methods have for surgery simulation. Most existing algorithms are too slow to achieve the interactive rates required by surgery simulators and do not support volumetric models for rendering the deforming data. This dissertation presents a fast, numerically stable, physically-based simulation algorithm that uses volumetric models and is tailored for the types of deformations required by surgery simulators.

CHAPTER 3

SLICE BASED VOLUME RENDERING

This chapter presents a new volume rendering algorithm for irregular grids. The method is general purpose and can be easily applied to grids composed of polyhedra. The method slices a grid with a set of parallel planes, producing a set of polygonal meshes. These meshes can be rendered with polygon rendering hardware to achieve fast volume rendering. The rest of this section discusses why it is difficult for existing algorithms to render grids fast enough for interactive use.

The computational speed of computers continues to increase rapidly; however, most computers are still not capable of rendering commonly used volumetric data sets (on the order of 256³ voxels) at interactive rates using existing volume rendering algorithms. A data set of size 256³ has over 16 million voxels. A conservative estimate for the number of operations required to calculate the contribution of each cell is 100 operations. For object space methods, the necessary operations are:

• transforming the voxels from object space to image space

• determining a depth ordering

• calculating the color and opacity values for each pixel the voxel affects

• compositing the color values with the current image

For image space methods, the necessary operations for each pixel are:

• intersecting the sight ray with the grid

• sampling and interpolating the values for the intersected voxels to obtain a color and opacity

• compositing these values with the current value for the pixel.

Typically, each voxel is intersected by more than one sight ray, so the number of operations per voxel is multiplied by this amount. Assuming 100 operations per voxel, over 1.6 billion operations are required to render an image of the data set. This is too many operations for most computers today to render even one image per second, and certainly too many to render images at the rate (fifteen to thirty frames per second) required for true interactive manipulation of a data set.

In addition to faster processors, special purpose rendering hardware to increase rendering speed is becoming more affordable and more prevalent in computers today. All of the rendering hardware for common workstations and personal computers is designed for rendering polygonal models. Special purpose computers for direct volume rendering exist and more have been proposed [67], but they are neither common nor cost effective in general.

In this chapter, a new method for fast volume rendering that can take advantage of existing polygon rendering hardware is described. By taking advantage of the hardware, the total number of operations is split between the main CPU and the graphics hardware. Also, since the hardware is tailored for specific operations, these operations can be performed much faster than they could be with the main CPU. Some initial results of this work are presented in [106]. The basic algorithm is a general purpose algorithm that can handle grids composed of polyhedral cells. If more information about the grid is known, such as that the cells are all tetrahedra or convex hexahedra, then optimizations can be included to reduce rendering times even more.

3.1 Overview of Slice Based Method

The rendering algorithm described here is designed to take advantage of the capabilities of existing graphics hardware available on many computers; however, if special purpose hardware is not available, the functions can be implemented in software. The rendering hardware that the algorithm benefits from most is polygon-fill hardware and compositing hardware. It can also benefit from hardware for transforming coordinates and clipping. The algorithm can also take advantage of 3D solid texturing hardware; this is useful for rendering deformation of grids that are initially rectilinear, and is discussed later.

The basic premise behind this algorithm is that an image of the volume can be created by generating a sequence of 2D images that represent parallel slices of the volume and then compositing these images together to produce an image of the 3D volume. This idea of compositing 2D images to perform volume rendering has been used in other volume rendering algorithms such as the ones developed by Koyamada [41][42], Cabral [9], and Cullip and Neumann [19]. Cabral [9] and Cullip and Neumann [19] use this idea and introduce the use of solid texturing hardware for volume rendering.

The slice based algorithm presented here intersects a polyhedral grid with a set of parallel slices. Slices parallel to the view plane are used to reduce artifacts and allow fast slicing. The result of intersecting a plane with a single convex polyhedral cell is a convex polygon, so the result of intersecting a plane with a grid consisting of convex polyhedra is a polygonal mesh composed of convex polygons. For concave polyhedra, the intersections are convex or concave polygons. Each polygon of the mesh is rendered (pixels filled by interpolating the data) based on interpolated values from the polyhedral cell. The rendered polygonal meshes are then composited together to produce the final image. Available hardware for polygon filling and compositing allows this algorithm to provide fast volumetric rendering.
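As a concrete illustration, the per-pixel compositing maps directly onto standard alpha blending in OpenGL (the library the implementation uses, as noted in Section 3.2); a minimal sketch, assuming a current GL context and back-to-front slice order:

    #include <GL/gl.h>

    // Drawing the slice meshes back to front with this blend state performs
    // the "over" operation C = a_s * c_s + (1 - a_s) * C per pixel.
    void setupCompositing() {
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }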

This method can also be thought of as a reordering of the traversal of the ray casting method (pixel space) described in Chapter 2. Both methods sample the grid at the same locations; however, the order in which the samples are processed is different. Figure 3.1 shows this similarity in 2D. In both cases, the volume is sampled at the solid dots. In ray casting, all the samples for each ray are calculated before calculating a sample for another ray (i.e., each vertical dashed line is calculated before moving to the next one). For the slice based method, the samples are computed along each horizontal line before moving to the next one. The advantage of this method is that it uses the graphics hardware to sample and interpolate values in the volume. The algorithm also uses the graphics hardware to perform the rendering.

[Figure 3.1: Relation between ray casting and slice based method. The original figure labels a 2D grid, a slice, a ray, and the image plane.]

The algorithm is composed of three basic stages. The initialization stage, described in Section 3.2, involves transforming the volume based on the viewing parameters and setting up data structures to allow efficient slicing. The slicing stage, described in Section 3.3, computes the set of edge intersections resulting from intersecting the polyhedral grid with a plane. Forming the polygons from the set of intersected edges and the rendering of these polygons are described in Section 3.4. Details of the polygon forming stage for different kinds of cells are discussed in Section 3.5. Techniques for using this algorithm to render deforming data are described in Section 3.6. Some extensions to the basic algorithm are discussed in Section 3.7, and finally, some of the advantages and disadvantages of this algorithm are discussed in Section 3.8.

3.2 Initialization Stage

The first step is to transform the grid vertices from object space to image space so that the grid can be intersected with a set of planes that are parallel to the XY plane. Intersecting the grid with a plane defined by z = c (for some constant c ∈ ℝ) involves less computation than intersecting the grid with an arbitrary plane. If the hardware performs a matrix transformation whenever the polygons are drawn, then two transformations are applied to the points, which introduces some overhead; however, the hardware transformation is generally fast enough that this is not significant.

The implementation of this algorithm uses the OpenGL™ library [63] to perform the rendering. On many workstations, such as many of those manufactured by Silicon Graphics Inc. (SGI), the OpenGL™ library is required to access the rendering hardware of the workstation. OpenGL™ provides a "feedback" mode for accessing some of the values at various stages of the graphics pipeline. This feedback mode can be used to send vertices to the hardware to perform the transformation and obtain the transformed coordinates. Using the hardware to transform the vertices may not be any faster than performing the calculations in software due to the overhead necessary to retrieve the values from the hardware. Depending on the relative speeds of the hardware and the general processor (especially since most processors now perform floating point calculations directly), it may be advantageous to calculate the transformation in software. Another issue is that the precision of the transformed coordinates from the hardware may be lower than the precision of the software floating point transformation. Using the hardware transformed coordinates may introduce round-off errors that will cause problems in the slicing stage. The choice between the hardware and software transformation methods is thus dependent on the specific machine used.
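A sketch of the feedback-mode approach follows; the function name and buffer handling are assumptions, but glFeedbackBuffer, glRenderMode, and the GL_3D record format (a GL_POINT_TOKEN followed by three floats per vertex, hence four floats of buffer space each) are standard OpenGL 1.x:

    #include <GL/gl.h>

    void transformWithHardware(int nVerts, const float (*verts)[3],
                               GLfloat* feedback /* 4 * nVerts floats */) {
        glFeedbackBuffer(4 * nVerts, GL_3D, feedback);
        glRenderMode(GL_FEEDBACK);          // subsequent geometry is captured
        glBegin(GL_POINTS);
        for (int i = 0; i < nVerts; ++i)
            glVertex3fv(verts[i]);          // transformed by the hardware
        glEnd();
        int nValues = glRenderMode(GL_RENDER);  // values actually written
        // feedback[] now holds {token, x, y, z} records in window space
        (void)nValues;
    }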

Depending on the viewing parameters, some of the grid vertices may be outside the viewing frustum. The cells completely outside the viewing frustum do not contribute to the image, so vertices from these cells can be ignored in the remaining stages of the algorithm. Ignoring these cells reduces the computation time for the later stages. The only difficulty is for cells on an edge of the viewing frustum that have some vertices inside the frustum and some outside it. Vertices outside the frustum that belong to a cell partially inside the frustum must be retained because they are needed in the slicing stage. Determining which vertices come only from cells that are completely clipped, and which come from cells that are only partially clipped, can be a time consuming operation. A fast, simple, but not completely adequate solution to this problem is discussed in the polygon forming and rendering section (Section 3.4).

Once the coordinates are transformed, the next stage is to intersect the transformed grid with a set of planes. These intersections must be performed in a front-to-back or back-to-front order so that the compositing in the rendering stage is performed correctly. This implies that the grid needs to be traversed in the correct visibility order to efficiently calculate the slices. A data structure to allow the edges and cells to be accessed in a visibility order without actually sorting the cells is used. The basic idea is to create a data structure that contains exactly the set of edges and cells that are intersected by a slice.

The first step is to compute the minimum and maximum z values (z_min and z_max) of the transformed points and specify the number of slices (numSlices) to generate. The distance between the slices (Δz) is calculated based on z_min, z_max, and numSlices. A "bucket" data structure is used to assist in calculating the set of edges and cells that a slice intersects. The number of slices specified determines the number of buckets. Each bucket corresponds to the z value range from one slice to the next slice and contains a linked list of edges. Each edge is stored in the bucket corresponding to the slice that will first intersect the edge. The bucket in which an edge is inserted is specified by Equation 3.1 (where z is the minimum z value of the edge and ⌊ ⌋ is the floor function).

bucket = ⌊(z − z_min) / Δz⌋    (3.1)

Figure 3.2 illustrates how the edges are placed in buckets. In the example, there are five slices and four edges. Edges e1 and e4 are placed in the first bucket since slice one is the first slice that will intersect these edges. Similarly, edge e3 is placed in the second bucket and edge e2 is placed in the third bucket.


Figure 3.2 Example of placing edges in buckets

The exact same approach is used to place the polyhedral cells of the grid in buckets. A separate set of buckets is used to store the polyhedral cells. Each polyhedron is placed in a bucket based on the minimum z value of the polyhedron’s vertices. After the transformation of the vertices and the placing of the edges and polyhedra in buckets, the algorithm continues with the slicing stage.
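As an illustration, a C sketch of the bucket placement for edges follows (polyhedra are handled identically with their own bucket array); the Edge structure and its field names are assumptions of the example, not the actual implementation.

#include <math.h>

typedef struct Edge {
    double zMin, zMax;        /* z extent of the transformed edge */
    struct Edge *next;        /* link within a bucket's list      */
    /* ... intersection point, increments, data values ...        */
} Edge;

/* Place an edge in the bucket of the first slice that intersects it,
 * following Equation 3.1. */
void placeEdgeInBucket(Edge *e, Edge **edgeBucket,
                       double zMin, double dz, int numSlices)
{
    int b = (int)floor((e->zMin - zMin) / dz);
    if (b < 0) b = 0;                      /* guard round-off at the ends */
    if (b >= numSlices) b = numSlices - 1;
    e->next = edgeBucket[b];               /* push onto the linked list */
    edgeBucket[b] = e;
}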

3.3 Edge and Polyhedron Slicing

The slicing stage is essentially the 3D extension of the common scan-line polygon fill algorithm [23]. As the algorithm moves from one slice to the next, it updates an active list of edges (AEL) and an active list of polyhedra (APL) so that only those edges and polyhedra that the current slice intersects are processed. This eliminates intersection calculations for edges and polyhedra that are not intersected by the current slice. As the algorithm proceeds from one slice to the next, the corresponding edge and polyhedron buckets are used to update the AEL and APL.

The slicing stage starts at the first slice, or equivalently at the first bucket. Initially the active lists (AEL and APL) are empty. The edges in the first bucket are processed and added to the AEL. The maximum z value of the edge, along with the x and y values of the intersection point of the edge and slice, are stored with the edge in the AEL. The maximum z value of the edge is used to determine when to remove the edge from the AEL. If the maximum z value of the edge is smaller than the z value of the slice, the edge is not added to the AEL since the edge is not intersected by any of the slices. If the edge is intersected by more than one slice, increments for the x and y values are also stored in the AEL.

Whether more than one slice intersects an edge is determined by comparing the maximum z value of the edge to the z value of the next slice. These increments allow the intersection point of the edge and following slices to be updated by two additions at each slice. Also, the data values at the intersection point are stored with the edge in the AEL. If the edge is intersected by more than one slice, increments for the data values are calculated and stored with the edge in the AEL.

The polyhedra in the first bucket are processed in a similar manner. The APL does not need to contain the additional information about intersection values that the AEL does. If the maximum z value of a polyhedron from the bucket is less than the z value of the slice, this implies that the polyhedron is not intersected by any slices. In this case, additional slices may be desired so that contributions for every polyhedral cell are included in the final image. This extension is discussed in Section 3.7.1.

Once the first edge bucket and first polyhedron bucket have been processed, the AEL and APL contain exactly the set of edges and polyhedra that the first slice will intersect. After the polygons are formed and rendered for the first slice (as described in Section 3.4), the AEL and APL lists are updated to process the next slice. Each edge in the AEL is processed. If the maximum z value of the edge is less than the z value of the next slice, it is removed from the AEL; otherwise, the intersection point and data values are updated based on the increment values stored with the edge. Finally, the next edge bucket and polyhedron bucket are used to introduce new edges to the AEL and new polyhedra to the APL in exactly the same manner as the first buckets are processed.
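The per-slice update of the active edge list can be written compactly; the following C sketch uses assumed field names (zMax, the increments, and an interpolated data value) rather than the dissertation's actual structures.

typedef struct AELEntry {
    double zMax;                /* when to drop the edge from the list  */
    double x, y;                /* current intersection point           */
    double xInc, yInc;          /* per-slice increments                 */
    double data, dataInc;       /* interpolated data value + increment  */
    struct AELEntry *next;
} AELEntry;

/* Advance every active edge to the next slice at z = zNext: remove
 * edges that end before the slice, update the rest with two additions
 * per value, and return the new head of the list. */
AELEntry *advanceAEL(AELEntry *ael, double zNext)
{
    AELEntry **link = &ael;
    while (*link) {
        AELEntry *e = *link;
        if (e->zMax < zNext) {
            *link = e->next;      /* edge not intersected by next slice */
        } else {
            e->x += e->xInc;
            e->y += e->yInc;
            e->data += e->dataInc;
            link = &e->next;
        }
    }
    return ael;
}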

With this scenario, going from one slice to the next is equivalent to going from one bucket to the next. It should be noted that the edges in each bucket need not be sorted, as all the edges in a bucket will be considered when we process a slice, irrespective of their z values. Haimes [29] also implements volume slicing, which employs initial sorting by the minimum and maximum z values of all the cells. This requires O(n log n) time for every view change, and an extra O(log n) time is spent finding the z limits in the sorted arrays for each slice. In contrast, the algorithm described here uses O(n) time to place the cells in buckets, and a total of only O(n) time for all the slices to maintain the active cell list, irrespective of the number of slices.

3.4 Polygon Forming and Rendering

This section describes the basic steps, forming the polygons and rendering them, that complete the algorithm. Pseudo code for the entire algorithm is listed in Figure 3.3. Methods for handling the clipping problem, mentioned in Section 3.2, are discussed in Section 3.4.1. Handling different kinds of volumetric data, such as scalar data or 3D solid textures, in the polygon rendering stage is discussed in Section 3.4.2. Details of the polygon forming stage for different kinds of grids are discussed in Section 3.5.

After the slicing stage, the AEL contains the intersected edges along with the intersection point and data values for each edge. The APL contains the set of polyhedra that are intersected by the slice. By forming the polygon for each polyhedron in the APL, the polygonal mesh that represents the intersection of the plane (slice) and the grid is computed. For each polyhedron in the APL, the intersection points for the intersected edges of the polyhedron are collected from the AEL. The data values at the intersection points are also collected. The correct ordering for the points of the polygon is determined and the polygon is rendered. Since the vertices are already in screen space, an orthographic projection that maps the points onto the viewing window is loaded onto the matrix stack that OpenGL™ applies to the vertices of the polygon. Because the slices are computed in a visibility order (front-to-back or back-to-front), the compositing of the slices is performed correctly.
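For concreteness, the following OpenGL™ fragment sketches a projection and blending setup consistent with this description; winW and winH are assumed window dimensions, and the blending state shown corresponds to back-to-front compositing.

#include <GL/gl.h>

void setupSliceRendering(int winW, int winH)
{
    /* orthographic projection mapping the screen-space vertices
     * directly onto the viewing window */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, winW, 0.0, winH, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /* composite the slices back to front with the standard over operator */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}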

3.4.1 Clipping Problem

The problem with cells that are partially clipped by the view frustum (as mentioned in Section 3.2) becomes apparent during the polygon forming stage. Ideally, cells that do not intersect the viewing frustum are removed from processing before the bucket stage discussed in Section 3.2. During the transformation stage, it is easy to determine which points are inside the viewing frustum; however, some points that are outside the viewing frustum are vertices of cells that intersect the viewing frustum. The correct method is to retain the vertices of edges that are partially clipped. Determining which vertices, edges, and cells need to be retained, and which can be ignored, requires extra computation during the transformation stage. It is much easier to ignore the cells that have any points outside the frustum. The problem with ignoring these cells is that it can create holes in the slices along the boundary of the image: no contribution is included from clipped cells that do intersect the viewing frustum. The simple and fast solution is to specify a window slightly larger than the actual image size in the orthographic projection. This method does not calculate intersections and form polygons for most of the cells outside the frustum; however, depending on the size of the window specified with the orthographic projection, some polygons which are not displayed may still be calculated. Also, if a partially clipped cell is large, there still may be missing portions along the side of the image; however, for most cases this method works adequately.

/* Initialization Stage */
transform grid vertices from object space to image space
compute z_min of transformed vertices
compute distance between slices (Δz) based on user-specified numSlices
for each edge e
    z = minimum z coordinate of e
    b = ⌊(z − z_min) / Δz⌋                     /* Equation 3.1 */
    insert e into edgeBucket[b]
for each polyhedron p
    z = minimum z coordinate of p
    b = ⌊(z − z_min) / Δz⌋
    insert p into polyBucket[b]

/* Slicing Stage */
z = z value of first slice; z1 = z value of next slice; AEL = empty; APL = empty
for b = 1 to numSlices do
    for each edge e in edgeBucket[b]
        if e.zMax >= z, add e to AEL and compute intersection points, etc.
    for each polyhedron p in polyBucket[b]
        if p.zMax >= z, add p to APL

    /* Polygon Forming Stage */
    for each polyhedron p in APL
        pointList = empty
        for each edge e of p that is in AEL
            add e.intersection to pointList
        form polygon poly from pointList
        /* Rendering Stage */
        render polygon poly

    /* update intersection points for next slice */
    for each edge e in AEL
        if e.zMax > z1, e.intersection = e.intersection + e.increment
        else remove e from AEL
    z = z1
    z1 = z + Δz

Figure 3.3 Pseudo code for rendering algorithm

If there are holes along the edge of the image, the size of the window specified in the orthographic projection can be changed to prevent them.

3.4.2 Rendering Different Data Types

Volumetric data sets from scientific visualization applications typically have both scalar and vector data at the grid points. For example, computational fluid dynamics (CFD) simulations often produce values for temperature, pressure, and velocity at the grid points. The scalar values (temperature and pressure) can be easily represented with still images. Sometimes vector values (e.g., velocity) are represented as still images, but more often, an animation is used to visualize the vector values. To display the scalar values, a color mapping is applied to translate the scalar values to a color range. This is usually done in an intuitive manner. For example, low temperature values are mapped to blue color values and hot temperature values are mapped to red color values. The values between are mapped to colors that blend from blue to red.

Rendering these scalar values requires interpolating values for locations between the grid points using the values specified at nearby grid points. In the slice based algorithm, there are two interpolation steps. A data value must be interpolated at the edge intersection points during the slicing stage. This can be done by linearly interpolating the values at the two grid vertices that define the edge. More sophisticated interpolation methods can be used if desired. Next, the data values must be interpolated across the polygon as it is filled. Most hardware polygon-fill implementations only allow colors to be interpolated across the polygon. They require colors to be specified at the polygon vertices and these colors are smoothly interpolated across the polygon as it is filled. The color interpolation is usually done in the RGB color space. Thus, the data values must be mapped to color values at the polygon vertices and the color values are interpolated in RGB space. In most cases, this produces different results than the desired method of interpolating the data value across the polygon and then finally applying the mapping to the color at each pixel.

For polygon fill routines that support 1D textures, the desired method can be implemented. Each data value is mapped to a color. A 1D texture containing all the colors for the data values is created based on the order of the data values. At each grid vertex, the texture coordinate corresponding to the color value for the scalar value at that grid vertex is stored. During the slicing stage, the texture coordinate is interpolated at the edge intersection points. The polygons are rendered by specifying the interpolated texture coordinate with each vertex, instead of a color. The polygon fill routine interpolates the texture coordinates across the polygon and then maps the texture coordinate to a value using the 1D texture, producing the desired results.
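A minimal OpenGL™ sketch of this technique follows; the 256-entry color table and the [0, 1] normalization of the data values are assumptions of the example.

#include <GL/gl.h>

/* Load the transfer function as a 1D texture so the hardware maps an
 * interpolated texture coordinate to a color at every pixel. */
void setupColorMapTexture(const GLubyte colorTable[256][4])
{
    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, colorTable);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glEnable(GL_TEXTURE_1D);
}

/* Each polygon vertex carries the interpolated texture coordinate s
 * (the data value mapped to [0,1]) instead of a color. */
void emitVertex(float s, float x, float y, float z)
{
    glTexCoord1f(s);
    glVertex3f(x, y, z);
}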

Volumetric medical data sets are typically represented by a sequence of 2D images. These images are usually obtained from an MRI, CT, or ultrasound device. Together, the slices represent a 3D volume. Some of the high-end graphics workstations (such as the SGI Reality Engine [1]) support 3D textures. On these machines, the set of 2D slices can be stored as a 3D texture in the hardware. Usually, a mapping is applied to the raw volumetric data to create RGBA values for storing in the texture. On many machines, the texture memory is not large enough to hold large data sets. In this case, the data can be split into subvolumes and these subvolumes loaded during the rendering in a manner that provides a correct depth ordering. For regular data sets, the slice based method described in this chapter can be used to render these volumes with just one cubical cell. At each of the eight vertices, the texture coordinate for the corresponding boundary of the volume is stored. When the cube is sliced, the three texture coordinates are interpolated at each edge intersection. For each slice, one polygon is drawn. The hardware interpolates the texture coordinates across the polygon and, for each pixel, maps the texture coordinate to the corresponding point in the texture to obtain the correct color. This is essentially the method proposed by Cabral [9] and Cullip and Neumann [19].
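The single-cell technique can be sketched as follows. glTexImage3D became a core call in OpenGL 1.2; on the 1997-era hardware discussed here it was available through the EXT_texture3D extension (glTexImage3DEXT). The cubic volume dimensions and RGBA layout are assumptions of the example.

#include <GL/gl.h>

/* Load the stack of 2D slices as one RGBA 3D texture. */
void loadVolumeTexture(int dim, const GLubyte *texels)
{
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, dim, dim, dim, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, texels);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glEnable(GL_TEXTURE_3D);
}

/* For each slice polygon, send the interpolated 3D texture coordinate
 * with each vertex; the hardware samples the volume at every pixel. */
void emitSliceVertex(const float tex[3], const float pos[3])
{
    glTexCoord3fv(tex);
    glVertex3fv(pos);
}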

On machines that do not support 3D textures, a grid at the same resolution as the volume could be used and color values stored at each vertex. As before, 1D textures could be used to correctly interpolate across the polygons. Obviously, this will be much slower than using hardware 3D textures, since many more cells need to be sliced and more polygons drawn. On machines without 3D texture support, this method is not practical for regular grids and other volume rendering algorithms are more appropriate.

3.5 Polygon Forming Details

The complexity of the polygon forming algorithm depends on the kinds of polyhedral cells that form the grid. In many cases, the cells of a grid are composed of the same kinds of primitives (e.g., all tetrahedral cells or all hexahedral cells). For convex cells, the correct ordering of the points can be determined by forming the convex hull of the set of intersection points; however, this method is expensive. A better method for convex cells is presented in Section 3.5.1. Methods for handling concave cells are discussed in Section 3.5.2.

3.5.1 Convex Cells

If the grid is composed of only convex polyhedral cells, then each polygon resulting from slicing through a cell is a convex polygon. This is because the intersection of two convex sets is convex. If a grid contains polyhedral cells which are concave, then the polygons may be convex or concave.

Ordering the intersection points correctly is easy if it is known that the polygon is convex. A simple convex hull algorithm can be executed on the points; however, this is expensive. A much faster method can be used by noting that the ordering of the points resulting from the intersection of a convex polyhedron and a plane can be determined directly from the edges of the polyhedron that are intersected. Every slice that intersects a specific set of edges produces the same ordering of the vertices to form the convex polygon. For tetrahedral cells, a plane can intersect three or four edges of the tetrahedron. See Figure 3.4 for the two cases that encompass all the topologically unique cases. For convex hexahedral cells (where each face of the hexahedron is formed by a quadrilateral), a plane can intersect three, four, five, or six of the edges. See Figure 3.5 for the five topologically unique cases. For the hexahedron, there are two topologically different ways a plane can intersect four edges and one way for it to intersect three, five, or six edges.

Figure 3.4 Topologically unique cases of a plane-tetrahedron intersection

Since the point ordering that forms the convex polygon is determined by the edges that are intersected, tables can be used to specify the ordering of the points so that the computational expense of the convex hull algorithm is avoided. These tables can be generated for any convex polyhedron, which allows the ordering to be calculated quickly and easily. For each of the five topologically unique hexahedron cases, there are 24 ways the hexahedron can be intersected in this manner. Thus, a table of 120 entries handles all the possible cases.

Figure 3.5 Topologically unique cases of a plane-convex hexahedron intersection
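As an illustration of such a table, the following C fragment sketches the tetrahedral case; the edge numbering and the handful of entries shown are constructed for the example (winding direction is not verified here), not taken from the implementation.

/* Edge numbering assumed for the sketch: e0=(v0,v1), e1=(v0,v2),
 * e2=(v0,v3), e3=(v1,v2), e4=(v1,v3), e5=(v2,v3).  The table index is
 * a 4-bit mask with bit i set when vertex i lies below the slicing
 * plane; each entry lists the intersected edges in boundary order. */
typedef struct {
    int numEdges;       /* 0, 3, or 4 intersected edges   */
    int edges[4];       /* edge indices in polygon order  */
} TetCase;

static const TetCase tetTable[16] = {
    [0x1] = {3, {0, 1, 2}},       /* v0 below: triangle          */
    [0x2] = {3, {0, 3, 4}},       /* v1 below                    */
    [0x3] = {4, {1, 3, 4, 2}},    /* v0, v1 below: quadrilateral */
    /* ... the remaining cases follow by symmetry: mask m and its
     * complement 15 - m intersect the same set of edges; masks
     * 0x0 and 0xF produce no polygon ... */
};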

3.5.2 Concave Cells

For grids that contain concave cells, determining the point ordering for the polygons is more difficult. The intersection of a plane with a concave polyhedron may yield a convex or a concave polygon. It is possible for an arbitrary concave polyhedron to produce polygons with holes when it is sliced with an arbitrary plane (see Figure 3.6 for an example); however, these kinds of polyhedra rarely, if ever, occur in scientific visualization applications. The method presented in this dissertation handles polyhedra that produce one simple polygon or multiple disjoint polygons when each of the polyhedra is intersected with a plane.

The diagonal edges on the front face go into the polyhedron. A slice parallel to the back face that intersects the diagonal edges generates a polygon with a hole.

Figure 3.6 Polyhedron that yields a polygon with a hole when sliced

Some applications use cells whose faces are not planar polygons. An example of this is the CFD method described by Baker and Pepper [3]. Also, the method for deforming volumetric data described in Chapter 4 starts out with rectangular prism-shaped polyhedra and then moves the vertices to generate the deformations. As soon as a vertex moves, the faces of the cells are generally not planar, so the same situation arises. Each cell is essentially a warped cube. These kinds of grids cause problems for ray casting methods, and some object space methods, because the boundaries between the cells are not easily defined. The slice based rendering method presented here can handle these kinds of cells without additional computations.

Determining the boundary between these cells is a somewhat arbitrary decision. Four points that do not lie in the same plane define a hyperbolic paraboloid ([71], pp. 415-416). The face can also be viewed as two triangles (by connecting two diagonally opposite points on a face). The choice of which method to use to interpret the faces is usually an arbitrary one. In most cases, the four points are relatively close to forming a planar polygon, so the interpretation method for the face does not affect the results significantly.

Defining the intersection of a plane with these kinds of cells (without well-defined faces) as the polygon formed by the set of points generated from the edge intersections allows fast generation of the intersection and produces a reasonable result. Again, the exact definition of the intersection depends on how the face is defined. An edge of the polygon generated by the intersection implies that the edge is part of the face of the polyhedron; however, for different planes, the definition of the face changes. For example, in Figure 3.7, the edge formed by points a and b does not necessarily intersect the edge formed by points c and d. This implies that as the viewpoint for the data set changes, there can be small variations in the pixel values calculated along these faces; however, in general, these variations are so minor that they cannot be detected. This does not cause problems such as holes between neighboring cells, since the same plane intersects both neighboring cells and they share the edge along the boundary.

Edge ab does not necessarily intersect edge cd

Figure 3.7 Different slices through cell imply different points for the faces

A problem that can occur when the faces are not planar is that multiple disjoint polygons can be created by the slice. See Figure 3.8 for an example of this. Clearly, this cannot be handled by the cases in Figure 3.5; however, if these six edges are intersected, the two triangles formed will always be the same. Note that this choice of polygons essentially imposes a triangulation on the face of the cell. The bold/wide edges in Figure 3.9 show the triangulation that is imposed. If the other triangulation of the top face is chosen, there is a hole in the planar slice through the cell. In order for the grid cells to match up, the cell above this cell must have the same triangulation. Figure 3.10 shows an exploded view of the two cells matching up along the imposed diagonal. Again, if the other triangulation is used for one of the cells, the two cells do not match up and create a hole in the grid.

Figure 3.8 Slice through concave polyhedra

Figure 3.9 Triangulation imposed on face of cell

Figure 3.11 shows how the polygons generated by slicing through the two cells fit together without any overlap and without any holes. The slice through the bottom cell in Figure 3.11 generates the same two triangles as in Figure 3.9. The slice through the top cell intersects six of the cell's edges. It intersects the four edges that make up the bottom face of the cell, plus two diagonally opposite vertical edges. From these six edges, one polygon is formed (the cross-hatched polygon in Figure 3.11).

Figure 3.10 Exploded view of cells matching up


Figure 3.11 Cells and polygon slices matching up

Up until now, the slice through the polyhedron has been viewed as intersecting a set of edges; it can also be viewed as a plane that separates the vertices of the cell into two sets. The information about which side of the plane each vertex is on specifies which edges are intersected. If an edge has one vertex on one side of the plane and one vertex on the other side of the plane, that edge is intersected. This is the same method that the marching cubes algorithm [54] uses to determine where an isosurface passes through a voxel. If one vertex has a data value higher than the specified isovalue and one vertex has a lower value, then the isosurface intersects that edge. If the plane slicing through a cell is viewed as separating the vertices into two sets, it is clear that the marching cubes tables can be used to specify the ordering of the polygon vertices. Unlike the marching cubes method, the polygons do not need to be triangulated. The marching cubes algorithm requires triangles because it is generating an isosurface and the polygons may not be planar unless they are subdivided into triangles. Since the slice based method described in this chapter generates the polygons in a planar slice, the polygons are always planar. It is advantageous not to triangulate the polygons, since triangulating requires sending more polygons to the hardware and each vertex is sent more times, resulting in additional overhead. On SGI hardware, the rendering is the bottleneck of the algorithm, so reducing the number of polygons sent through the rendering hardware results in a significant decrease in rendering time.

Quantitative results of this are discussed in Chapter 5.
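The vertex classification that indexes such tables is inexpensive; the following C sketch (with assumed parameter names) builds the bit mask for a cell against the slice plane z = c.

/* Build the lookup mask: bit i is set when vertex i lies below the
 * slicing plane z = c.  vertexZ holds the transformed z coordinates. */
unsigned int computeMask(const double *vertexZ, int numVerts, double c)
{
    unsigned int mask = 0;
    for (int i = 0; i < numVerts; ++i)
        if (vertexZ[i] < c)
            mask |= 1u << i;
    return mask;     /* indexes the point-ordering table for the cell */
}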

3.6 Rendering Deforming Data

This rendering method can take advantage of polygon-fill and texture mapping hardware to quickly render grids that are dynamically changing. The approach is to change the shape of the polygons without changing the texture coordinates; this produces a deformed image. For example, if a polygon is stretched, the image that is texture mapped onto that polygon also stretches. In Figure 3.12, a circle is texture mapped onto the square. When the square is stretched in the horizontal direction, the circle is also stretched horizontally, resulting in an oval. Wolberg [101] describes the theoretical background for the sampling and filtering required to reduce artifacts caused by warping images. These computations can be expensive and complicated to implement. Fortunately, most texture mapping hardware performs the resampling calculations; however, the hardware uses a simplified version of the calculations, so they are not completely theoretically accurate. OpenGL™ and SGI Reality Engine [1] hardware support mip-mapped textures to reduce artifacts caused by inaccuracies in the resampling and filtering.

In order to use this method to produce local deformations, the polygon must be tessellated to a level that corresponds to the localization of the deformations desired. The same technique can be used in 3D. The 3D volume is subdivided into a set of polyhedra that can deform, and the renderer slices through these cells. Kurzion and Yagel [45] also tessellate their model to allow localized deformations. Using the deflectors they define, they are able to generate fast deformations and also use the texture mapping hardware to render the deformations. The deflector method works by adjusting the texture coordinates rather than adjusting the polygon vertex coordinates. Controlling the deformations in this manner is more difficult, since essentially an inverse calculation is necessary to produce a specified deformation. It is easier and more natural to adjust the vertex locations rather than performing an inverse calculation to get the correct texture coordinate for a specified deformation.

Figure 3.12 Warping an image by stretching the rectangular polygon

3.7 Extensions

The method and data structures of the slice based method allow a number of extensions. The algorithm described in this chapter performs regular slicing, where the distance between slices remains constant. A possible problem is that small cells may lie between two consecutive slices, and thus are not sliced and do not contribute to the image. Since the cells are small, they do not contribute a significant amount to the image; however, an accurate and robust algorithm should produce an image that contains the contributions of all the cells. Section 3.7.1 describes a solution to this problem. Another problem is that for large data sets, interactive frame rates are not possible while rotating and slicing the data. A common technique is to display a lower quality image while the view is changing. Once the desired viewpoint is reached, a higher quality image is displayed. Two extensions using this idea to allow faster interaction are presented in Section 3.7.2 and Section 3.7.3.

3.7.1 Adaptive Slicing

The slice based algorithm can easily be extended to ensure that every cell is processed, or to limit the number of cells that are missed. During the phase when the cells are placed in buckets, it is easy to determine whether or not a cell will be sliced. If the maximum z value of the transformed cell is less than the z value of the slice corresponding to the bucket it is placed in, the cell will not be sliced. This can be detected and additional slices added if desired.

Typically, the user specifies a number of slices to use and one bucket is created for each slice. The simplest way to support adaptive slicing is to create multiple buckets for each slice that the user initially specifies. The normal slicing procedure is used, except that all the buckets corresponding to a slice are first processed to determine if any cells have maximum z coordinates less than the next slice. If some of the cells do, then these cells would not be sliced. A slice is added at a bucket before the maximum z coordinate of the first cell that would have been missed. The opacity used to render the polygons in the slice must be adjusted since the distance between the slices changes. After the slice is processed, the algorithm continues by using the next bucket to add cells to the active polyhedron list. Essentially, slices are added after buckets wherever they are needed to ensure that cells are not missed.

Using a pre-set number of buckets limits the total number of slices that can be created and may still allow cells to be missed; however, if the goal is to ensure every cell is intersected, the minimum distance between slices necessary to do this can be determined from the data. Based on this distance, the number of buckets necessary to achieve this is created. The main purpose of the adaptive slicing method is to allow a larger distance between slices if all the cells across a slice are large, and a smaller distance between slices in areas where necessary. Unfortunately, the size of the cells may vary across a slice and many additional slices may be needed between each of the initial slices. A possible area for future work is to only slice the cells that are missed, rather than all the cells across the slice. The problem with only slicing some of the cells is that it complicates the opacity accumulation and cannot be easily implemented when the hardware is used to do the compositing.

3.7.2 Progressive Slicing

A common technique to provide more interactive rendering is to first generate a lower quality image and then incrementally improve it. The slice based rendering method presented here can easily be extended to support incremental refinement. By first slicing through the grid with a set of coarsely spaced slices, a rough image of the entire data set can be generated. After this is displayed, the user can choose to change the view, or additional slices between the original slices can be calculated and added to the image to improve its quality. The drawback of this method is that in order to perform the compositing correctly, all the slices must be redrawn, not just the slices that are added. This requires storing the polygonal meshes for each slice so they can be redrawn quickly without recalculating them. It also increases the total rendering time to render an image with a specified number of slices, but does allow quicker viewing of a lower quality image. Another option to allow faster interaction is to draw the slices in a front-to-back order directly to the screen (instead of the common double-buffer scheme that is used to reduce flicker). With the single-buffer method, a user can choose to stop the slicing and change the view without having to store all the slices.

3.7.3 Stored Slices

The same method of storing the slices described above can be used to provide faster interaction for changing the view, but with a loss of quality. Once a set of slices is generated and stored, these slices can be viewed from different directions. This method is not completely satisfactory because if the view changes by 90 degrees, the stored slices are now perpendicular to the new view, so nothing appears. Also, artifacts appear at the edges because it is easier to see the discrete slices when viewed at an angle; however, for positioning the view, this method produces reasonable images and provides fast interaction. When the user stops changing the view, the data can be resliced with a set of slices that are parallel to the viewing plane.

In order to correctly rotate the stored slices, the polygon vertices must be transformed back to object space using the inverse of the transformation matrix that was originally applied. Next, the transformation matrix for the new viewpoint is applied. Using OpenGL™ and the graphics hardware, the inverse transformation can be applied without much additional computation. The only extra computation required is to compute the inverse of the 4 × 4 viewing matrix when the slices are computed. OpenGL™ also applies a viewport transformation and normalizes the z values from the range [-1, 1] to [0, 1] during the original transformation. Multiplying the inverse matrix by the matrix defined in Equation 3.2 (where vpw and vph are the viewport width and height, respectively) creates one matrix that removes the viewport normalization and also transforms the points back to object space.

    ⎡ 2/vpw    0      0   -1 ⎤
    ⎢   0    2/vph    0   -1 ⎥     (3.2)
    ⎢   0      0      2   -1 ⎥
    ⎣   0      0      0    1 ⎦

The calculation of the inverse matrix only needs to be performed once, when the slices are created. OpenGL™ allows the specification of two matrices, the projection matrix and the modelview matrix. Before applying the transformation to the points, these two matrices are multiplied together. The inverse matrix described above is placed in the modelview matrix and the new viewing transformation matrix in the projection matrix. When the vertices are transformed, the graphics hardware applies the inverse transform and the new transform without any additional computation. Using this technique, the stored slices can be displayed for a sequence of views without any additional computation other than the 4 × 4 inverse matrix computed when the slices are created.
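A short OpenGL™ sketch of this matrix arrangement follows; the matrices are assumed to be stored in OpenGL's column-major order, and the function name is illustrative.

#include <GL/gl.h>

/* Redisplay stored slices for a new view: the new viewing transform
 * goes on the projection stack and the combined inverse (original
 * transform times the Equation 3.2 matrix) on the modelview stack, so
 * the hardware applies both in a single pass. */
void setStoredSliceMatrices(const GLfloat newView[16],
                            const GLfloat inverseOriginal[16])
{
    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(newView);
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(inverseOriginal);
    /* ... re-issue the stored slice polygons here ... */
}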

3.8 Advantages and Disadvantages of the Method

The main advantage of this algorithm is that it produces good quality images of irregular grids extremely fast by taking advantage of polygon rendering hardware. Images and timing results are provided in Chapter 5. All the object space methods described in Chapter 2, with the exception of Koyamada [41][42], require a depth sorting of the cells; because irregular grids cannot always be depth sorted, this is a problem for these algorithms. The method presented here does not require depth sorting, but instead uses the buckets to avoid the sorting. This also results in a time savings, since sorting requires O(n log n) time, whereas placing the edges and cells in buckets only requires O(n) time for n cells.

This method allows different kinds of grids to be handled easily. The only difference between the algorithm for tetrahedral cells versus polyhedral cells is the tables used to form the polygons based on the edge intersections. This allows a grid consisting of different kinds of cells to be handled easily. Concave grids complicate image space algorithms because it is more difficult to determine the last cell the ray intersects. Concave grids do not require any additional computation or complicate the slice based algorithm.

The only disadvantage of the algorithm is that it does not easily allow view dependent shading, such as specular highlights, to be included. Three different methods for interpolation across the polygons are discussed in Section 3.4.2: interpolating colors across the polygons (first interpolating the data values from the cell vertices to generate a vertex color for the polygon), using 1D textures to interpolate a data value across the polygon, and using 3D textures to interpolate a 3D texture coordinate across the polygon. All these methods require that the colors corresponding to a data value or texture coordinate be defined statically, which does not allow view dependent information to be used. Of course, the colors could be calculated for each view, but this would defeat the purpose of the fast renderer, since another rendering method would be required to calculate these colors. Van Gelder and Kim [92] present a method for including shading information with solid texture based rendering, based on the idea of changing the texture for any change in view or lighting parameters. They report it increases rendering times by approximately a factor of ten.

Eventually, hardware capabilities may increase to allow view dependent shading. In order to calculate view dependent shading such as specular highlights, it is necessary to use a normal value at each pixel and perform an additional calculation to affect the color value at that pixel based on the normal. Currently, the texture coordinates are interpolated across the polygon and the color is obtained by mapping to the corresponding element in the texture. Ideally, the 3D texture capabilities of the hardware would also store a normal value along with a color value. The color and the normal would be used in a lighting model such as Phong's to produce a shaded color for each pixel. See [23] for a description of illumination models. This would allow fast view dependent shading to be incorporated into the algorithm described in this chapter.

CHAPTER 4

FAST PHYSICALLY-BASED VOLUME DEFORMATION

Section 2.3 in Chapter 2 describes the basic approaches to deforming data in the computer graphics field. The goal of deformation as a modeling tool is to allow the greatest flexibility for the user with an intuitive and easily specified set of controls. For modeling and similar types of applications, many deformation techniques can be used (e.g., free-form deformations and physically-based methods). For applications such as surgery simulation and planning, the goal is realistic interaction with a model representing all or a portion of the human anatomy. For these applications, a physically-based method is the most appropriate deformation technique. This chapter presents a fast method for physically-based deformation that can be used in many applications. The method is presented using surgery simulation as the application.

Most existing surgical simulation methods use a finite element method (FEM) to perform the simulation. The basic approach of the finite element method is to represent a continuous object as a set of discrete nodes (elements) with connections between the nodes. The simulation method operates on these nodes. Information about the object between nodes must be interpolated from nearby nodes. For surgery simulation, the simulation calculates new locations for the nodes to represent deformations to the tissue. To represent incisions, connections between the nodes must be broken. Based on these changes to the nodes, a new rendering of the object must be generated. There is a trade-off between the accuracy of the simulation and the amount of computation required by the simulation. Both the number of nodes and the simulation algorithm control the accuracy of the method. For less robust and fast simulation algorithms, more nodes are generally necessary to achieve the same accuracy level as a more robust and computationally expensive simulation method.

As discussed in Chapter 2, existing surgery simulators represent the anatomy as a polygonal model; however, anatomical data is usually acquired in volumetric form by CT or MRI scanners. To use actual CT or MRI data, existing methods must first generate a polygonal model using an algorithm such as marching cubes [54]. The finite element nodes are usually the vertices, or a subset of the vertices, of the polygonal mesh with the connections between the nodes defined by the edges. This can result in large variations in the distance between nodes which can cause numerical instability problems for the simulation program. It also does not generate connections across the interior of objects which are necessary to simulate deformations; these additional connections must be added procedurally by user specification. In this chapter, a fast physically-based simulation method that does not suffer from these problems is presented.

4.1 Overview of Physically-Based Simulation

The approach taken in this work is to develop a practical method rather than a theoretical model. The design constraints imposed are that the simulation run at interactive, or close to interactive rates, that it not suffer from numerical stability problems, that it support varying material properties in the object, and that it can be combined with a fast rendering method to support practical use. The method is designed to support true volumetric deformation rather than just surface deformation and is designed for applications such as surgery simulation. It includes support for making cuts or incisions in the volumetric data set. Although the method is designed for surgery simulation, it is fairly general and can be used in other applications such as volumetric modeling.

Based on the design goals described above, a modified mass spring system is used for the basic simulation method. Mass spring systems are typically fast but can suffer from numerical accuracy and instability problems. To avoid these problems, some modifications are made to the integration step. These modifications do not have a physical basis, but are designed to prevent the numerical problems without sacrificing too much of the realism of the physically-based model.

The basic idea is to impose a simulation grid on the volumetric data grid. Irregularly shaped grids that fit the shapes of the different material properties could be used. The drawback is that creating this kind of grid requires the input of a user knowledgeable about the simulation method so that the grid works well with it. CFD simulations that use irregular grids require the input of a sophisticated user familiar with the CFD simulation method; entire books have been written about generating grids for CFD simulations. The simulation method described here is designed to be used by people without knowledge of the underlying simulation method. Because of this, a regular grid is used for the simulation grid.

The simulation grid typically has a lower resolution than the volumetric grid to reduce the amount of computation in the simulation. Each simulation grid point is assigned a mass based on the material properties of the surrounding volumetric grid points. The simulation grid points are connected by springs. When a force is applied to a simulation grid point, the simulation calculates the movement of the simulation grid points. Based on this change to the simulation grid, the volumetric grid is deformed accordingly and rendered. The simulation grid and its relation to the volumetric grid are described in Section 4.2. Details of the simulation algorithm are provided in Section 4.3. The two incision extensions to the simulation algorithm are described in Section 4.4. The connection between the simulation method and the fast slice based rendering method (described in Chapter 3) is discussed in Section 4.5. The advantages and disadvantages of the method are discussed in Section 4.6.

4.2 Simulation Grid Setup

Ideally, for accuracy, the simulation grid resolution would be the same as that of the volumetric data set. Unfortunately, interactive rates cannot be achieved for grids as large as common data sets (128³ to 256³). Also, large amounts of memory are needed to store all the information associated with each simulation grid point. Simulation grids on the order of 32³ or 64³ are more reasonable for the computational power and memory size of current computers. Typically, each simulation grid point represents a 2×2×2 to 8×8×8 set of points from the volumetric data set.

A mass must be assigned to each simulation grid point based on the values of the volumetric data points around the simulation grid vertex. Segmentation information from the volumetric data set can be used to assign different masses based on the different material properties of the volumetric data set. Assigning a simulation grid point a large mass implies that a large force is needed to cause the point to move. Thus, for portions of the volumetric data set that are rigid, large mass values are assigned and for portions of the volumetric data set that are flexible, small mass values are assigned. In the case of anatomical data sets, bone is assigned a large mass value, while skin and fatty tissue are assigned small mass values. The implementation described here uses an automatic process to assign masses to the simulation grid points based on the segmentation information from the surrounding volumetric data set voxels. The spring strengths can also be set based on the segmentation information; however, in practice using different mass values based on the material properties and uniform spring values produces good results.

As mentioned above, each simulation grid point typically represents multiple grid points in the volumetric data set. If all these volumetric grid points have the same material property, it is obvious which mass should be assigned to the simulation grid point; however, if the volumetric grid points have different material properties, it is more difficult to determine a mass value that produces an accurate simulation. For example, consider a simulation grid point that represents eight volumetric data points, four of which are a rigid material and four of which are a soft, flexible material. If a large mass is assigned to the simulation grid point, neither the rigid volumetric data points nor the flexible data points will move, whereas if a small mass is assigned, both will move. Clearly, neither case is desirable. This is a disadvantage of using a lower resolution simulation grid. The simplest solution is to increase the resolution of the simulation grid, but this increases the amount of computation. In surgical simulations it is much more disconcerting to see the bone move than to not see the soft tissue move. Because of this, it is generally better to use higher mass values for areas that include both bone and soft tissue.
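A C sketch of the automatic assignment follows; the per-voxel segmentation labels, the mass table, and the choice of taking the heaviest surrounding material (reflecting the preference just described) are assumptions of the example.

/* Assign a mass to a simulation grid point from the segmentation labels
 * of the surrounding voxels.  Taking the maximum favors rigid material,
 * since it is worse to see bone move than to see soft tissue stay still. */
double assignMass(const int *labels, int numVoxels,
                  const double massForLabel[])
{
    double mass = 0.0;
    for (int i = 0; i < numVoxels; ++i) {
        double m = massForLabel[labels[i]];
        if (m > mass)
            mass = m;
    }
    return mass;
}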

The simulation grid vertices are connected by springs. In a mass spring system, forces applied to one mass point are propagated to neighboring mass points by the connecting springs. To reduce the computation, the fewest number of springs necessary should be used. Unfortunately, just creating springs for the edges of a regular grid (i.e., 12 springs for a cube) creates problems in the simulation. These 12 springs do not create enough stability to maintain the basic shape of a cube when forces are applied. To alleviate this problem, all the vertices of the cube are completely connected, resulting in 28 ("8 choose 2" in combinatorial terms) springs connecting the eight vertices of the cube. The completely connected cube is much more stable when forces are applied; however, the amount of computation increases linearly with the number of springs, since each spring must be evaluated to calculate the force it applies to the two mass points it connects.

4.3 Simulation Algorithm

The simulation method is a basic Euler integration method for solving differential equations, with a few modifications to make it faster and more numerically stable. The pseudo code for a basic Euler method for a system of mass points connected by springs is listed in Figure 4.1. The code is straightforward and the amount of computation is linearly related to the number of particles and springs.

The main problem with Euler’s method is that it is not numerically stable unless small time steps are used. The mathematical reasoning for this can be found in any numerical analysis textbook (e.g., [8]). A good introduction can also be found in the physically-based modeling course notes from SIGGRAPH’94 [100]. Using small time steps requires more computation to simulate the same amount of time. When large time steps are used, the error at each time step can increase significantly.

More sophisticated solvers such as Runge-Kutta [8] can be used to solve the differential equations. Numerical integration techniques such as Runge-Kutta perform multiple evaluations of the function that is being integrated at each time step to obtain a more accurate update to the current value.

/* for each mass point, initialize the calculated force to the value of any
   external forces applied to the point (0, if no external forces) */
for each mass point p
    p.calculatedForce = p.externalForce

/* for each spring, calculate the force it exerts on the two points it connects */
for each spring s
    p1 = point 1 of s; p2 = point 2 of s
    length = distance between p1 and p2
    difference = length - s.restingLength
    force = difference * s.springStrength
    p1.calculatedForce += force
    p2.calculatedForce -= force

/* for each particle, update velocity and position based on applied forces */
for each mass point p
    acceleration = p.calculatedForce / p.mass
    p.velocity += timeStep * acceleration
    p.position += timeStep * p.velocity + 0.5 * acceleration * timeStep²

Figure 4.1 Pseudo code for basic Euler method for a mass spring system
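For completeness, a runnable C version of the step in Figure 4.1 follows. The Vec3, MassPoint, and Spring types are assumptions of the sketch, and, unlike the scalar pseudo code above, the spring force is applied along the direction between the two points it connects.

#include <math.h>

typedef struct { double x, y, z; } Vec3;
typedef struct {
    Vec3 position, velocity, calculatedForce, externalForce;
    double mass;
} MassPoint;
typedef struct {
    MassPoint *p1, *p2;
    double restingLength, springStrength;
} Spring;

void eulerStep(MassPoint *pts, int np, Spring *sp, int ns, double dt)
{
    /* initialize each point's force to any external force */
    for (int i = 0; i < np; ++i)
        pts[i].calculatedForce = pts[i].externalForce;

    /* accumulate spring forces along the p1 -> p2 direction */
    for (int i = 0; i < ns; ++i) {
        Vec3 d = { sp[i].p2->position.x - sp[i].p1->position.x,
                   sp[i].p2->position.y - sp[i].p1->position.y,
                   sp[i].p2->position.z - sp[i].p1->position.z };
        double len = sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
        if (len < 1e-12) continue;               /* coincident points */
        double f = (len - sp[i].restingLength) * sp[i].springStrength / len;
        sp[i].p1->calculatedForce.x += f * d.x;
        sp[i].p1->calculatedForce.y += f * d.y;
        sp[i].p1->calculatedForce.z += f * d.z;
        sp[i].p2->calculatedForce.x -= f * d.x;
        sp[i].p2->calculatedForce.y -= f * d.y;
        sp[i].p2->calculatedForce.z -= f * d.z;
    }

    /* update velocity and position for each point */
    for (int i = 0; i < np; ++i) {
        double ax = pts[i].calculatedForce.x / pts[i].mass;
        double ay = pts[i].calculatedForce.y / pts[i].mass;
        double az = pts[i].calculatedForce.z / pts[i].mass;
        pts[i].velocity.x += dt * ax;
        pts[i].velocity.y += dt * ay;
        pts[i].velocity.z += dt * az;
        pts[i].position.x += dt * pts[i].velocity.x + 0.5 * ax * dt * dt;
        pts[i].position.y += dt * pts[i].velocity.y + 0.5 * ay * dt * dt;
        pts[i].position.z += dt * pts[i].velocity.z + 0.5 * az * dt * dt;
    }
}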

Because each time step performs multiple evaluations, the amount of computation per time step is increased; however, because the updated value is more accurate, a larger time step can be used. In general, Runge-Kutta methods result in less computation than Euler's method to achieve the same accuracy. The Runge-Kutta method is not used in the method presented here because it has problems when constraints are introduced to the system. This is discussed in more detail later.

The problem with Euler's method and Runge-Kutta methods is that numerical stability and accuracy are not guaranteed. Some differential equations are extremely unstable; these are usually referred to as "stiff" differential equations because mass spring systems with large spring constants (stiff springs) exhibit these problems. Even for more accurate methods such as Runge-Kutta, solving these equations can require extremely small time steps. Another approach (instead of always using small time steps) is to use an adaptive method that estimates the error, backs up the simulation to the previous time step, and uses a smaller time step when necessary.

The other difficulty with both Euler's and Runge-Kutta methods, and especially methods such as Runge-Kutta that require multiple evaluations per time step, is that it is difficult to handle constraints with these methods. An example of a constraint is requiring a particle to maintain a specified distance from another particle. This constraint could be met by placing a spring with a large spring constant between the two particles, but this would cause the stiffness problems discussed above. A better approach is to use the Lagrangian dynamics method, which uses an implicit integration method. Details of the Lagrangian dynamics method can be found in [100]. Lagrangian dynamics methods require the solution to a system of linear equations. At first, this might seem unreasonable, since a general system of n equations requires O(n³) time to solve; however, if the mass spring system is arranged in a regular manner, the matrix of equations has a regular structure that allows a faster method to be used. The Lagrangian method is not used in this dissertation because the other modification, of only performing the simulation calculations for a subset of the particles and springs, would require changing the structure of the matrices during the simulation. Also, since the subset of particles and springs included in the calculation may not be regularly arranged, it may not be possible to use the faster linear system solution technique.

The Runge-Kutta method and other methods that perform multiple evaluations per time step cannot easily handle constraints. These methods assume the differential equation is continuous. This assumption is used when the method determines the update for the time step based on the multiple evaluations performed at fractional values of the time step. A constraint occurring between the multiple evaluations introduces a discontinuity and results in an incorrect update. It is simpler and faster to handle constraints with Euler's method, since only one evaluation is used per time step. Euler's method also assumes a continuous function; however, the discontinuity from constraints is introduced between steps. Applying a constraint between time steps is equivalent to restarting Euler's method with a new initial boundary condition, since Euler's method does not depend on updates from previous time steps.

The main problem with using a straightforward Euler or Runge-Kutta method for volumetric deformation using the setup described in Section 4.2 is that the structure of the grid can be lost when one point crosses through the boundary of a neighboring cell due to the numerical stability problems. This is illustrated for 2D in Figure 4.2. This is caused by a large force being applied to the particle marked 'a'. If the force is large enough so that the point's position changes a large amount in one time step, then the other springs do not have a chance to react to prevent the instability. In 2D, two intersecting lines indicate this problem. In 3D the problem can occur when one point passes through the face of a cell and is much more computationally expensive to detect. This is because once the points start moving, the faces of the simulation grid cells are no longer planar. Once the grid loses its structure, it generally does not return to its original structure because some of the springs are now pulling in the wrong direction. For example, in Figure 4.2, one spring has changed its orientation. The spring which the small arrow points to connects point a to a point above it in the left image. In the right image, this spring now connects point a to a point below it. This causes the spring to pull in the wrong direction.

Figure 4.2 Large force applied to point a causes the grid to lose its shape and stability

If a small enough time step is used, the problem does not occur; however, it is difficult to predict the size of the time step required. To ensure that the situation does not occur, it would be necessary to use an extremely small time step. The approach taken in this dissertation is to limit the movement of each point so that these instability problems do not occur, regardless of the time step. This can reduce the accuracy of the simulation if the movement of the points needs to be restricted too much, but in practice, with a reasonable time step, it produces good results.

Kass [100] discusses the two main problems with numerical simulations: accuracy and stability. Accuracy refers to the errors that result from making a numerical approximation to a continuous differential equation. Stability refers to whether or not the simulation method converges to a solution. As Kass points out, for physically-based computer animation, stability is generally much more important than accuracy. Small errors in the solutions are generally not visible in the graphical representation; however, if the simulation does not converge, but instead produces wild, unpredictable results, these are very noticeable. The approach taken in this dissertation is to ensure stability at the expense of some accuracy.

In order to ensure stability, the simulation must prevent situations such as Figure 4.2 and require that the grid maintain its basic structure. Conceptually, the idea is to determine a region for each point in which it can move without causing any edge crossings. The simulation then restricts the movement of each point to that region. For example, in Figure 4.3, if each point remains within the oval shaped region during the next time step, no edge intersections can occur and the system remains numerically stable.

Figure 4.3 Safe regions in which each point can move

A maximal safe region for each point cannot be defined because the movement of other points also affects whether or not intersections occur. If every point moves the same amount in the same direction, then the points can move any distance without causing any intersections. The regions defined in Figure 4.3 are worst-case scenarios that assume neighboring points are moving towards each other, rather than in the same direction. To achieve better results than using a small time step with a Runge-Kutta method, the calculation of the safe regions must be fast and also not too conservative, so that in most cases the calculated movement is within the safe region. This limitation affects the types of deformations that can be reasonably simulated and is discussed in Section 4.6.

The stability and intersection problems occur when one point moves past one of its neighbors. A quick way to determine a region that does not cause any stability problems is to limit the movement of each point to less than 50% of the minimum distance to its neighbors. If two points are moving towards each other, but each point moves less than 50% of the distance between them, the points do not move past each other. Because of numerical round-off errors, a value of 40% produces better results. Numerical errors also cause problems when the distance between the two points is very small; in this case, there is not enough precision to accurately calculate the 40% maximum movement. To solve this problem, the points are also required to maintain a minimum epsilon distance between them.

The calculation of the percentage of the distance to each neighbor can be performed without much additional cost because the distance between the neighbors is already calculated to determine the force the spring should apply. When the new velocity is used to update the point, the calculated positional update is compared to the maximum allowed distance and limited to this value if necessary. These calculations do not significantly slow down the performance of the algorithm and create a much more stable system.
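A small C++ sketch of this clamping step follows. It assumes maxMove has already been set to 40% of the shortest neighboring distance (floored at the minimum epsilon separation) during the spring traversal; the three-element array layout and the name clampUpdate are illustrative, not the dissertation’s code.

    #include <cmath>

    // Clamp a positional update so its length never exceeds maxMove.
    // maxMove is assumed to be 0.4f * the shortest neighbor distance,
    // with that distance itself floored at a small epsilon.
    void clampUpdate(float update[3], float maxMove) {
        float len = std::sqrt(update[0] * update[0] +
                              update[1] * update[1] +
                              update[2] * update[2]);
        if (len > maxMove && len > 0.0f) {
            float scale = maxMove / len;   // shrink, preserving direction
            update[0] *= scale;
            update[1] *= scale;
            update[2] *= scale;
        }
    }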

The other modification to the basic Euler method is to perform the simulation calculations for only a subset of the mass points and springs. Typically, the deformations used in surgical applications are local rather than global. For example, common deformations in a surgical simulation are produced by a virtual probe interacting with a small portion of the anatomy, or by making a small incision. These interactions do not deform the entire anatomy, but rather a small localized area. The other portions of the anatomy do not change, so there is no need to perform simulation calculations on these areas. This significantly reduces the amount of computation required. Essentially, a list of active simulation grid points and springs is maintained and the simulation calculations are only performed for these points and springs.

Since not all the simulation points and springs are active, a mechanism must be implemented for introducing points and springs into the active list and removing them when they are no longer moving. Introducing the points is a relatively easy operation. The simulation grid is static until an external force is introduced to the system. The external force can be generated by events such as a probe pushing a point in a specified direction or an incision at a specified location. Ideally, the set of points and their attached springs that will move are introduced to the active list. It is difficult, if not impossible, to predetermine exactly which points will move because of the introduced external force. The approach implemented is to insert into the active list the set of simulation points, and their attached springs, that are in a neighborhood around the applied force. The size of this neighborhood should be chosen conservatively so that all points that will move are included.

Ideally, the points and springs are removed as soon as they stop moving. Determining this is difficult, if not impossible, because of the numerical simulation: the points typically do not completely stop, but instead move such small amounts that the motion is not visually noticeable. One approach is to remove the points and their attached springs once the points are moving at most some epsilon value each time step. This method has several drawbacks. It requires extra calculation to determine when a point stops moving. Also, some points on the fringe of the neighborhood may not move at first, but eventually will once the forces propagate outward. If the epsilon approach is used, these points would be removed from the active list and would never move unless points are added dynamically. One possible approach is to add the neighbors of the currently active points to the active list, since they will be affected at the next time step. This requires additional data structures and checking to determine which neighbors are active and which are not. Because this slows the simulation down, it is not currently implemented.

The approach implemented is to specify, when points and springs are introduced to the active list, the number of time steps they should remain active. The lifetime should be chosen conservatively so that points are not removed while the positional update of the point from one time step to the next is still visually noticeable. Because the simulation uses numerical calculations, the points usually do not reach a velocity of zero; instead, they oscillate back and forth by extremely small amounts. This oscillation is usually not visually noticeable. If another external force is introduced nearby, the lifetime of currently active points and springs can be incremented accordingly. Pseudo code for the modified algorithm is listed in Figure 4.4.

The actual implementation is slightly different. In the pseudo code, the lifetime of the points and springs is tracked separately. This can lead to springs being active when the two points they connect are not, or points being active when the springs attached to them are not. To alleviate this problem, the active list stores cells of the simulation grid rather than individual points and springs, so all the points and springs for a cell are introduced to and removed from the active list together. Because points are shared by neighboring cells, some particles are active without all their springs being active, but only on the edge of the active area. This does not cause any problems, since cells are introduced to the active list in a conservative manner so that there is not much movement on the edge of the active region. Adding cells to the active list is not included in the pseudo code because cells are added to the active list when external forces are introduced into the system, rather than as a reaction to the simulation.
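The cell-based bookkeeping might look like the following C++ sketch; the Cell layout and the function activateCell are assumptions introduced here for illustration, not the dissertation’s data structures.

    #include <algorithm>
    #include <vector>

    // A cell groups the indices of its mass points and springs so they are
    // activated and retired together (illustrative layout).
    struct Cell {
        int lifetime = 0;              // remaining active time steps
        std::vector<int> points;
        std::vector<int> springs;
    };

    void activateCell(Cell& c, int lifetime, std::vector<Cell*>& activeList) {
        if (c.lifetime <= 0)
            activeList.push_back(&c);  // newly activated by an external force
        // A nearby force on an already-active cell extends its lifetime
        // rather than inserting the cell into the list a second time.
        c.lifetime = std::max(c.lifetime, lifetime);
    }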

4.4 Incisions

The simulation method described in Section 4.3 supports continuous deformations such as stretching and squashing. For some applications (e.g., surgery simulation), discontinuous deformations such as incisions are also necessary.

/* for each active mass point, initialize the calculated force to the value of
   any external forces applied to the point (0, if no external forces) */
for each active mass point p
    p.calculatedForce = p.externalForce
    p.maxMove = infinity

/* for each active spring, calculate the force it exerts on the two points it
   connects */
for each active spring s
    p1 = point 1 for s; p2 = point 2 for s
    length = distance between p1 and p2
    maxMove = 0.4 * length
    difference = length - s.restingLength
    force = difference * s.springStrength
    p1.calculatedForce += force
    p2.calculatedForce -= force
    if (maxMove < p1.maxMove) p1.maxMove = maxMove
    if (maxMove < p2.maxMove) p2.maxMove = maxMove
    s.lifetime--
    if (s.lifetime == 0) remove s from active spring list

/* for each particle, update velocity and position based on applied forces */
for each active mass point p
    acceleration = p.calculatedForce / p.mass
    p.velocity += timeStep * acceleration
    update = timeStep * p.velocity + 0.5 * acceleration * timeStep^2
    updateLength = length of update
    if (updateLength > p.maxMove)
        update *= p.maxMove / updateLength
    p.position += update
    p.lifetime--
    if (p.lifetime == 0) remove p from active point list

Figure 4.4 Pseudo code for simulation algorithm

Simulating incisions requires changing the topology of the data. For simulation methods that use a polygonal representation for the model, an algorithm that introduces a cut into an arbitrary polygonal mesh is required; however, the more difficult problem with polygonal models is that there is no data below the surface. When a cut is made in a surface model, there is no data visible through the cut unless additional data below the surface is available. The advantage of volumetric data is that the data below the cut is automatically included in the rendering and simulation. Also, when using the simulation grid setup described in Section 4.2, introducing an incision requires splitting a hexahedron, which is much simpler than splitting an arbitrary polygonal mesh. Sections 4.4.1 and 4.4.2 describe two simple methods for introducing cuts into the hexahedral simulation grid.

4.4.1 Incision Method 1

The simplest method for creating an incision in the hexahedral grid is to split the grid at a vertex. This requires adding one vertex to the simulation grid and creating additional data structures to keep track of the edge connections to the original and replicated vertex. Figure 4.5 shows how the grid is split at a vertex in 2D. One vertex and two edges are added. In the implementation, the edges are not stored explicitly but are implied by the vertex structure, so additional data structures for the edges are not necessary; however, this does require additional computation during the simulation: forces must be calculated for the two springs implied by the two new edges.

Figure 4.5 Incision at a vertex in 2D

The extension to 3D is straightforward. In 3D, vertices on an outer edge of the grid are only shared by two hexahedra, except for the corner vertices. Vertices on a face are shared by four hexahedra and interior vertices are shared by eight hexahedra. Just as in 2D, one new vertex is added, but in 3D, eight additional spring calculations are necessary when a hexahedral cell is cut at a vertex.
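A C++ sketch of the vertex split follows; the array-of-points layout, the Spring type, and the function splitAtVertex are illustrative assumptions, since the dissertation stores the new edges implicitly in the vertex structure rather than as explicit records.

    #include <vector>

    struct Point  { float x, y, z; };
    struct Spring { int p1, p2; float restLength, strength; };

    // Replicate vertex v and re-point the springs on one side of the cut to
    // the copy, so the two copies can separate during the simulation.
    int splitAtVertex(std::vector<Point>& points,
                      std::vector<Spring>& springs,
                      int v, const std::vector<int>& springsToReattach) {
        points.push_back(points[v]);           // the replicated vertex
        int copy = static_cast<int>(points.size()) - 1;
        for (int s : springsToReattach) {
            if (springs[s].p1 == v)      springs[s].p1 = copy;
            else if (springs[s].p2 == v) springs[s].p2 = copy;
        }
        return copy;
    }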

The drawback to this method is the obvious restriction that cuts occur at a vertex. When the simulation grid has a lower resolution than the volumetric data grid, this limits the locations of incisions. The next section presents a more general, but also more complicated and expensive, method that can be used for more arbitrarily specified incisions.

4.4.2 Incision Method 2

The second incision method allows an incision to be made at an arbitrary location on a hexahedral cell’s edge. This requires splitting hexahedral cells into multiple cells; a 2D illustration is shown in Figure 4.6. It also requires specifying end points for the incision (points a and b in Figure 4.6), which can lie on various edges of the cell. This method also requires creating new springs for the cells that are cut. The new springs must be added so that the cut cells are stable and maintain their shape. For the incisions made in Figure 4.6, a spring could be created for each edge along with the bold edges shown in Figure 4.7. Depending on the shape of the cells after the cut, it may be difficult to connect the vertices of the cut cells with springs so that the shape is stable.

Figure 4.6 Incisions at an arbitrary location on an edge in 2D

Figure 4.7 Springs added to stabilize cut cell

Extending this method to 3D is more complex than extending the first incision method described in Section 4.4.1 to 3D. There are five topologically unique cases for splitting a hexahedral cell into two cells with a planar slice (the five plane-convex hexahedron intersections shown in Figure 3.5). Adding springs on either side of the incision to ensure the cells are stable may be difficult. The simplest solution is to completely connect the vertices of each cut cell, which may require a large number of additional springs. For the case where six edges of the hexahedron are intersected by the plane, 90 springs are required to completely connect the two halves of the cut cell: the cut creates six extra points, resulting in ten vertices for each of the two cells, and in combinatorial terms 10 choose 2 is 45, so 45 springs are required for each half.

Currently, only incisions which split the hexahedron into two hexahedra (the second case in the top row of Figure 3.5) are implemented in the system. Figure 4.8 shows how the cut is made in a hexahedral cell. The cell below it is a mirror image so that the edges of the cut match up; similarly, the edges of the other neighboring cells match up. Points a, b, and c in Figure 4.8 are specified as a percentage distance between the two end points of the edge they are on. The implementation supports incisions of arbitrary cell depths and arbitrary cell lengths by also cutting the edges of neighboring cells. For example, in Figure 4.8, the edge with point a could also be split to create a deeper incision.

Figure 4.8 3D example of second incision method

Creating connecting springs for this case is relatively simple. The incision creates two hexahedra, and each of the two hexahedra is completely connected using 28 springs (8 vertices, so 8 choose 2 = 28). The resting lengths for the springs are calculated based on the percentage distance of points a, b and c along their respective edges. The incision is made to spread apart during the simulation either by applying outward forces to points d and e or by shortening the resting length of some of the springs connected to points d and e. Example images from animations of cuts are shown in Chapter 5.

4.5 Simulation Issues Related to the Renderer

In order to visualize the deformation at interactive rates, the simulation must be combined with a fast renderer. This section describes how the simulation algorithm is integrated with the rendering algorithm described in Chapter 3. The volumetric data is stored as a 3D texture in the texture memory of the computer. A simulation grid with a lower resolution than the volumetric grid is used, and the mass and spring values are set according to the data values in the subvolume, as discussed in Section 4.2. Texture coordinates for each simulation grid vertex are set according to the volumetric grid points that are inside the simulation grid cell. The simulation grid cells are deformed by the simulation and are sliced by the renderer. The polygon or polygons generated by slicing through a simulation grid cell are rendered as textured polygons.
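Roughly, the texture hookup could be expressed as in the hedged sketch below, which assumes an OpenGL context with 3D-texture support (the EXT_texture3D extension on SGI hardware of the period, standard in OpenGL 1.2); the function names uploadVolume and drawSlicePolygon are introduced here for illustration and are not from the dissertation.

    #include <GL/gl.h>

    // Upload the volumetric data once as a 3D texture.
    void uploadVolume(const unsigned char* voxels, int w, int h, int d) {
        glEnable(GL_TEXTURE_3D);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE, w, h, d, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, voxels);
    }

    // Draw one slice polygon; each vertex carries the 3D texture coordinate
    // interpolated from the simulation grid cell it was sliced from.
    void drawSlicePolygon(const float (*xyz)[3], const float (*str)[3], int n) {
        glBegin(GL_POLYGON);
        for (int i = 0; i < n; ++i) {
            glTexCoord3fv(str[i]);
            glVertex3fv(xyz[i]);
        }
        glEnd();
    }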

One additional extension is implemented for the renderer to support fast volume rendering. Since most of the deformations are local, only a portion of the image actually changes from one frame to the next. The algorithm described in Chapter 3 is extended to support incremental image update. The idea is to determine a rectangular area of the image that contains the portions of the image that will change in the next frame. Only this portion of the image needs to be updated, so many of the cells do not need to be rendered. The portion of the image that may change can be determined easily during the simulation phase, when all the active cells are traversed.

While traversing the active cells, the minimum and maximum x and y values of the transformed active cells are determined. This provides a rectangular area in which all the changes to the image will take place. During the rendering phase, only cells whose x and y extents intersect this rectangle are rendered. The rendered polygons are clipped to the rectangular area using four clipping planes, which are supported in OpenGL™ (and often implemented in hardware). It is necessary to clip to the rectangle so the rendered polygons do not affect pixels outside the rectangle, which are already correctly rendered from the previous frame. This eliminates the rendering of many of the cells. All the cells still need to be processed, but many can be quickly eliminated by comparing their x and y extents to the rectangular area; those not in the rectangular area do not need to be sliced and rendered. As the timing results in Chapter 5 show, the slicing and rendering stages are the two most expensive, so eliminating many of the cells that need to be sliced reduces the rendering time significantly.
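A sketch of the clipping setup in OpenGL is shown below; it assumes the dirty rectangle (xMin..xMax, yMin..yMax) is expressed in the coordinate space that is current when glClipPlane is called, and the name clipToRect is illustrative.

    #include <GL/gl.h>

    // Restrict rendering to the dirty rectangle with four clipping planes.
    // A plane {a, b, c, d} keeps the half-space a*x + b*y + c*z + d >= 0.
    void clipToRect(double xMin, double xMax, double yMin, double yMax) {
        const GLdouble left[4]   = { 1.0,  0.0, 0.0, -xMin };  // keep x >= xMin
        const GLdouble right[4]  = {-1.0,  0.0, 0.0,  xMax };  // keep x <= xMax
        const GLdouble bottom[4] = { 0.0,  1.0, 0.0, -yMin };  // keep y >= yMin
        const GLdouble top[4]    = { 0.0, -1.0, 0.0,  yMax };  // keep y <= yMax
        glClipPlane(GL_CLIP_PLANE0, left);   glEnable(GL_CLIP_PLANE0);
        glClipPlane(GL_CLIP_PLANE1, right);  glEnable(GL_CLIP_PLANE1);
        glClipPlane(GL_CLIP_PLANE2, bottom); glEnable(GL_CLIP_PLANE2);
        glClipPlane(GL_CLIP_PLANE3, top);    glEnable(GL_CLIP_PLANE3);
    }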

4.6 Advantages and Disadvantages of the Method

The major advantages of this algorithm are that it produces relatively fast and stable simulations, supports true volumetric deformations, and can be integrated with a fast slice-based renderer. This allows the deformations to be visualized at interactive rates on high-end graphics workstations. The simulation method supports both continuous deformations, such as stretching and squashing, and discontinuous deformations, such as incisions, allowing it to be used in a wide variety of applications from modeling to surgery simulation.

The disadvantage of the algorithm is that it is not as accurate as sophisticated finite element methods. These finite element methods generally use more robust physical models than simple mass spring models, but of course also require more computation.

Because a mass spring model is used, there are some limitations on the type of deformations that can be simulated. The method is ideal for local deformations that occur relatively slowly. Most of the deformations required for surgical simulation fall into this category. Examples are using a probe to press on tissue or making an incision. In general, deformations that are a result of extremely large forces cannot be handled well by this method.

The simulation method cannot accurately handle deformations that are extremely fast and cause major changes. An example of this is a bullet flying through tissue. When a bullet strikes tissue at a high velocity, an extremely large force is applied at the contact point, generating a large positional update there. Without the movement-limitation modifications, the simulation grid would lose its shape; with them, the positional updates to the points would be limited so much that the results would not be accurate. Extremely small time steps would be required to simulate deformations of this type.

CHAPTER 5

RESULTS

This chapter presents results of the irregular grid renderer described in Chapter 3 and the deformation method described in Chapter 4. The rendering method is a general purpose renderer for irregular volumetric grids; it can be used to render both static and dynamically changing grids. Although this dissertation focuses on methods for surgery simulation, meaningful comparisons cannot be made between the rendering algorithm presented here and the rendering algorithms in other surgery simulation systems. The main reason such comparisons are not meaningful is that other surgery simulation methods render surface data rather than volumetric data. The other problem is that most of the papers in the literature do not provide separate timings for the renderer and simulation, or enough data about the models, to compare methods. The volumetric rendering literature provides more detailed information about timings and data, so the new rendering method is first compared to these rendering methods using a common irregular grid from the scientific visualization field. These results and comparisons are provided in Section 5.1. In Section 5.2, timings and images of anatomical data sets undergoing deformations appropriate for surgery simulation are presented.

5.1 Rendering Scientific Data Sets

The volume renderer presented in Chapter 3 is a general purpose renderer for irregular grids. The implementation is written in the C++ programming language and is a straightforward translation of the pseudo code in Figure 3.3. The only optimizations to the basic code are the use of free lists for inserting and deleting the edges and polyhedra from the active lists. Free lists reduce the number of system calls to allocate memory: allocating many small chunks of memory is more expensive, both in terms of CPU time and total memory required, than allocating one large chunk. The use of free lists allows items to be inserted into and deleted from the active edge and polyhedron lists without making a system call for every insertion and deletion. For a detailed description of free lists, see [61].
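A minimal free list might look like the following C++ sketch, in the spirit of [61]; the Node type is an illustrative placeholder for the active-list records, not the dissertation’s actual classes.

    // Minimal free-list sketch; Node stands in for an active edge or
    // polyhedron record.
    struct Node {
        Node* next;
        // ... payload fields for the active-list entry ...
    };

    class FreeList {
        Node* head_ = nullptr;
    public:
        // Reuse a previously released node when possible; only call the
        // allocator when the free list is empty.
        Node* acquire() {
            if (head_ == nullptr) return new Node();
            Node* n = head_;
            head_ = head_->next;
            return n;
        }
        // Releasing pushes the node onto the free list instead of freeing it.
        void release(Node* n) {
            n->next = head_;
            head_ = n;
        }
    };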

The timing results are for an SGI Infinite Reality Onyx. The machine has 256 megabytes of memory and four 194 MHz R10000 processors; however, the code is not parallelized, so it runs on one processor. The timings are subdivided into the four stages of the algorithm listed in Chapter 3: transforming the vertices, placing the edges and polyhedra in buckets, slicing, and rendering. The transformation stage is performed in software, rather than hardware, because it produces slightly better results. Because the slicing and rendering are coupled together, it is difficult to obtain a separate timing for each of these stages. In order to time only the rendering stage, a separate call to the system clock is necessary for each polygon that is drawn, which introduces a significant amount of overhead into the timings. To remedy this situation, the program is run twice with different timing code. In the first run, times for the transform stage, the bucket stage, and the combined total for the slicing and rendering stages are obtained. In the second run, just the rendering stage is timed by placing a call to the system clock before and after each polygon is drawn and summing these times. The rendering stage time is then subtracted from the combined total of the slice and rendering stages to obtain a separate time for each of the two stages. This is not completely accurate, since the timings are obtained using slightly different versions of the code and two different executions of the program, but it provides a reasonable estimate of the times for each stage. The total time reported for the algorithm is accurate, since it is obtained from the first run without all the calls to the system clock for each polygon.

The performance of the implementation is evaluated using the well-known blunt fin data set [35]. The data set has 40,960 vertices and is organized as a 40 x 32 x 32 hexahedral grid. The size of the images is 640 x 480 pixels, and the data set fills most of the image in the horizontal direction. Table 5.1 summarizes the timings for the various stages. Data for two different views are provided: a straight view of the data set and a 45 degree angle view. The number of polyhedra intersected by the slices is dependent on the viewing angle. As the table shows, the majority of the time is spent in the rendering stage; because of this, as the performance of polygon rendering hardware continues to improve, the performance of this algorithm will improve significantly. For straight views, 50 slices produce a good quality image; however, for the angled view, approximately 100 slices are necessary to prevent artifacts. The four images corresponding to the timings in Table 5.1 are shown in Figure 5.1.

View       # Slices   Transform Time   Bucket Time   Slice Time   Render Time   Total Time   # Polygons
straight       50          0.0375         0.1899        0.1416       0.4347        0.6141        59,241
angle          50          0.0371         0.2122        0.2285       0.3274        0.5933        45,371
straight      100          0.0377         0.1912        0.2671       0.6601        0.9651       120,900
angle         100          0.0371         0.2064        0.2106       0.6530        0.9010        90,784

Table 5.1 Timings in seconds for 640x480 images of blunt fin

Wilhelms et al. [95] present an algorithm that is capable of handling hierarchical irregular grids. They also present comparison timings against some of their earlier projection algorithms reviewed in Chapter 2. Their new algorithm is actually slower for non-hierarchical grids, but it is more general. The best timing result they report for the blunt fin data set is 16.4 seconds [95], using the incoherent projection algorithm presented in [91]. This result is for a 500 x 500 pixel image rendered using an SGI Onyx with Reality Engine II graphics and one 150 MHz processor. This machine is slightly slower than the machine used for the results in Table 5.1, but it is the closest result for comparison, in terms of data set and architecture, found in the literature. It is difficult to compare the performance of other algorithms exactly because of the different hardware used, the different data sets used for reporting results, and the limited timing data that is published. The timing results produced by the rendering algorithm presented here are faster than any previous work of which the author is aware: using similar hardware with a slightly faster CPU than Wilhelms et al. [95], the algorithm described in Chapter 3 renders the same data set in under a second, versus 16.4 seconds for the best time by Wilhelms et al.

Figure 5.1 Blunt fin images (two views rendered with 50 slices and two with 100 slices)

An earlier version of the rendering algorithm presented in this dissertation is described in [106]. That version only renders tetrahedral grids and does not use table lookups to form the polygons. Slicing through a tetrahedron generates a triangle or a quadrilateral. The triangles do not require an ordering to form the polygons; for quadrilaterals, a simple convex hull algorithm for four points is used to generate the polygons. Rendering hexahedral grids is achieved by subdividing each hexahedron into five tetrahedra. This earlier version of the algorithm renders the blunt fin using 50 slices in approximately nine seconds on a 100 MHz SGI Crimson Reality Engine. When the algorithm is extended to handle hexahedra directly and to use tables to form the polygons, the rendering time drops to 2.7 seconds. The faster processor and more advanced graphics hardware of the 194 MHz R10000 SGI Infinite Reality reduce this time to less than a second, as shown in Table 5.1. Timings for the extensions discussed in Section 3.7 are provided in [106].

Giertsen’s slicing method [28] is not able to take advantage of rendering hardware and is thus much slower than the algorithm presented in this dissertation. Giertsen presents a time of 38 seconds for a 512 x 512 image of a grid composed of 3,681 cells, which is less than a tenth the size of the blunt fin grid. Giertsen’s timings are from an IBM RISC System/6000 Model 530 in 1992. The results would be better on more current equipment, but would most likely still be significantly slower than the method presented here.

Koyamada and Ito [42] present results for a grid composed of 61,680 cells, which is a little larger than the blunt fin data set. It is not completely clear from their results, but it appears their timings include the time to extract the polygonal slices but not the time to render them. They report a time of approximately ten seconds to generate 50 slices using an IBM 3990, which is a relatively fast machine. Their algorithm is slower than the method presented here because they use a less efficient method to create the slices.

5.2 Deformation of Volumetric Data

The deformation method described in Chapter 4 is designed for surgical simulation, but it can also be used for a variety of applications. The renderer and deformation method are integrated together to provide a framework suitable for visualizing the deformations as they are calculated. This section presents timings and images from animations of deforming anatomical data.

McDonald et al. [55] are interested in simulating the tissue deformations that occur as a baby moves down the birth canal. Some preliminary work has been done to simulate deformations that occur as the baby grows in the uterus. The data set used is an MRI of a female pelvic region. Only the right side of the anatomical data is used, to better show the interior of the data set. The size of the MRI volume for the right side of the body is 129 x 200 x 79. A 10 x 20 x 16 simulation grid is used. For the local deformations, only a 6 x 6 x 4 area of the simulation grid is required; these 144 cells are placed in the simulation’s active list. The MRI data is stored as a 3D texture with the appropriate texture coordinates assigned to each vertex of the 10 x 20 x 16 simulation grid. The image is rendered by slicing through the simulation grid as described in Chapter 4. Except for Figure 5.2, 125 slices were used to produce each image. All images are rendered at a resolution of 640 x 480 pixels.

When the baby’s head grows, the tissue is pressed outwards. For these deformations it is difficult to discern a difference between two static images of the entire volume; animating between the two images clearly shows the deformation. Because it is difficult to see the deformation in still images, the images in Figure 5.2 show a single slice through the data set from two frames of the animation. The purpose of this example is to simulate the squashing of tissues caused by the growth of the baby’s head.

Figure 5.2 Slice through original and deformed images

Two images of the entire 3D data set are provided in Figure 5.3. These images show a deformation created by applying an inward force to the skin on the right side of the image. Even with this deformation on the exterior of the data set, it is still difficult to see the difference between the two static images. Again, the animation shows it clearly.

Figure 5.3 Deformation caused by force applied to skin on right side of image

A necessary feature for surgical simulation is the ability to make incisions. The simulation method presented in this dissertation supports two methods for making incisions. The first method is simpler, but only supports cuts at a simulation grid vertex. The second method is more general and supports arbitrary cuts, although currently only one of the intersection cases is implemented. The advantage of this method over existing methods is that the volumetric model supports continuous interiors. The same MRI data set and simulation grid described above are used to show an example of an incision. The incision shown in Figure 5.4 is made by cutting a total of six simulation grid cells.

For the simulations shown in Figure 5.3 and Figure 5.4, the simulation of the active cells (144 cells) requires 0.004 seconds per time step. Two time steps are executed between each rendered image.

Figure 5.4 Two images from simulation of an incision

The 640 x 480 images are rendered using 125 slices in approximately 0.135 seconds per frame. When the incremental image update described in Section 4.5 is used, the rendering time drops to 0.100 seconds per frame. Thus, the system can deliver almost ten frames per second for a reasonably sized data set.

Bro-Nielsen has published a number of papers about surgery simulation. He uses a finite element method for the simulation with a linear elastic model similar to that of springs. In his more recent papers [6][7], he advocates using volumetric models because they are necessary to simulate incisions. He has begun incorporating volumetric models for the simulation but removes the interior nodes using a method referred to as “condensation” before starting the simulation. This reduces the amount of computation required by the simulation. He claims the simulation produces the same results for the surface nodes as it would if the interior nodes were not removed. The problem is that if incisions are made, no calculations are performed on the interior since these nodes were removed. The rendered images are still produced using surface models.

Bro-Nielsen’s method requires a significant amount of work to set up the simulation [7]. For volumetric data, the data must be converted to a polygonal model: he first manually draws contours for the boundaries of skin and bone; next, a tetrahedral mesh is created connecting the vertices of the contours; finally, the condensation process is used to remove the interior vertices. Using an SGI Onyx with four R4400 processors, he reports simulation speeds of 20 frames per second for a system with 700 nodes and forces applied to three of the nodes. These timings do not appear to include rendering time. In addition to the significant manual setup required, the method only performs calculations on the surface of the model, so incisions cannot be simulated accurately. Even though he strongly advocates volumetric models, the method renders the mesh as polygons, limiting the ability to show the interior of the object. The example provided in [7] is applying a force to the skin of a leg. None of Bro-Nielsen’s papers provide techniques for making incisions.

Cotin et al. [15] present a method similar to Bro-Nielsen’s system using a linearly elastic finite element method. They also include simple haptic feedback using a force feedback device and virtual tool for probing tissue. They use a surface representation during the rendering and do not present methods for performing incisions. No timing results are provided for their system.

Kuhn et al. [43] briefly describe a system for endoscopic surgery simulation. Their system uses polygonal and/or NURBS (nonuniform rational B-spline) surfaces to represent the anatomy. They use an SGI Onyx with two 200 MHz MIPS R4400 processors and VTX graphics; the graphics hardware is used to render the polygonal and NURBS surfaces. A mass spring system is used to compute the deformations. Their system supports virtual reality tools, but it does not yet appear to include haptic feedback. They report a speed of 14 frames per second for interacting with a model of the gall bladder, but they do not provide details of the model (e.g., how many polygons it contains).

None of the existing systems described in the literature support all the features provided by the simulation and renderer described in this dissertation. Most do not support incisions, which are extremely important for surgery simulation. None of the systems that do support incisions use volumetric models, so they cannot provide an accurate representation of the interior of the object that is cut. Chapter 6 summarizes the advantages and disadvantages of this method along with possible areas for future work.

CHAPTER 6

CONCLUSIONS

This dissertation presents new techniques for the two underlying areas that are required for surgery simulation: volume rendering and volume deformation. A general purpose volume renderer capable of handling many different types of irregular grids is described. A simulation technique based on physically-based methods that allows the incorporation of varying material properties for the voxels is presented. The simulation method is used in conjunction with the renderer to provide interactive volumetric deformation. The techniques presented in this dissertation can be used in a number of applications including scientific visualization of irregular grids, volumetric modeling, and medical applications such as surgical simulation and planning.

6.1 Contributions

Chapter 3 presents a new method for volumetric rendering that overcomes one of the major problems existing rendering methods have with irregular grids. Object order methods require a depth sort of the cells to allow correct compositing of the image. Sorting n cells requires O(n log n) time, which is a time consuming operation. The more difficult problem is that not all irregular grids can be depth sorted, due to cyclic overlap of the cells. To avoid this problem, some of the cells must be split to break the depth cycle. Currently, only heuristic methods exist for efficiently splitting cells, and these may result in a large number of additional cells, increasing the rendering time and memory requirements.

The new algorithm presented in Chapter 3 provides a correct depth order without an explicit sort by slicing the grid with a set of parallel planes. A data structure that ensures only the cells intersected by a slice are processed is used to enable efficient slicing. Each cell and edge is placed into a bucket corresponding to the first slice that intersects it. This requires only a few operations that are performed in constant time, resulting in O(n) operations to place all n cells in their buckets. The buckets are used to efficiently maintain a list of the edges and cells that the current slice intersects. The algorithm can take advantage of the polygon rendering hardware that is becoming more prevalent in today’s computers: the hardware is used to render the polygonal meshes created by slicing the irregular grid with planes. By avoiding the explicit sort and using polygon rendering hardware, the new method renders irregular grids faster than existing methods. The algorithm does require polygon rendering hardware to achieve its fast rendering times; for machines without this hardware, other algorithms may produce better results. The other disadvantage of the algorithm is that view dependent shading cannot be supported without impacting performance.
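The bucket placement can be sketched in C++ as follows, assuming the slicing planes are z = zStart + i * spacing in view space and each cell’s minimum transformed z is known; the names firstSlice and bucketCells are illustrative, not from the dissertation.

    #include <cmath>
    #include <vector>

    // Index of the first slicing plane at or beyond a cell's near z extent.
    int firstSlice(float zMin, float zStart, float spacing) {
        return static_cast<int>(std::ceil((zMin - zStart) / spacing));
    }

    // Place every cell into the bucket of the first slice that reaches it;
    // a constant amount of work per cell, hence O(n) for n cells.
    void bucketCells(const std::vector<float>& cellZMin,
                     float zStart, float spacing,
                     std::vector<std::vector<int>>& buckets) {
        for (int c = 0; c < static_cast<int>(cellZMin.size()); ++c) {
            int b = firstSlice(cellZMin[c], zStart, spacing);
            if (b >= 0 && b < static_cast<int>(buckets.size()))
                buckets[b].push_back(c);
        }
    }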

Chapter 4 presents a physically-based simulation method for volumetric deformation. Its main advantage over existing methods is that it has all of the following attributes: it is fast, numerically stable, supports varying material properties, supports incisions, is easy to set up, and can be easily integrated with a fast volume renderer. Existing volumetric deformation systems lack one or more of these features. The simulation method is integrated with the volume renderer described in Chapter 3 and is demonstrated with deformations in medical applications. This method is ideal for the small, localized deformations required by surgery simulators; other simulation methods are more suitable for applications that require large or global deformations.

6.2 Future Work

The renderer and simulation show promise for applications such as surgical simulations; however, this is early work in the area and a number of enhancements are needed before the system is ready for real-world applications. The renderer is fairly complete. The main feature needed to improve it is support for view dependent shading such as specular highlights. With current rendering hardware, view dependent shading cannot be incorporated into the rendering algorithm without significant performance degradation. As rendering hardware continues to improve, it is likely that the hardware will support the operations required to support view dependent shading without significant loss of performance.

The simulation algorithm is not as advanced as the rendering algorithm. The main problem with the simulation algorithm occurs when the simulation grid has a lower resolution than the data grid and the material properties of the data within a simulation grid cell vary. The current solution is to increase the resolution of the simulation grid, but this decreases performance. In many situations, it is likely that there are large areas that are homogeneous and can be simulated accurately with a large simulation grid cell. The solution is to support hierarchical simulation grids or irregular grids, so that large cells can be used in homogeneous areas and multiple smaller cells can be used in areas where the material properties vary. This is not conceptually difficult, but it does require significant changes to the data structures of both the simulation and the renderer. One problem that may arise is that the stability of the simulation may be affected by the varying cell sizes.

In order to use the system for real applications such as surgery simulation and surgery planning, the method needs to be tested and refined. Currently, only the common method for evaluating computer graphics (“if it looks correct, it is correct”) has been applied. The current system looks realistic enough that the medical doctors who have seen it are excited about using it. The method needs to be compared with the actual deformation of real tissue to determine how realistic it is. Based on these tests, the model may need to be tuned to achieve accurate results. This may be as simple as adjusting mass and spring values or may require improvements in the simulation calculations.

For real applications, more sophisticated input methods than the mouse and keyboard are necessary for interacting with three dimensional data. Virtual reality input devices that support six degrees of freedom are necessary to interact more naturally with the volumetric data. For surgery simulation, devices that support haptic feedback are also necessary. Again, incorporating these capabilities into the system is not conceptually difficult, but would require significant additions to the code.

The methods in this dissertation provide techniques that can improve the speed and realism of existing methods for surgery simulations. As the processing power of computers continues to increase, interactive rates will be achievable with more accurate simulation methods. By combining the rendering and deformation techniques presented here along with haptic feedback methods, surgery simulators should become more realistic and usable for training new surgeons within the next five to ten years.

LIST OF REFERENCES

[1] Akeley K., “Reality Engine Graphics”, Computer Graphics (Proceedings of SIGGRAPH ’93), 1993, pp. 109-116.

[2] Avila R., Sobierajski L., Kaufman A., “Towards a Comprehensive Volume Visualization System”, Proceedings of Visualization '92, October 1992, pp. 13-20.

[3] Baker A., Pepper D., Finite Elements 1-2-3, McGraw-Hill, New York, 1991.

[4] Barr A., Fleischer K., “Global and Local Deformations of Solid Primitives”, Computer Graphics (Proceedings of SIGGRAPH ’84), Vol. 18, 1984, pp. 21-30.

[5] Burtnyk N., Wein M., “Computer Animation of Free Form Images”, Computer Graphics (Proceedings of SIGGRAPH ’75), Vol. 9, No. 1, pp. 79-80.

[6] Bro-Nielsen M., Cotin S., “Real-Time Volumetric Deformable Models for Surgery Simulation Using Finite Elements and Condensation”, Computer Graphics Forum (Eurographics ’96), 15(3):57-66.

[7] Bro-Nielsen, M. “Fast Finite Elements for Surgery Simulation”, Proceedings of Medicine Meets Virtual Reality V (MMVR-V’97), 1997.

[8] Burden R., Faires J., Numerical Analysis, PWS-Kent Publishing Company, 1989.

[9] Cabral B., Cam N., Foran J., “Accelerated Volume Rendering and Tomographic Reconstruction Using Texture Mapping Hardware”, Proceedings of 1994 Symposium on Volume Visualization, October 1994, pp. 91-98.

[10] Chadwick J., Haumann D., Parent R., “Layered Construction for Deformable Animated Characters”, Computer Graphics (Proceedings of SIGGRAPH ’89), Vol. 23, 1989, pp. 243-252.

[11] Cohen D., Shefer Z., “Proximity Clouds, An Acceleration Technique for 3D Grid Traversal”, The Visual Computer, Vol. 11, No. 1, November 1994, pp. 27-38.

[12] Coquillart S., “Extended Free-Form Deformation: A Sculpturing Tool for 3D Geometric Modeling”, Computer Graphics (Proceedings of SIGGRAPH ’90), Vol. 24, 1990, pp. 187-196.

[13] Coquillart S., Jancene P., “Animated Free-From Deformation: An Interactive Animation Technique”, Computer Graphics (Proceedings of SIGGRAPH ’91), Vol. 25, 1991, pp. 23-26.

[14] Cotin S., Delingette H., Bro-Nielsen M., Ayache N., Clement J., Tassetti V., “Geometric and Physical Representations for a Simulator of Hepatic Surgery”, Proceedings of Medicine Meets Virtual Reality IV (MMVR-IV’96), 1996, pp. 139-151.

[15] Cotin S., Delingette H., Ayache N., “Real Time Volumetric Deformable Models for Surgery Simulation”, Proceedings of the 4th International Conference on Visualization in Biomedical Computing (VBC ’96), Hamburg, Germany, Sept. 22- 25, 1996, pp. 535-540.

[16] Cover S., Ezquerra N., O’Brien J., Rowe R., Gadacz T., Palm E., “Interactively Deformable Models for Surgery Simulation”, IEEE Computer Graphics and Applications, 1993, 13(6):68-75.

[17] Crawfis R., Max N., “Texture Splats for 3D Scalar and Vector Field Visualization”, Proceedings of Visualization ’93, October 1993, pp. 261-266.

[18] Cross R., “Interactive Realism for Visualization Using Ray Tracing”, Proceedings of Visualization '95, November 1995, pp. 19-26.

[19] Cullip T., Neumann U., “Accelerating Volume Reconstruction with 3D Texture Hardware”, Tech Report TR93-027, Department of Computer Science, UNC at Chapel Hill, 1993.

[20] Danskin J., Hanrahan P., “Fast Algorithms for Volume Ray Tracing”, Proceedings of the 1992 Workshop on Volume Visualization, October 1992, pp. 91-98.

[21] Drebin R., Carpenter L., Hanrahan, P., “Volume Rendering”, Computer Graphics (Proceedings of SIGGRAPH ’88) Vol. 22, 1988, pp. 65-74.

[22] Feichtinger H., Gröchenig K., “Theory and Practice of Irregular Sampling”, in Benedetto J. and Frazier M., editors, Wavelets: Mathematics and Applications, CRC Press, 1993, pp. 305-363.

[23] Foley J., van Dam A., Feiner S., Hughes J., Computer Graphics: Principles and Practice, Second Edition, Addison-Wesley, 1990.

[24] Frieder G., Gordon D., Reynolds R., “Back-to-Front Display of Voxel-Based Objects”, IEEE Computer Graphics and Applications, January 1985, Vol. 5, No. 1, pp. 52-60.

[25] Frühauf T., “Raycasting of Nonregularly Structured Volume Data”, Eurographics ’94, Vol. 13, No. 3, 1994, pp. 293-303.

[26] Frühauf T., “Raycasting with Opaque Isosurfaces in Nonregularly Gridded CFD Data”, Visualization in Scientific Computing ‘95, Springer Verlag, 1995, pp. 45-57.

[27] Garrity M., “Ray Tracing Irregular Grids”, Computer Graphics, Vol. 24, No. 5, December 1990, pp. 35-40.

[28] Giertsen C., “Volume Visualization of Sparse Irregular Meshes”, IEEE Computer Graphics & Applications, March 1992, Vol. 12, No. 2, pp. 40-48.

[29] Haimes R., “Techniques for Interactive and Interrogative Scientific Volumetric Visualization”, unpublished manuscript available at http://raphael.mit.edu/visual3/visual3.html

[30] Halliday D., Resnick R., Fundamentals of Physics, Third Edition Extended, John Wiley & Sons, 1988.

[31] Hanrahan P., “Three-Pass Affine Transforms for Volume Rendering”, Computer Graphics, November 1990, 24(5):71-77.

[32] Haumann D., “Using Behavioral Simulation To Animate Complex Processes”, Ph.D. Dissertation, Department of Computer and Information Science, The Ohio State University, 1989.

[33] Hawrylyshyn P., Tasker R., Organ L., “CASS: Computer-Assisted Stereotaxic Surgery”, Computer Graphics (Proceedings of SIGGRAPH ’77), Vol. 11, 1977, pp. 13-17.

[34] Höhne K., Pflesser B., Pommert A., Riemer M., Schiemann T., Schubert R., Tiede U., “A ‘Virtual Body’ Model for Surgical Education and Rehearsal”, IEEE Computer, 1996, 29(1):25-31.

[35] Hung C., Buning P., “Simulation of Blunt-Fin Induced Shock Wave and Turbulent Boundary Layer Separation”, Paper 84-0457, AIAA Aerospace Sciences Conference, Reno, NV, January 1984.

[36] Kajiya J., Von Herzen B., “Ray Tracing Volume Densities”, Computer Graphics (Proceedings of SIGGRAPH ’84), Vol. 18, 1984, pp. 165-174.

[37] Kaufman A. (ed.), Volume Visualization, IEEE Computer Society Press, 1990.

[38] Keeve E., Girod S., Girod B., “Craniofacial Surgery Simulation”, Proceedings of the 4th International Conference on Visualization in Biomedical Computing (VBC ’96), Hamburg, Germany, Sept. 22-25, 1996, pp. 541-546.

[39] Keeve E., Girod S., Pfeifle P., Girod B., “Anatomy-Based Facial Tissue Modeling Using the Finite Element Method”, Proceedings of Visualization ’96, Oct. 27-Nov. 1, 1996, San Francisco, CA, pp. 21-28.

[40] Koch R., Gross M., Carls F., von Buren D., Fankhausser G., Parish Y., “Simulating Facial Surgery Using Finite Element Models”, Computer Graphics (Proceedings of SIGGRAPH ’96), 1996, pp. 421-428.

[41] Koyamada K., Uno S., Doi A., Miyazawa T., “Fast Volume Rendering by Polygonal Approximation”, Journal of Information Processing, Vol. 15, No. 4, 1992, pp. 535-544.

[42] Koyamada K., Ito, T., “Fast Generation of Spherical Slicing Surfaces for Irregular Volume Rendering”, The Visual Computer, Vol. 11, 1995, pp. 167-175.

[43] Kuhn C., Kühnapfel U., Krumm H., Neisius B., “A ‘Virtual Reality’ Based Training System for Minimally Invasive Surgery”, Computer Assisted Radiology ’96 (Proceedings of the International Symposium on Computer and Communication Systems for Image Guided Diagnosis and Therapy).

[44] Kurzion Y., Yagel R., “Space Deformation using Ray Deflectors”, Proceedings of the 6th Eurographics Workshop on Rendering, Dublin, Ireland, June 1995, pp. 21-32.

[45] Kurzion Y., Yagel R., “Space Deformation with Hardware Assistance”, accepted to IEEE Computer Graphics and Applications.

[46] Lacroute P., Levoy M., “Fast Volume Rendering Using a Shear-Warp Factorization of the Viewing Transformation”, Computer Graphics (Proceedings of SIGGRAPH ’94), pp. 451-458.

[47] Lamousin H., Waggenspack W., “NURBS-Based Free-Form Deformation”, IEEE Computer Graphics and Applications, 1994, 14(6):59-65.

[48] Laur D., Hanrahan P., “Hierarchical Splatting: A Progressive Refinement Algorithm for Volume Rendering”, Computer Graphics (Proceedings of SIGGRAPH ’91), pp. 285-288.

[49] Lee Y., Terzopoulos D., Waters K., “Realistic Modeling for Facial Animation”, Computer Graphics (Proceedings of SIGGRAPH ’95), 1995, pp. 55-62.

[50] Levoy M., “Display of Surfaces from Volume Data”, IEEE Computer Graphics and Applications, May 1988, 8(5):29-37.

[51] Levoy M., “Volume Rendering by Adaptive Refinement”, The Visual Computer, Vol. 6, No. 1, February 1990, pp. 2-7.

[52] Levoy M., Whitaker R., “Gaze Directed Volume Rendering”, Computer Graphics, Vol. 24, No. 2, March 1990, pp. 217-233.

[53] Levoy M., “Efficient Ray Tracing of Volume Data”, ACM Transactions on Graphics. Vol. 9, No. 3, July 1990, pp. 245-261.

[54] Lorensen W., Cline H., “Marching Cubes: A High Resolution 3D Surface Construction Algorithm”, Computer Graphics (Proceedings of SIGGRAPH ’87), Vol. 21, 1987, pp. 163-169.

[55] McDonald J., Yagel R., Schmalbrock P., Stredney D., Reed D., Sessanna D., “Visualization of Compression Neuropathies Through Volume Deformation”, Proceedings of Medicine Meets Virtual Reality (MMVR ’97).

[56] Machiraju R., Yagel R., “Efficient Feed-Forward Volume Rendering Techniques for Vector and Parallel Processors”, Proceedings of SUPERCOMPUTING ’93, Portland, Oregon, November 1993, pp. 699-708.

[57] Mao X., Hong L., Kaufman A., “Splatting of Curvilinear Volumes”, Proceedings of Visualization ’95, Atlanta, GA, October 1995, pp. 61-68.

[58] Max N., Hanrahan P., Crawfis R., “Area and Volume Coherence for Efficient Visualization of 3D Scalar Functions”, Computer Graphics, Vol. 24, No. 5, December 1990, pp. 27-33.

[59] Metaxas D., Terzopoulos D., “Shape and Nonrigid Motion Estimation Through Physics-Based Synthesis”, IEEE Transactions on Pattern Analysis and Machine Intelligence, June 1993, 15(6):580-591.

[60] Metaxas D., Terzopoulos D., “Dynamic Deformation of Solid Primitives with Constraints”, Computer Graphics (Proceedings of SIGGRAPH ’92), Vol. 26, 1992, pp. 309-312.

[61] Meyers S., Effective C++: 50 Specific Ways to Improve Your Programs and Designs, Addison Wesley Publishing Company, 1992.

[62] Möller T., personal communication.

[63] Neider J., Davis T., Woo M., OpenGL™ Programming Guide, Addison-Wesley Publishing Company, 1993.

[64] Novins K., Arvo J., “A Power Series Algorithm for Highly Accurate Volume Rendering”, Proceedings of 1992 Workshop on Volume Visualization, Boston, October 1992, pp. 83-90.

[65] Oppenheim A., Schafer R., Digital Signal Processing, Prentice Hall Inc., Englewood Cliffs, NJ, 1975.

[66] Pentland A., Williams J., “Good Vibrations: Modal Dynamics for Graphics and Animation”, Computer Graphics (Proceedings of SIGGRAPH ’89), Vol. 23, 1989, pp. 215-222.

[67] Pfister H., Kaufman A., “Cube-4 - A Scalable Architecture for Real-Time Volume Rendering”, Proceedings 1996 Symposium on Volume Visualization, San Francisco, CA, October 1996, pp. 47-54.

[68] Pflesser B., Tiede U., Höhne K., “Towards Realistic Visualization for Surgery Rehearsal”, Lecture Notes in Computer Science (Proceedings of CVRMed ’95), Springer-Verlag, 1995, pp. 487-491.

[69] Platt J., Barr A., “Constraint Methods for Flexible Models”, Computer Graphics (Proceedings of SIGGRAPH ’88), Vol. 22, 1988, pp. 279-288.

[70] Robb R., Hanson D., Camp J., “Computer-Aided Surgery Planning and Rehearsal at Mayo Clinic”, IEEE Computer, 1996, 29(1):39-47.

[71] Rogers D., Adams J., Mathematical Elements for Computer Graphics, Second Edition, McGraw-Hill, New York, 1990.

[72] Sabella P., “A Rendering Algorithm for Visualizing 3D Scalar Fields”, Computer Graphics (Proceedings of SIGGRAPH ’88), Vol. 22, 1988, pp. 51-58.

[73] Sagar M., Bullivant D., Mallinson G., Hunter P., Hunter I., “A Virtual Environment and Model of the Eye for Surgical Simulation”, Computer Graphics (Proceedings of SIGGRAPH ’94), 1994, pp. 205-213.

[74] Sederberg T., Parry S., “Free-Form Deformation of Solid Geometric Models”, Computer Graphics (Proceedings of SIGGRAPH ’86), Vol. 20, 1986, pp. 65-74.

[75] Shirley P., Tuchman A., “A Polygonal Approximation to Direct Scalar Volume Rendering”, Computer Graphics, Vol. 24, No. 5, December 1990, pp. 63-70.

[76] Silva C., Mitchell J., Kaufman A., “Fast Rendering of Irregular Grids”, Proceedings 1996 Symposium on Volume Visualization, San Francisco, CA, October 1996, pp. 15-22.

[77] Sobierajski L., Cohen D., Kaufman A., Yagel R., “A Fast Display Method for Volumetric Data”, The Visual Computer, Vol. 10, No. 2, 1993, pp. 116-124.

[78] Sobierajski L., Avila R., “A Hardware Acceleration Method for Volumetric Ray Tracing”, Proceedings of Visualization ’95, November 1995, pp. 27-34.

[79] Speray D., Kennon S., “Volume Probes: Interactive Data Exploration on Arbitrary Grids”, Proceedings of San Diego Workshop on Volume Visualization, Computer Graphics, Vol. 24, No. 5, December 1990, pp. 5-12.

[80] Stein C., Becker B., Max N., “Sorting and Hardware Assisted Rendering for Volume Visualization”, Proceedings of 1994 Symposium on Volume Visualization, Washington, D.C., October 1994, pp. 83-89.

[81] Swan J., “Object-Order Rendering of Discrete Objects”, Ph.D. Dissertation, Department of Computer and Information Science, The Ohio State University, 1997.

[82] Terzopoulos D., Platt J., Barr A., Fleischer K., “Elastically Deformable Models”, Computer Graphics (Proceedings of SIGGRAPH ’87), Vol. 21, 1987, pp. 205-214.

[83] Terzopoulos D., Fleischer K., “Modeling Inelastic Deformation: Viscoelasticity, Plasticity, Fracture”, Computer Graphics (Proceedings of SIGGRAPH ’88), Vol. 22, 1988, pp. 269-278.

[84] Terzopoulos D., Witkin A., “Physically-based Models with Rigid and Deformable Components”, IEEE Computer Graphics and Applications, 1988, 8(6):41-51.

[85] Terzopoulos D., Fleischer K., “Deformable Models”, The Visual Computer, Vol. 4, No. 6, Dec. 1988, pp. 306-331.

[86] Terzopoulos D., Waters K., “Physically-based Facial Modelling, Analysis, and Animation”, The Journal of Visualization and Computer Animation, Vol. 1, No. 4, 1990, pp. 73-80.

[87] Terzopoulos D., Metaxas D., “Dynamic 3D Models with Local and Global Deformations: Deformable Superquadrics”, IEEE Transactions on Pattern Analysis and Machine Intelligence, July 1991, 13(7):703-714.

[88] Terzopoulos D., McInerney T., “Deformable Models and the Analysis of Medical Images”, Proceedings of Medicine Meets Virtual Reality V (MMVR-V’97), 1997.

[89] Thompson D., Buford W., Myers L., Giurintano D., Brewer J., “A Hand Biomechanics Workstation”, Computer Graphics (Proceedings of SIGGRAPH ’88), Vol. 22, 1988, pp. 335-343.

[90] Upson C., Keeler M., “V-Buffer: Visible Volume Rendering”, Computer Graphics (Proceedings of SIGGRAPH ’88), Vol. 22, No. 4, August 1988, pp. 59-64.

[91] Van Gelder A., Wilhelms J., “Rapid Exploration of Curvilinear Grids Using Direct Volume Rendering”, Proceedings of Visualization ’93, San Jose, CA, October 1993, pp. 70-77.

[92] Van Gelder A., Kim K., “Direct Volume Rendering with Shading via Three-Dimensional Textures”, Proceedings 1996 Symposium on Volume Visualization, San Francisco, CA, October 1996, pp. 23-30.

[93] Westover L., “Footprint Evaluation for Volume Rendering”, Computer Graphics, August 1990, 24(4):367-376.

[94] Wilhelms J., Van Gelder A., “A Coherent Projection Approach for Direct Volume Rendering”, Computer Graphics (Proceedings of SIGGRAPH ’91), Vol. 25, No. 4, July 1991, pp. 275-284.

[95] Wilhelms J., Van Gelder A., Tarantino P., Gibbs J., “Hierarchical and Parallelizable Direct Volume Rendering for Irregular and Multiple Grids”, Proceedings of Visualization ’96, San Francisco, CA, October 1996, pp. 57-64.

[96] Williams L., “Pyramidal Parametrics”, Computer Graphics (Proceedings of SIGGRAPH ’83) Vol. 17, 1983, pp. 1-11.

[97] Williams P., “Visibility Ordering Meshed Polyhedra”, ACM Transactions on Graphics, Vol. 11, No. 2, April 1992, pp. 103-126.

[98] Williams P., “Interactive Splatting of Nonrectilinear Volumes”, Proceedings of Visualization ’92, Boston, MA, October 1992, pp. 37-44.

[99] Witkin A., Welch W., “Fast Animation and Control of Nonrigid Structures”, Computer Graphics (Proceedings of SIGGRAPH ’90), Vol. 24, 1990, pp. 243-252.

[100] Witkin A., Baraff D., Kass M., “An Introduction to Physically Based Modeling”, SIGGRAPH '94 Course Notes 32, 1994.

[101] Wolberg G., Digital Image Warping, IEEE Computer Society Press, 1990.

[102] Yagel R., Kaufman A., “Template-Based Volume Viewing”, Computer Graphics Forum (Proceedings of EUROGRAPHICS ’92), Vol. 11, No. 3, September 1992, pp. 153-167.

[103] Yagel R., “Rendering Polyhedral Grids by Incremental Slicing”, OSU-CISRC-10/93-TR35, Department of Computer and Information Science, The Ohio State University, October 1993.

[104] Yagel R., Shi Z., “Accelerating Volume Animation by Space-Leaping”, Proceedings of Visualization ’93, October 1993, pp. 62-69.

[105] Yagel R., Ebert D., Scott J., Kurzion Y., “Grouping Volume Renderers for Enhanced Visualization in Computational Fluid Dynamics”, IEEE Transactions on Visualization and Computer Graphics, July 1995, 1(2):117-132.

[106] Yagel R., Reed D., Law A., Shih P., Shareef N., “Hardware Assisted Volume Rendering of Unstructured Grids by Incremental Slicing”, Proceedings 1996 Symposium on Volume Visualization, San Francisco, CA, October 1996, pp. 55-62.
