<<

Syracuse University SURFACE

College of Engineering and - Former Departments, Centers, Institutes and College of Engineering and Computer Science Projects

1991

Three--dimensional medical : Algorithms and computer systems

M. R. Stytz Air Force Institute of Technology, Department of Electrical and Computer Engineering, Wright-Patterson AFB

G. Frieder Syracuse University, School of Computer and

O. Frieder George Mason University, Department of Computer Science

Follow this and additional works at: https://surface.syr.edu/lcsmith_other

Part of the Computer Sciences Commons

Recommended Citation Stytz, M. R.; Frieder, G.; and Frieder, O., "Three--dimensional : Algorithms and computer systems" (1991). College of Engineering and Computer Science - Former Departments, Centers, Institutes and Projects. 6. https://surface.syr.edu/lcsmith_other/6

This Article is brought to you for free and open access by the College of Engineering and Computer Science at SURFACE. It has been accepted for inclusion in College of Engineering and Computer Science - Former Departments, Centers, Institutes and Projects by an authorized administrator of SURFACE. For more information, please contact [email protected]. Three-Dimensional Medical Imaging: Algorithms and Computer Systems

M. R. STYTZ

Department of Electrical and Computer Engineering, ALr Force Institute of Technology, Wrzght-Patterson AFB, Ohzo 45433

G. FRIEDER

School of Computer and Information Science, Syracuse University, Syracuse, New York 13224

0. FRIEDER

Department of Computer Science, George Mason Unwersity, Fairfax, Virgznia 22030

This paper presents an introduction to the field of three-dimensional medical imaging It presents medical imaging terms and concepts, summarizes the basic operations performed in three-dimensional medical imaging, and describes sample algorithms for accomplishing these operations. The paper contains a synopsis of the architectures and algorithms used in eight machines to render three-dimensional medical , with particular emphasis paid to their distinctive contributions. It compares the performance of the machines along several dimensions, including resolution, elapsed time to form an image, imaging algorithms used in the machine, and the degree of parallehsm used in the architecture. The paper concludes with general trends for future developments in this field and references on three-dimensional medical imaging.

Categories and Subject Descriptors: A 1 [General Literature]: Introductory and Survey; C.0 [Computer Survey Organization]: General; C,5 [Computer Systems organization]: Computer System Implementation; 1,3.2 []: Graphics Systems; 1.3,7 [Computer Graphics]: Three-Dimensional Graphics and Realism; J.3 [Computer Applications]: Life and Medical Science

General Terms: Algorithms, Design, Experimentation, Performance

Addltlonal Key Words and Phrases: Computer graphics, medical imaging, surface rendering, three-dimensional imaging,

CASE STUDY ity. A three-dimensional (3D) model of the MRI study reveals flattening in the A patient with intractable seizure activ- gyri of the lower motor and sensory ity is admitted to a major hospital for strips, a condition that was not apparent treatment. As the first step in treatment, in the cross-sectional MRI views. The the treating physician collects a set of 63 physician orders a second study, using magnetic resonance imaging (MRI) im- positron emission tomography (PET), to age slices of the patient’s head. These portray the metabolic activity of the two-dimensional (2D) slices of the pa- brain. Using the results of the PET study, tient’s head do not disclose an abnormal- the physician assembles a 3D model of

Permission to copy without fee all or part of this material KSgranted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its data appear, and notice is given that copying is by the permission of the Association for Computing Machinery To copy otherwise, or to republish, requires a fee and/or specific permission. @ 1991 ACM 0360-0300/91/1200-421 $0150

ACM Computing Surveys, Vol 23, No. 4, December 1991 422 * Stkytz et al.

I CONTENTS the abnormality within the lower part of the motor and sensory strips. The physician uses a surgery rehearsal program to simulate surgery on the com- CASE STUDY bined brain model. To assist the physi- INTRODUCTION cian, the program displays the brain 1. UNIQUE ASPECTS OF THREE-DIMENSIONAL model and the overlying skin surfaces MEDICAL IMAGING side by side. Using a mouse-controlled 2 THREE-DIMENSIONAL IMAGING COORDINATE SYSTEMS, OBJECT SPACE cursor, the physician outlines the abnor- DEPICTION MODELS, AND TYPES OF mal area of the brain on the combined IMAGE RENDERING model. The rehearsal program uses this 3 THREE-DIMENSIONAL MEDICAL IMAGING tracing to perform a simulated cran- RENDERING OPERATIONS iotomy. Pictures of the simulated opera- 4 THREE-DIMENSIONAL MEDICAL IMAGING MACHINES tion are taken to surgery to guide the 41 Farrell’s Colored-Range Method actual procedure. An intraoperative elec- 42 Fuchs/Poultm Pixel-Planes Machine troencephalogram demonstrates seizure 43 Kaufman’s Cube Architecture activity at the site predicted by the medi- 4.4 The Mayo Clime True Tree-Dlmenslonal Machine and ANALYZE cal images. After resectioning of the 45 Medical Image Processing Group Machines abnormal area, the patient’s seizure 4.6 Pmar/Vlcom Image Computer activity ceased [Hu et al. 19891. and Pixar;Vicom II 47 Reynolds and C.oldwasser’s Voxel Processor INTRODUCTION Architecture SUMMARY The Case Study illustrates the three APPENDIX A: HIDDEN-SURFACE REMOVAL basic operations performed in 3D medical ALGORITHMS imaging: data collection, 3D data dis- ‘ APPENDIX B: RAY TRACING play, and data analysis. Three-dimen- APPENDIX C: IMAGE SEGMENTATION BY THRESHOLDING sional medical imaging, also called APPENDIX D: SURFACE TRACKING three-dimensional medical image render- ALGORITHMS ing or medical image volume visualiza- APPENDIX E: SHADING CONCEPTS tion, is the process of accurately and APPENDIX F: SHADING ALGORITHMS rapidly transforming a surface or volume ACKNOWLEDGMENTS description of medical imaging modality REFERENCES data into individual pixel for 3D data display, Three-dimensional medical imaging creates a depiction of a single structure, a select portion of an imaged the average cortical metabolic activity. volume, or the entire volume on a com- The model reveals a volume of hyperme- puter screen. Stereo display, motion tabolic activity. Because PET has rela- parallax using rotation, , tively poor modality resolution, the hidden-surface removal, coordinate location of the abnormality cannot be transformation, compositing, trans- correlated with surface anatomical land- parency, surface shading, andlor marks. To correlate the results of the two shadowing3 can be used to provide depth studies, the physician combines the 3D MRI model of the patient’s brain with the 3D PET model using post-hoc image ‘Rendering is a general term for the creation of a depiction of a structure or volume on a computer registration techniques to align the two screen using a sequence of operations to transform sets of data. 
This combined model pro- the structure/volume information from object space vides the anatomical and metabolic in- (a data structure in memory) to screen space (the formation required for precise location of CRT) 3Coordinate transformation, which consists of translation, rotation, and scaling, orients the vol- lImage, or tissue, registration permits Images of ume to be displayed Hidden-surface removal en- volumes acquired at different times and with differ- sures that only the visible portions of the objects in ent modalities to be combined mto a single image the volume are displayed Shading provides depth for display cues and enhances image reahsm

ACM Computing Surveys, Vol 23, No 4, December 1991 Three-Dimensional Medical Imaging 8 423 cues and make the rendered image closely These modalities can be used to acquire resemble a perceived or photographic im- data for accurate diagnosis of bony and age of the structure/volume. (We de- soft tissue diseases and to assess physio- scribe these operations and their use later logical activity without exploratory in this paper. ) Three-dimensional medi- surgery. Data archiving, using special- cal imaging presents the remotely sensed ized picture archiving and communica- morphological and physiological patient tion systems (PACS) [Flynn et al. 1983], data in such a way that the physician is is an acknowledged requirement for the relieved of the chore of mentally recon- modern radiology department and has led structing and orienting the volume and to the development of techniques for tis- instead can concentrate on the practice of sue /image registration. The effect of medicine. To achieve this capability, 3D these related advances is an ever- medical imaging techniques have been increasing need for higher display reso- developed to give the user the ability to lution, increased image rendering speed, view selected portions of the body from and additional main memory to render any angle with an appearance of depth to the image data rapidly and precisely. the image. The requirement for accurate images The Case Study also points out the arises from the need to formulate a diag- difference between 3D medical imag- nosis in the data analysis step. Accuracy ing and . Three- concerns us because the displayed data dimensional medical imaging generates must provide trustworthy visual infor- accurate graphical representations of a mation for the physician. There are two physical volume. The broader field of 3D components to display accuracy — data computer graphics forms realistic images collection accuracy and 3D medical im- from scene descriptions that may or may age rendering accuracy. Data collection not have a physical counterpart. The two is the radiologist’s arena and deals with fields share a common challenge in their issues concerning the statistical signifi- attempt to portray a 3D volume within a cance of collected data, patient dosage, 2D space and so use many of the same medical imaging modality operation, graphics operations to provide depth development of new techniques and cues. These operations include coordi- modalities for gathering data, and 2D nate transformation, stereo display, mo- image reconstruction. Data rendering is- tion parallax using rotation, perspective, sues belong to the computer scientist. hidden-surface removal, surface shading, They encompass computer graphics, sys- and/or shadows. Three-dimensional med- tem throughput, computer architecture, ical imaging uses these techniques to image processing, database organization, generate credible portrayals of the inte- numerical computation accuracy, and the rior of patients for disease diagnosis and user interface. As attested by the num- surgical planning. ber of radiologists and physicians in- The development of 3D medical imag- volved in researching 3D data display ing techniques has not occurred in issues, a knowledge of the physician’s isolation from other fields of medical requirements, as well as the capabilities imaging. A few examples illustrate this and limitations of the modalities, is es- point. 
On the image acquisition side, the sential to addressing the questions posed recent development and ongoing im- by the image display process. provement of medical imaging modali- The difficulty in meeting the physi- ties, such as x-ray computed tomography cian’s need for image accuracy and high (CT), ultrasound, PET, single photon image processing speed arises from the emission computed tomography (SPECT), characteristics of the data produced by and MRI, provide a wide range of image the modalities. Because the modalities acquisition capabilities. Barillot et al. sample space and reconstruct it mathe - [19881 and Stytz and Frieder [19901 pre- matically, image accuracy is limited to sent a survey of the operation of these the imaging modality resolution. The modalities. large amount of data processed, up to

ACM Computing Surveys, Vol 23, No 4, December 1991 424 “ Stytz et al.

35MB (megabytes) produced per patient since they facilitate the incorporation of by a single modality study, hinders rapid improvements in 3D medical imaging al- image formation. The challenge posed to gorithms into the rendering system. The the computer scientist lies in the devel- comparatively recent advent of high- opment of techniques for rapid, accurate power, inexpensive processors with large manipulation of large quantities of data address spaces opens the possibility for to produce images that are useful to a using a multiprocessor architecture and physician. a software-based algorithm implementa- There are many computational aspects tion to provide interactive displays to 3D medical imaging. The most promi- directly from the raw modality data. nent ones are the significant (2-35 MB) This recent approach to 3D medical im- quantity of data to be processed (prefer- age rendering foregoes medical imaging ably in real time~), the need for long- modality data reduction operations, range retention of data, and the need for achieves rapid display rates by distribut- manipulation and display of the result- ing the workload, and attains implemen- ing 3D images of complex internal and tation flexibility by using software to external anatomical structures (again realize the rendering algorithms. preferably in real time). Even as researchers approach the goal Traditionally, when rendering 3D of real-time 3D medical image rendering, medical images for disease diagnosis,s they continue to address the perennial rendering speed has been sacrificed for 3D medical imaging issues of workload reconstructed image accuracy. The first distribution, data set representation, and machines, from the Medical Image Pro- appropriate image rendering algorithms. cessing Group (presently at the Univer- Addressing these issues and the peculiar sity of Pennsylvania), used algorithms needs of 3D medical imaging brought for data management and medical imag- forth the development of the computer ing modality data reduction by organ systems that are the subject of the pre- boundary detection to provide their dis- sent survey. 6 Other surveys of the field plays. Even with a significant reduction of 3D medical imaging that the reader in the quantity of data manipulated, gen- may find useful are Barillot et al. [1988], erating a single series of rotated views of Farrell and Zappulla [1989], Gordon and a single x-ray CT study was an over- DeLoid [1990a], Herman and Webster night process [Artzy et al. 1979, 1981]. [1983], Ney et al. [1990b], Pizer and Initial attempts at achieving interactive Fuchs [1987b], Robb [19851, and Udupa display rates using the raw modality data [1989]. Fuchs [1989b] presents a brief, relied upon special-purpose multiproces- but comprehensive, introduction to sor architectures and hardware-based research issues and currently used algorithm implementations. Although techniques in 3D medical imaging. these special-purpose architectures theo- We organized this article as follows. retically produce images rapidly, the Section 1 discusses the unique aspects of drawback to their approach is in the 3D medical imaging. Section 2 des- placement of image rendering algorithms cribes 3D medical imaging data models, in hardware. Because 3D medical imag- types of rendering, and coordinate sys- ing is a rapidly evolving field, flexible tems. Section 3 describes 3D medical algorithm implementations are superior image rendering operations. Section 4 discusses eight significant 3D medical imaging machines. 
The focus of the de-

4A real-time image display rate m defined to be a display rate at or above the flicker-fusion rate of the eye, approximately 30 frames per 6The inclusion of a particular machine does not second. constitute an endorsement on the part of the au- 50ther applications of 3D medical imagmg are dis- thors, and the omission of a machine does not imply cussed in Jense and Huusmans [1989] that it is unsuitable for medical image processing

ACM Computmg Surveys, Vol. 23, No 4, December 1991 Three-Dimensional Medical Imaging “ 425 scription of each machine is on the inno- erations. Third, for a 3D representation vations and qualities that set the to provide an increase in clinical useful- machine apart from others. The paper ness over a 2D representation it must be concludes with a brief prognostication capable of portraying the scene from all on the future of 3D medical imaging. points of view. The convergence of these The appendixes provide brief descrip- facts sets 3D medical imaging apart from tions of selected 3D image rendering all other imaging tasks. algorithms. A final aspect of 3D medical imaging is the wide range of operations required to 1. UNIQUE ASPECTS OF form a high-quality 3D medical image THREE-DIMENSIONAL MEDICAL IMAGING accurately and interactively. Typical re - quired capabilities include viewing un- There is no one aspect of 3D medical derlying tissue, isolating specific organs imaging that distinguishes it from other within the volume, viewing multiple or- graphics processing applications. Rather, gans simultaneously, and analyzing data. the convergence of several factors makes At the same time, the physician demands this field unique. Two well-known, closely high quality and rapidly rendered im- related factors are data volume and com- ages with depth cues. Although the algo- putational cost. The typical medical im- rithms used for these operations are not age contains a large amount of data. In unique to 3D medical imaging, their use relation to data volume, consider that further complicates an already computa- the average CT procedure generates more tionally costly effort and so deserve than 2 million voxels per patient exami- mention. nation. MRI, PET, and ultrasound proce- dures produce similar amounts of data. The second factor is that the algorithms 2. THREE-DIMENSIONAL MEDICAL IMAGING COORDINATE SYSTEMS, OBJECT SPACE used for 3D medical imaging have great DEPICTION MODELS, AND TYPES OF computational cost even at moderate 3D resolution. IMAGE RENDERING Our survey of the medical imaging lit- Before examining 3D medical image pro- erature disclosed additional factors. First, cessing machines, it is appropriate to there are no underlying mathematical investigate the techniques used to models that can be applied to medi- manipulate 3D medical image data. This cal image data to simplify it. In medical section presents an overview and class- imaging, a critical requirement is accu- ification of the terminology used in the rate portrayal of patient data. Physicians 3D medical imaging environment. The permit only a limited amount of abstrac- 3D medical imaging environment tion from the raw data because they base includes the low-level data models used their diagnosis upon departures from the in 3D medical imaging applications, the norm, and high-level representations may coordinate systems, and the graphics op- reduce, or eliminate, these differences. erations used to render a 3D image. Therefore, from both the clinical and Detailed descriptions of the graphics medical imaging viewpoints, models of processing operations and conventions organs or organ subsystems are irrele - discussed here and in the following sec- vant for image rendering purposes. tion can be found in a standard graphics Second, the display is not static. Typi- text such as Burger and Gillies [19891, cally, physicians, technicians, and other Foley et al. [19901, Rogers [19851, and users want to interact with the display to Watt [19891. 
perform digital dissection and slicing op- The section opens with a discussion of the voxel data model and is followed by a discussion of the coordinate systems used in 3D medical imaging. The importance 7Dissection provides the illusion of peeling or cutting away overlying layers of tissue of the voxel model comes from its use in

ACM Computing Surveys, Vol. 23, No 4, December 1991 426 0 Stytz et al. the CT, MRI, SPECT, and PET medical dimensions of a voxel in a CT slice are imaging modalities as well as for 0.8 mm x 0.8 mm x 1-15 mm. 3D medical image rendering. The voxel Figure 1 presents a voxel representa- model is the assumed input data format tion of a cube-shaped space. Figure 2 for the three major approaches to medi- presents a notional voxel representation cal image object space modeling de- of a slice of CT data. The figure shows an scribed in this section: the contour idealized patient cross section with a grid approach, the surface approach, and the superimposed to indicate the division of volume approach. These three object space into voxels, A 2D slice of voxel space modeling techniques provide data naturally into a 2D array. the input data to the two major classes To represent a volume, a set of 2D of 3D medical image rendering voxel-based arrays is combined, possibly techniques—surface and volume render- with interpolation, to form a 3D array ing. We summarize these rendering tech- of voxels. The 3D voxel array naturally niques at the end of this section. maps into a 3D array. Each array ele- A voxel is the 3D analog of a pixel. ment corresponds to a voxel in the 3D Voxels are identically sized, tightly digital scene. The array indexes corre- packed parallelepipeds formed by divid- spond to the voxel coordinates. The value ing object space with sets of planes paral- of an array element is the value of the lel to each object space axis. Voxels must corresponding voxel. be nonoverlapping and small compared Three-dimensional medical imaging to the features represented within an im- systems commonly use four different aged volume. In 3D medical imaging, coordinate systems: the patient space each voxel is represented in object space, coordinate system, the object space image space, and screen space by its three coordinate system, the image space coor- coordinate values. Because medical dinate system, and the screen space coor- imaging modalities characterize an im- dinate system. Patient space is the 3D aged volume based on some physical pa- space defined by an orthogonal coordi- rameter, such as x-ray attenuation for nate system attached to the patient’s na- CT, proton density and mobility for MRI, tive bone. The z-axis is parallel to the number of positrons emitted for PET, and patient’s head-to-toe axis, the y-axis is number of y-rays emitted for SPECT, the parallel to the patient’s front-to-back value assigned to each voxel is the physi- axis, and the x-axis is parallel to the cal parameter value for the correspond- patient’s side-to-side axis. Mapping pa- ing volume in the patient. For example, tient space into object space requires the in the data collected by a CT scan the sampling and digitization of patient space value assigned to a voxel corresponds to using a medical imaging modality. the x-ray attenuation within the corre- The 3D coordinate system that defines sponding volume of the patient. 8 The and contains the digitized sample of pa- higher the resolution of the modality, tient space is object space. The 3D coordi- the better the modality can characterize nate system that contains the view of the the variation of the physical parameter object space volume desired by the ob- within the patient. The lower limit on server is image space. 
The 2D coordinate the cross section of a voxel is the resolu- system that presents the visible portion tion of the modality that gathered the of image space to the observer is screen image data. Representative values for the space. Figure 3 shows the relationship between object and image space, with the direction of positive rotation about each axis indicated in the figure. The object and image space coordinate ‘The value is calculated using a process called systems are related to each other using computed tomography; a complete discussion of the issues involved is presented m Herman and Coin geometric transformation operations [19801. [Foley et al. 1990]. The object space coor-

ACM Computing Surveys, Vol 23. No 4, December 1991 Three-Dimensional Medical Imaging “ 427

Y

z .4 ,** ***f ,* z x Figure 1. Voxel representation of a cube in object space

Volume Grid

VOX(2I

Figure 2. Voxel representation of a CT slice

ACM Computing Surveys, Vol. 23, No 4, December 1991 428 ● Stytz et al. Y “1

x

Figure 3. Object and image space coordinate systems.

dinate system, represented by x, y, and other visual depth cue information is used z, is fixed in space and contains a com- in this space to portray the 3D relation- plete or partial description of the volume ships upon a 2D CRT. as computed by a medical imaging To accommodate the unique require- modality. A partial description can be ments imposed by the 3D medical imag- the surface of one or more organs of in- ing environment, three major approaches terest, whereas a complete description is to portraying an object in a 3D medi- the entire 3D voxel data set, possibly cal image volume have been developed. interpolated, output from a medical These are the contour, surface, and vol- imaging modality. The image space coor- ume object space depiction methods, also dinate system, signified by x’, y’, and z’, called the contour, surface, and volume contains the volume description obtained data models. Each technique uses a dif- after application of coordinate transfor- ferent type of scene element 10 to depict mation matrixes, image Segmentationg information within an imaged volume. operators, or other volume modification We summarize these methods here and operations. Image space contains the refer the reader to Farrell and Zappulla view of the volume desired by the ob- [19891 and Udupa [19891 for detailed server. The object space representation surveys of these models. remains unchanged as a result of the Contour and surface object space por- operations; changes are evident only in trayal methods provide rapid image ren- image space. The screen space coordinate dering by deriving lower dimensional system, signified by u and v, is the 2D object space representations for an iso- space that portrays the visible contents lated surface of an organ/object of inter- of image space. Shading, shadows, and est from the complete 3D object space

10Ascene element is a primitive data element that ‘Segmentation is the process of dividing a volume represents some discrete portion of the volume that into discrete parts. One way to perform segmenta- was imaged A scene element M the prlmltive data tion 1s by thresholding. See Appendix D. element in object space

ACM Computmg Surveys, Vol 23, No 4, December 1991 Three-Dimensional Medical Imaging ● 429 array. Accordingly, these methods re - The contour, lD-primitive, approach duce the amount of data manipulated uses sets of lD contours to form a de- when forming an image. The data reduc- scription of the boundary of an object of tion is obtained at the cost of being able interest within the individual slices that to provide only one type of rendering, form a 3D volume. The boundary is rep- called surface rendering. These tech- resented in each slice as a sequence niques suffer from the requirement for of points connected by directed line reprocessing the volume to extract the segments. The sequence of points corre- surface of the organ/object from the im- sponds to the voxels in the object that are aged volume whenever selecting a differ- connected and that lie on the border be- ent organ/object or altering the organ/ tween the object and the surrounding object. For example, cutting away part of material, Before forming the boundary the surface of the object using a clipping representation, the desired object contour plane alters the visible surface of the must be isolated using a segmentation object. In this instance, the 3D object operator. Thresholdingll can be used space array must be reprocessed to ex- if the boundary occurs between high- tract that portion of the object that is contrast materials. Otherwise, boundary visible behind the clipping plane. A sur- detection using automatic or manual vey of machines that perform 3D medical tracking 12 is used for segmentation. Due imaging based on contour and surface to the complexity found in the typical methods can be found in Goldwasser medical image, automatic methods some- et al, [1988bl, Volumetric object space times require operator assistance to portrayal methods avoid the volume re- extract the boundary of an object accu- processing penalty by using the 3D object rately. Figure 4 contains the object in space array data directly. These methods Figure 2 represented as a set of contours, pay a render-time computation penalty with the heads of the arrows indicating due to the large volume of data manipu- the direction of traversal around the ob- lated when generating an image. Be- ject. The use of this methodology for dis- cause volumetric methods use the entire play of 3D medical images is described data set, they support two different types in Gibson [1989], Seitz and Ruegseger of rendering: the display of a single sur- face and the display of multiple surfaces or composites of surfaces. Note that all llThresholding is a process whereby those voxels three portrayal methods begin with a 3D that lie within or upon the edge of an organ are array of voxel data derived from a 3D determined based solely upon the value of each medical imaging modality. The differ- voxel, Typically, the user selects a range of voxel ence between the three lies in the exis- values that is assumed to lie within the object and also do not occur outside of the object. The voxels tence of an intermediate one-dimensional within the volume are then examined one by one, (ID) or 2D representation of a structure with those voxels that fall within the range se- of interest for contour and surface meth- lected for further processing to depict the surface. ods and the lack of this representation lzTracking is a procedure whereby those voxels that lie within or upon the edge of an organ are for volume (3D) methods. 
1 sum- determined based upon the value at each voxel and marizes the salient characteristics of the the values of the voxels lying nearby. For manual three object space portrayal methods. tracking, the user defines the surface by moving a Referring back to Figure 2, it contains cursor along the border of the object, with the border defined by high visual contrast between the an idealized representation of a slice of object and surrounding material In automatic medical imaging modality data. The fig- tracking, a single seed voxel is specified by the ure shows a single object within object viewer; then the remainder of the voxels on the space, with a well-defined borde~ and border are located by gradually expanding upon the varying voxel values within the slice. We set of voxels known to lie on the edge One way of expanding the set is to select the next voxel in the use this as the basis for the edge based upon its adjacency to voxels known to following descriptions of the 3D medical be on the edge and its similarity in value to voxels imaging data models. known to be on the edge.

ACM Computing Surveys, Vol 23, No. 4, December 1991 430 “ Stytz et al.

Table 1. Three-Dimensional Medical Imagmg Object Space Portrayal Methods

Method Contour Surface Volume

Data primitive ID 2D 3D dimension Object Directed contours of Tiles between in-shce None; voxels used representation the object boundary contours or faces of dmectly object boundary voxels Type of Surface Surface Surface or volume rendering supported Object selection Performed in a Performed in a Performed at image- preprocessing” step preprocessing step rendering time using using segmentation using segmentation segmentation Number of One per preprocessing One per processing Determined at image- objects step step rendering time displayed Representation Low; must reprocess Low; must reprocess High; all decisions flexibility the data for each new the data for each new deferred until image- object or change in object or change in rendering display time object object Object space Lowest Low High memory requirement

‘Preprocessing is a term used to indicate the operations that must be performed before the volume, or a given object within the volume, can be rendered For example, the use of a contour-based model of an object requu-es extraction of the contours of the object in a preprocessing step before the object can be rendered.

MRI, PET, SPECT, or ultrasound scan- ner have gaps between the slices, a method that approximates the surface between the slices must be used to repre- sent the surface. An additional difficulty faced by this technique and ID primitive techniques is the separation of the desired surface from the surrounding modality data. The two major classes of surface depiction algorithms are the tiling meth- ods and the surface-tracking methods. The goal of tiling techniques [Ayache 1989; Chen et al. 1989b; Fuchs et al. Figure 4. Contour-based representation of a slice 1977; Lin and Chen 1989; Linn et al, of medical imaging modality data 1988; Shantz 1981; Toennies 1989; Toennies et al. 1990], also called patch- ing, is to determine the surface that en- [1983], Steiger [1988], Tuomenoksa closes a given set of slice contours, then [19831, Udupa [19851, and Udupa and represent the surface with a set of 2D TUY [1983]. primitives. As a preliminary step, tiling Surface, 2D-primitive, approaches to requires the extraction of the contours of 3D medical imaging use 2D primitives, the object of interest in each slice using usually triangles or polygons, to show any of the methods for the contour ap- the surface of a selected portion of the proach. The extracted contours are then volume. Because the data from a CT, smoothed, and geometric primitives, typ -

ACM Computing Surveys, Vol 23, No 4, December 1991 Three-Dimensional Medical Imaging g 431 ically triangles, are fitted to the contours to approximate the surface of the object in the interstice space. This approach is popular because tiled surfaces furnish re- al istic representations of the objects se- lected for display, especially when objects are differentiated with and. when multiple light sources are used to illumi- nate the scene. Because of the data re- duction inherent in going from the input data to a tiled surface representation, the computational cost of displaying the ob- ject is low. The disadvantages inherent A B c in this approach are the time required to extract the surface, the difficult y in per- Figure 5. Tiled representation of a portion of the surface of three slices of medical imaging modality forming the tiling (structures can split data. and merge between slices), and the re - quirement for reaccomplishing object contouring whenever the user selects a different organ or alters the organ. Fig- of the 3D volume. Surface tracking be- ure 5 contains a tiled depiction of a gins after the user specifies a seed point portion of the surface defined by three in the object. The surface is then adjacent slices; the raw data for each “tracked” by finding the connected vox- slice is simi lar to that in Figure 2. A els in the object that lie on the boundary contour description represents each slice; of the object. Voxel connectivity can be the inter slice portion of the volume is established using a variety of criteria, delineated by triangles. the most common being face connectiv- Surface-tracking methods [Artzy 1979, ityy, face or edge connectivity, and face, 1981; Chine et al. 1988; Ekoale 1989; edge, or vertex connectivity. If the face- Gordon and Udupa 1989; Herman and connectivity criterion is used, for exam- Liu 1978, 19’79; Liu 1977; Lorensen and ple, two voxels are adjacent in the object Cline 1987; Rhodes 1979; Udupa 1982; if any face on one voxel touches any face Udupa and A~anagadde 1990; Udupa and on the other. 14 When displaying the sur- Odhner 1990; Udupa et al. 1982; Xu and face, the surface display procedure se- Lu 1988] also represent the surface of an lects only the voxel faces oriented toward object of interest using 2D primitives. In the observer, The highest quality display this case, the 2D primitive is the face of a procedures shade each visible voxel face voxel. The object’s surface is represented based on an estimation of the surface by the set of faces of connected voxels norma115 at the center of the face. Figure that lie on the surface of the object. 6 shows a portion of the surface of three Surface-tracking operations generally be- adjacent slices formed by a surface- gin by interpolating the original image tracking operation. The visible faces of data to form a new representation of the the cubes are dark, with the faces that volume composed of voxels of equal lie on the interior of the object colored length in all three dimensions. The white. Appendix D presents a short de- structure to be examined is isolated by scription of three surface-tracking algo- applying a segmentation operator to the rithms. 
volume to form a binary 13 representation .— 14Edge connectivity requires that the edge of one voxel touch the edge of the other, vertex connectiv- ity requires that the vertex of one voxel touch a 131n a binary representation, voxels located within vertex of the other, the object are assigned a value of 1, all other voxels 15The surface normal is the line perpendicular to a are set to O line tangent to a given point on the face,

ACM Computing Surveys, Vol 23, No. 4, December 1991 432 * Styt.z et al.

The voxel-based object space volume is often represented within the machine as a 3D array, with array access based upon the object space x, y, and z coordinates. One method for extracting informa- tion from the 3D array is back-to-front access,17 described in Frieder [1985a]. To compute a visible surface render- ing without segmentation, the algorithm starts at the deepest voxel in the object space array and works its way to the closest voxel, transferring voxel values A B c from object space to image space as it proceeds. When it concludes, the algo- Figure 6. A portion of the surface of three slices of rithm has extracted the visible portions medical imaging modality data formed using a sur- face-tracing operation. of all the objects within object space for display. Because of the computational cost of this procedure, techniques based on this algorithm that reduce image ren- The volume, 3D-primitive, -based vol- dering time have been developed. These ume approach has grown in popularity techniques accelerate processing by re- due to its ability to compute volume ducing the number of data elements ex- renderings as well as surface renderings amined and/or by reducing the time without performing surface extraction required to access all the data elements. preprocessing. Volume methods operate For examples of these acceleration tech- directly, possibly with interpolation, niques see Goldwasser and Reynolds upon the data output from the medical [19871, Reynolds [1985], and Reynolds imaging modality. Surface extraction et al. [19871. By specifying a segmenta- processing, if performed, is deferred until tion threshold or threshold window,ls the the transformation of the imaged volume technique in Frieder et al. [1985a] can from object space into image space. 16 As extract the visible surface of a single a result, 3D-primitive-based renderings object for display. Later in this paper, we can portray the surface of selected or- describe how 3D-primitive-based volume gans, as in the ID- and 2D-primitive- methods can be used for volume render- based approaches, or they can provide a ing. Figure 2 is a voxel-based depiction of view of the entire contents of the volume. a volume. The drawback to this approach is the Because of the computational expense large amount of data processed when incurred when accessing the 3D space constructing a view of object space. The array, several researchers have used the commonly used 3D primitives are the octree data structure, or a variation of it, voxel (possibly interpolated to form cubic to reduce data access time. The octree is voxels) and the octree. (We discuss the an eight-ary tree data structure formed octree data structure later. ) by recursive octant-wise subdivision of the object space array. Octant volumes continue to be subdivided until a termi- nation criterion is satisfied. Two common 16An exception to this general rule is the use of the termination criterion are the total vol- binary volume object space representation In this case, object space M preprocessed to segment the volume into an organ of interest and background This technique offers the capability for interactive 17Back.to.front access can be used with ID, 2D, and viewing of the inside and outside of the organ 3D primitives without recomputing vmible surfaces while reduc- ‘8A threshold window is a user-defined range of ing the memory requirement for storing the object voxel values specified to separate items of interest, space 3D array. such as an organ, from the remainder of a volume.

ACM Computmg Surveys, Vol 23, No 4, December 1991 Three-Dimensional Medical Imaging ● 433 ume represented by a node and the com- and Tuy [1984], and Weng and Ahuja plexity (homogeneity) of the volume [19871. Note that the preprocessing step represented by the node. In the octree, faces the same voxel classification prob- each node of the tree represents a volume lems encountered when using lD and 2D of space, and each child of a given node object space representations even though represents an octant within the volume surfaces are not explicitly extracted. of the parent node. Each node of the Using one of the three object space octree contains a value that corresponds portrayal methods, a desired object of to the average value of the object space interest or the entire volume can be array across the octant volume repre - rendered. The two major approaches to sented by the node. The root node of the 3D medical image rendering built upon tree represents the entire object space these three depiction methods are sur- volume, and leaf nodes correspond to vol- face rendering and volume rendering. A umes that are homogeneous, or nearly surface rendering presents the user with so. Leaf nodes do not represent identi- a display of the surface of a single object. cally sized volumes; instead they repre- A volume rendering displays multiple sent object space volumes that satisfy the surfaces or the entire volume and pre- termination criteria. For example, if sents the user with a of the homogeneity is a criterion, each leaf rep- entire space. Volume rendering uses resents an object space volume that en- 3D primitives as the input data type, compasses a set of voxels having the same whereas a surface rendering can be value. Intermediate levels of nodes rep- computed using lD, 2D, or 3D primitives. resent nonhomogeneous octant volumes; To compute a surface rendering, the the value for each of these nodes is deter- input data must be processed to extract a mined by volume-weighted averaging the desired surface or organ for display. values of the child nodes. To compute a surface rendering, voxels We classify the octree as a 3D-prim- must be deterministically classified itive object space representation because into discrete object-of-interest and the nodes of the octree represent volumes nonobject-of-interest classes (possibly in in object space. The octree is not, a preprocessing step). The resulting ren- however, a pure 3D-primitive-based rep- dered image provides a visualization of resentation because of the voxel classifi- the visible surface of a single object cation preprocessing performed when within the 3D space. Note that during creating the octree data structure. Chen the classification phase, object space must and Huang [1988] provides an introduc - be examined for qualifying voxels. If a tory survey and tutorial on the use of record of qualifying voxels is preserved, octrees in medical imaging, Samet [1990] subsequent renderings of the object’s sur- describes techniques for the creation and face can be made without reaccomplish- use of octrees for representing objects by ing voxel classification. Maintaining a their surfaces. Further work on the for- record substantially reduces the render- mation and use of octrees in medical ing computational burden because only imaging is reported in Amans and the object of interest, rather than all of Darrier [1985], Ayala et al. [1985], object space, must be processed. The use Carlbom et al. 
[1985], Chen and of a record of qualifying voxels differenti - Aggarwal [1986], Gargantini [1982], ates the surface renderings derived from Gargantini et al. [1986a, 1986 b], the contour (lD) and surface (2D) meth- Glassner [1984, 1988], Jackins and ods from the surface renderings derived Tanimoto[1980], Kunii et al. [1985], from the volume (3D) method. Levoy [1988a, 1988b, 1989a, 1989b, For illustrative purposes, Figure 7 de- 1990a, 1990b], Levoy et al. [1990], Mao picts surface renderings of a human skull et al. [1987], Meagher [1982a, 1982b, computed from 2D (left) and 3D (right) 1985], Samet [1989], Spihari [1981], Stytz primitives. These two particular figures [1989], Tamminen and Samet [1984], Tuy look very similar, and to the lay person,

ACM Computing Surveys, Vol 23, No 4, December 1991 434 “ Stytz et al.

Figure 7. Surface rendering of human skull from 2D primitives (left) and 3D primitives (right) using the technique described in Udupa and Herman [19901. Reproduced with permission from Udupa and Herman [19901.

are for all intents and purposes identical, of this type of imaging is to present the Two-dimensional-primitive-based render- object (s) of interest within the anatomi- ings are, however, commonly crisper in cal context of surrounding tissue. detail than 3D-m-imitive-based surface Currently, there are four major types renderings, whic~ can appear fuzzy. The of volume rendering: volume rendering other differences between the two surface using multiple threshold windows, volu- renderings are the amount of time re - metric compositing, max-min display, and quired to compute the surface rendering radiographic display. Volume rendering and the greater volume manipulation using multiple threshold windows uses flexibility inherent in the 3D-primitive the threshold windows to extract differ- approach. The 3-D-primitive approach is ent tissue types from the volume. This more flexible because all of the data in type of volume rendering generates an the volume is available at rendering image containing multiple opaque object time. surfaces. Volumetric compositing, some- Volume rendering treats voxels as times called compositing, generates an opaque or semiopaque volumes with the image containing multiple semiopaque opacity of each voxel determined by its and/or opaque tissue surfaces in a two- value. Typically, the value is assumed to step process. The first step is determina- correlate to the type(s) and amount of tion of the blend of materials present in material in the voxel. For each rendition each voxel, possibly using a probabilistic all the voxels in object space are exam- classification scheme, yielding an opacity ined, making this type of rendering value and a color for the voxel. The sec- significantly more computationally ex - ond step is the combination of the voxel pensive than surface renderings gener- opacity and color values into a single ated from ID or 2D primitives. The aim representation of light attenuation by the

ACM Computing Surveys, Vol 23, No. 4, December 1991 Three-Dimensional Medical Imaging ● 435

Figure 8. Volume rendering of a human head. Three radiation treatment beams encompassing a treatment region and MRI intensities are superimposed on cut planes in a volume rendering of a patient’s head. The head with exposed cortex was rendered using 3D primitives generated from 109 MRI slices with other objects superimposed by hybrid volume-rendering techniques. courtesy of the University of North Carolina Departments of Computer Science and Radiation Oncology.

voxels along the viewer’s line of sight for changes in lighting and voxel value through image space. The resulting ren- to displayed-color mapping, object space dered image can provide a visualization must be completely examined to compute of the entire contents of the 3D space or each type of volume rendering. of several selected materials (such as The next two figures show different bone and skin surfaces) in the space. types of volume renderings. The patient We describe volumetric compositing in data in Figure 8 were acquired using an greater detail in the next section. Max- MRI unit; those in Figure 9 were ac- min display, also called angiographic dis- quired using a CT. Figure 8 is a volumet- play, determines the shade at each screen ric compositing that displays the first space pixel using the maximum (or mini- visible patient surface along a ray com- mum) voxel value encountered along the posited with three radiation treatment path of each ray cast from each screen beams. Figure 9 is an example of a vol- space pixel through image space. Radio- ume rendering that uses multiple thresh- graphic display, also called transmission old windows to display the surface of two display, determines the shade at each different objects in a volume. At each screen space pixel from the weighted sum pixel in Figure 9, the surface that ap- of the voxel values encountered along the pears in the image is the one surface of path of each ray cast from each screen the two that is closest to the observer in space pixel through image space. Except image space.

ACM Computing Surveys, Vol 23, No. 4, December 1991 436 ● Stytz et al.

Figure 9. Volume rendering showing the lungs and spine of a patient. The 3D primitives used in the rendering were acquired in a 24-slice CT study. Image courtesy of Reality Imaging Corporation, Solon, Ohio. Images rendered using the Voxel Flinger.

We have adopted a definition for ‘the classify renderings from contour and sur- surface- and volume-rendering opera- face object space portrayal methods as tions that emphasizes what the user sees surface renderings and the renderings of in the rendered image, as in Levoy multiple surfaces using volume methods [1990b]. A different set of definitions for as volume renderings. The set of defini- these same operations emphasizes the tions used in this survey, however, clas- amount of data processed to compute the sify a rendering from a volume method rendering [Udupa 1989; Udupa and that depicts a single surface as a surface Odhner 1990]. In this other set of render ing. The other set of definitions definitions, a surface rendering is any would classify this same rendering as a rendering formed from a contour- or sur- volume rendering. face-based object space representation. A Whether a surface or a volume render- volume rendering is any rendering ing is computed, an important aspect to formed from a volume method object providing a 3D illusion within 2D screen space representation. The distinction be- space is the shading (i. e., coloring) of tween the two sets of definitions lies in each pixel. Shading algorithms assign a the classification of those techniques that color to a pixel based upon the surface process all of object space but display a normal, orientation with respect to light single surface. Both sets of definitions sources and the observer, lighting, image

ACM Computing Surveys, Vol. 23, No. 4, December 1991 Three-D in-.ensional Medical Imaging ● 437 space depth, and type of material por- trayal method can be used for surface or trayed by the pixel. We present a more volume rendering. Tiede et al. [1987] detailed description of the concepts used, compares the performance of four to perform shading in the next section different surface-rendering techniques, and in Appendix E. In this section, we Comparisons of surface- and volume- limit the discussion to a classification rendering methods and shading tech scheme for 3D medical imaging shading niques are presented in Gordon and techniques. DeLoid [1990b], Lang et al. [19901, We classify 3D medical image shading Pommert et al, [1989, 1990], Tiede et al, techniques based upon the methodology [1990], and Udupa and Hung [1990a, used to estimate the surface normal. 1990b]. Talton et al. [1987] compares the There are two categories of surface nor- efficacy of several different volume- mal estimation techniques— object space rendering algorithms. methods and image space methods. Table 2 illustrates how the concepts Object space methods use information discussed in this section have been im- available in object space; image space plemented in a wide variety of 3D medi- methods use information available in im- cal imaging machines. We classify the age space. Object space surface normal 3D medical image rendering machines estimation shading algorithms, as in by object space portrayal method, type of Chen et al. [1985], Chuang [1990], rendering, type of shading algorithm, ar- Gourand [1971], Hohne and Bernstein chitecture, and development arena. The [19861, and Phong [19751, calculate the architecture indicates the type of hard- normal for a visible voxel face using the ware used to execute the algorithms in geometry of the surrounding voxels or the machine. We mention the develop- voxel faces or using the gray-scale gradi- ment arena to indicate whether the ent (local pixel value gradient) between machine was designed as a research ex- the target pixel and its neighboring pix. periment or for commercial purposes. els. The object space surface normal esti - References for the machines summarized mation does not change due to rotation of in the table and discussed later in this the object. This category of techniques survey are presented in the body of the can reduce image-rendering time but at survey. the cost of increased time for preprocess- ing the data. Image space surface normal 3. THREE-DIMENSIONAL MEDICAL IMAGING estimation shading algorithms, as in RENDERING OPERATIONS Chen and Sontag [1989], Gordon and Reynolds [1!383, 1985], and Reynolds This section presents a survey of the [1985], estimate the normal using the operations commonly performed when distance gradient (z’ gradient) between rendering a 3D medical image. Because the target pixel and its neighbors. This additional capabilities are under devel - technique requires recomputing the esti- opment, these operations are not an all- mated surface normal after each rota- inclusive list. Rather, they indicate the tion. Appendix F contains a description broad range of capabilities desired within of object space surface normal estima- a 3D medical imaging machine. tion algorithms described in Chen Adaptive histogram equalization [Pizer et al. [1985], Gourand [1971], Hohne and et al. 
1984, 1986, 1987, 1990b] is an im- Bernstein [19861, and Phong [1975], as age enhancement operation used to in- well as the image space surface normal crease the contrast between a pixel and estimation algorithm described in its immediate neighbors. For each pixel, Gordon and Reynolds [19851. the technique examines the histo~am of The contour- and surface-based meth- intensities in a region centered on the ods are suitable for computing surface pixel and sets the output pixel intensity renderings of an object of interest, according to its rank within its histo- whereas the volume object space por - gram. The technique has been used to

ACM Computing Surveys, Vol 23, No. 4, December 1991 >

; cd U2 CJ v Three-Dimensional Medical Imaging 8 443 enhance visual contrast in 2D image for example, be used to extract the skull slices and to amplify feature boundary in Figure 7 and the spine and lungs in contrast before performing segmentation Figure 9. A popular approach to bound- (described below). ary detection is surface tracking. Surface Antialiasing [Abram and Westover tracking locates all the connected surface 1985; Booth et al. 1987; Burger and voxels in the scene that are in the same Gillies 1989; Cook et al. 1984; Crow 1977, object /organ. An advantage of boundary 1981; Dippe and Weld 1985; Foley et al. detection over image segmentation by 1990; Fujimoto and Iwata 1983; Lee and thresholding (see below) is improved iso- Redner 1990; Max 1990; Mitchell 1987; lation of the connected components of the Watt 1989] is an image-rendering opera- object within the scene. Boundary detec- tion that diminishes the appearance of tion accomplishes connected component jagged edges within the rendered image extraction implicitly when determining by eliminating spurious high-frequency the object boundary. Because of the diffi- artifacts. Antialiasing accomplishes this culties and inaccuracies inherent in objective by smoothing the edges of ob - making a binary decision concerning the jects in the scene. A common approach to presence or absence of a material within antialiasing in medical imaging is to in- a volume, probabilistic classification crease the resolution of the image during schemes combined with image composit - processing and then resample the data ing operations have been developed. back to the original resolution for final These techniques depict the boundary of display. This technique is referred to as an object fuzzily rather than discretely. super sampling. Appendix D presents three algorithms Supersampling achieves an acceptable devised to perform boundary detection by level of antialiasing at low computa- surface tracking. These are the algo- tional cost. It determines the shade of a rithm proposed by Liu [1977], the pixel by using multiple probes of image algorithm specified in Artzy et al. [1981], space per display pixel, The weighted and the “marching cubes” algorithm sum of the values returned by the probes described by Lorensen and Cline [1987] determines the shade of the pixel. The and Cline et al. [1988]. rendering system must perform graphics Clipping, also called data cutout, iso- operations at high resolution lates a portion of a volume for examina- (at least two or more times the resolution tion by using one or more clipping planes. of the final display) and determine the The clipping planes delineate the bound- low-resolution display-pixel values by av- ary of the subvolume to be extracted. In eraging the high-resolution pixel values. Figure 8, clipping planes were used to For example, by calculating the image at remove the wedge-shaped volume from a resolution of 1024 x 1024 pixels and the back of the patient’s head to expose then displaying it at a resolution of 512 x the underlying material. Using the other 512 pixels, four pixel values in the high- operations described in this section, the resolution image are averaged into one extracted subvolume can be manipulated pixel in the final display. The averag- in the same manner as the whole ing blurs the final image, thereby re- volume. ducing the occurrence of jagged edges Digital dissection is a procedure used and high-frequency artifacts. 
to cut away portions of overlying mat- Boundary detection is a method for im- erial to view the material lying un- age segmentation that produces object derneath it within the context of the boundary information for an object of in- overlying material. For example, if a user terest. A boundary detection algorithm wishes to view a portion of a fractured identifies the surface of the organ of in- skull, this operation would allow the terest in the scene and extracts the sur- skull to be viewed along with the skin face from the remaining portion of the and surface of the head surrounding the 3D digital scene. This technique could, fracture. Figure 8 presents an example of

ACM Computmg Surveys, Vol. 23, No 4, December 1991 444 “ Stytz et al. this technique with its simultaneous de- user to view portions of the rendered im- piction of the patient’s skin, skull, and age that would otherwise be invisible and brain. Cutting planes are not commonly also enhances the 3D effect in the ren- used for this operation. Instead, a graph- dered image. Successive rotations are cu- ical icon displayed upon the cathode ray mulative in their angular displacement tube (CRT) indicates the location of of objects in the scene relative to their the digital scalpel. The scalpel indicates starting positions. The 3D effect provided where to remove material. The cutting by rotation can be further enhanced by operation can be accomplished in making a movie of the rotation of the the renderer by treating the “cut away” scene through a sequence of small angles voxels’ values as transparent, thereby around one axis [Artzy et al. 1979; Chen allowing the underlying materials to be 1984b; Robb et al. 1986]. viewed. This operation is a fine-grain Hidden-surface removal [Burger counterpart of clipping. Instead of re - and Gillies 1989; Foley et al. 1990; moving entire planes of voxels, however, Frieder et al. 1985a; Fuchs et al. 1979; digital dissection removes areas only one Goldwasser and Reynolds 1987; Meagher or two voxels wide. 1982a, 1982b; Reynolds 1985; Reynolds False color (also called the colored et al. 1987; Rogers 1985; Watt 1989; Ap- range method) [Farrell et al. 1984] is an pendix A] is one of the most computation- image enhancement operation used to ally intensive operations in 3D medical differentiate multiple objects displayed imaging. Hidden-surface removal deter- within the scene. False coloring is a two- mines the surfaces that are visible/in- step procedure. In the first step, the user visible from a given location in image assigns a color value to each range of space. Hidden-surface removal algo- voxel values to be visualized. In the sec- rithms fall into the three main types: ond step, the renderer classifies and col- scanlinelg based, depth-sort, and z- ors individual voxels in image space. buffer. Each type uses the depth, also Coloring is accomplished by assigning a called the z’ distance, of each scene ele- color to a voxel then darkening the in- ment that projects to a pixel to determine tensity of the color according to the im- if it lies in front of or behind the scene age space distance between the voxel and element currently displayed at the same the observer. After coloring the voxel in pixel, Appendix A provides further infor- image space, the voxel is projected into mation and example algorithms. screen space. By making voxels at the Histogram equalization [Gonzalez and back of the scene darker than those at Wintz 1987; Hummel 1975; Richards the front, the display provides the viewer 1986] is an image enhancement opera- with depth cues while simultaneous- tion that attempts to use the available ly differentiating the structures in the brightness levels in a CRT display scene. equally. Histogram equalization modifies Geometric transformations, consisting the contrast in an image by mapping of scaling, translation, and rotation pixel values in original screen space [Burger and Gillies 1989; Foley et al. to pixel values in modified screen 1990; Rogers 1985], allow the user to space based on the distribution of occur- examine the scene from different orienta- rences of pixel values. The procedure al- tions and positions. 
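The two-step false-coloring (colored-range) procedure described above can be sketched as follows. The voxel-value ranges, the color table, and the linear darkening with image space depth are illustrative assumptions for this sketch, not the exact mapping used by Farrell et al. [1984].

# Hypothetical value ranges (e.g., CT numbers) and their assigned colors.
RANGES = [((-100, 100), (1.0, 0.2, 0.2)),   # soft tissue -> red
          ((300, 3000), (1.0, 1.0, 1.0))]   # bone        -> white

def false_color(value, depth, max_depth):
    """Step 1: classify the voxel by value range; step 2: darken the
    assigned color with image space distance from the observer."""
    for (lo, hi), color in RANGES:
        if lo <= value <= hi:
            shade = 1.0 - depth / max_depth   # nearer voxels are brighter
            return tuple(shade * c for c in color)
    return None                               # background: not displayed

print(false_color(value=500, depth=40, max_depth=256))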
Scene rotation, locates a greater proportion of the gray scaling, and translation are expressed scale to the ranges of values with a large relative to the object space x-, y-, and number of screen space pixels than to z-axis. Geometric transformations take a ranges with fewer pixels. Histogram scene in object space, transform it, and equalization, unlike adaptive histogram place the result in image space. Scaling enlarges or shrinks the rendered image. Translation moves the rendered volume 19A scanline is a horizontal line of pixels on a CRT within image space. Rotation allows the A scanline M also called a scan.
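As an illustration of the geometric transformations just described, the sketch below rotates object space voxel coordinates about the z-axis; the matrix form and the example angle are illustrative choices, and, as noted above, successive rotations accumulate their angular displacement.

import numpy as np

def rotate_z(points, degrees):
    """Rotate an N x 3 array of object space (x, y, z) coordinates about
    the z-axis; repeated calls accumulate angular displacement."""
    t = np.radians(degrees)
    r = np.array([[np.cos(t), -np.sin(t), 0.0],
                  [np.sin(t),  np.cos(t), 0.0],
                  [0.0,        0.0,       1.0]])
    return points @ r.T

voxels = np.array([[10.0, 0.0, 5.0]])
print(rotate_z(rotate_z(voxels, 20), 20))   # same as a single 40-degree rotation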

equalization, makes gray-scale value assignments across the entire image and so is not sensitive to regional variations in the displayed image. Let h_o(x) be the histogram function for the value x in the original image. If there are P pixels in the image and B brightness values, then an ideal modified image would have P/B pixels per brightness value associated with it. Histogram equalization can not achieve an ideal modified image in practice because pixels with the same brightness value in the original image histogram are not split between brightness values in the modified image. The desired modified image brightness value, y, for a given original image brightness value, x, is computed by

y = ((B − 1)/P) ∫₀ˣ h_o(x) dx.

The integral is found by computing the cumulative histogram for the image. Since the y value in the modified image is constant for a given x in the original image, a look-up table is used to assign pixel values in the original image to pixel values in the modified image. Both Richards [1986] and Gonzalez and Wintz [1987] provide further descriptions of histogram equalization, including its mathematical basis, as well as general information concerning the use of histograms for image enhancement processing.

Interpolation [Artzy et al. 1979] converts the sparsely spaced 2D slices formed by a CT, MRI, SPECT, PET, or ultrasound study into a continuous 3D scene. Because medical imaging modalities leave unimaged space between adjacent slices of patient data, interpolation is used to fill in the space between the slices. Interstice density values are estimated by computing a linear average of the density values found in pairs of opposing voxels in the original slices. Two approaches have found widespread use: nearest-neighbor (zeroeth-order) interpolation and trilinear (first-order) interpolation. Both techniques superimpose a grid of sample points upon the volume to be rendered. Nearest-neighbor interpolation estimates the density value of the interstice voxels using the value of the closest in-slice voxel. Trilinear interpolation estimates the density value for an interstice voxel by taking a weighted average of the eight closest in-slice voxels. Trilinear interpolation yields images that are superior to nearest-neighbor interpolation but at a cost of increased computation. Raya and Udupa [1990] describes a new approach to interpolation called shape-based interpolation. This technique is particularly useful when using 2D (surface) primitives. Shape-based interpolation uses the in-slice object boundary to estimate the interstice object boundary. This technique does not yield interpolated values for interstice voxels but instead yields the interpolated boundary location for the object.

Multiplanar reprojection (MPR) [Herman and Liu 1977; Kramer et al. 1990; Mosher and Hook 1990; Rhodes et al. 1980] is a procedure for displaying a single slice of a volume. Multiplanar reprojection can extract single-slice 2D views from a volume of medical imaging modality data at arbitrary, oblique angles to the three main (x, y, z) object space axes. In general, the procedure steps along the oblique plane, computing the possibly interpolated voxel value at each point on the display plane from the nearby object space voxel value(s).

Multiplanar display (MPD) is the simultaneous display of multiple slices of medical imaging modality data. Three orthographic views are commonly provided in an MPD: a sagittal view, an axial view, and a coronal view. Taking the z-axis to be the patient's longest dimension and the y-axis to be oriented along the patient's front-to-back axis, the sagittal, axial, and coronal views are defined as follows: An axial view is a view along the z-axis of 3D medical imaging modality data that portrays image slices parallel to the x-y plane. An axial view is also called a transverse view. A coronal view is a view along the y-axis of 3D medical imaging modality data that

ACM Computing Surveys, Vol. 23, No. 4, December 1991 446 “ Stytz et al. reveals image slices parallel to the X–Z produce visual effects such as shadows, plane. A sagittal view is a view along the reflection, and refraction. zo The addi- x-axis of 3D medical imaging modality tional effects come at the cost of addi- data that reveals image slices parallel to tional computations per ray and at the the y-z plane. cost of spawning additional rays at each Projection is an operation that maps ray/surface intersection. Even though points in an n-dimension coordinate sys- ray tracing is a point-samplingzl tech- tem into another coordinate system of nique, it can perform antialiasing by less than n dimensions. In 3D medical spawning several rays at each pixel and imaging, this operation performs the computing a weighted sum of the ray mapping from points in image space to intensities. The weighted sum of the ray screen coordinates in screen space. There intensities determines the intensity for are two broad classes of projections— the pixel. Appendix B surveys several perspective and parallel. A perspective techniques for increasing the speed and projection is computed when the distance for improving the image quality obtained from the viewer to the projection plane with ray tracing. (commonly the CRT screen) is finite. This Segmentation [Ayache et al. 1989; type of projection varies the size of an Back et al. 1989; Chuang and Udupa object inversely with its distance in im- 1989; Fan et al. 1987; Herman et al. age space from the projection plane. An 1987; Shile et al. 1989; Trivedi et al. object at a greater distance from the 1986; Udupa 1982; Udupa et al. 1982] is viewer than an identical closer object ap- any technique that extracts an object or pears smaller. A , on surface of interest from a scene, For ex- the other hand, places the viewer at an ample, segmentation can be used to iso- infinite distance from the projection late the bony portions of a volume from plane. The size of an object does not vary the surrounding tissue. Both the contour with its depth in image space. and surface approaches to scene repre- There are two broad classes of parallel sentation use segmentation to isolate projections—orthographic and oblique. In specific features within a volume from an ortho~aphic parallel projection, the the remainder of the volume. Some type direction of projection (the direction from of image segmentation was used to ex- the projection plane to the viewer) is tract the skull in figure 7 and the spine parallel to the normal to the projection and lungs in Figure 9. plane. In an oblique parallel projec- Boundary detection is one approach tion, the direction of projection is not to segmentation. Another technique for parallel to the normal to the projection image segmentation is thresholding. im- plane. For further information see Burger age segmentation by thresholding locates and Gillies [19891, Foley et al. [19901, the voxels in a scene that have the same Rogers [19851, and Watt [19891. property, that is, meet the threshold Ray tracing [Burger and Gillies 1989; requirement. Extraction of an object in a Foley et al. 1990; Glassner 1989; Rogers medical volume using this technique is 1985; Watt 1989] is an image-rendering difficult because multiple materials can technique that casts rays of infinitesimal be present within the volume of each width from a given viewpoint through voxel. The presence of multiple materials a pixel atid on into image space. 
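A minimal sketch of segmentation by thresholding as introduced above (and elaborated with the window criterion below): voxels whose values fall inside a window are classified as object, all others as background. The window limits and volume size are illustrative, and, as the text notes, such a binary decision is complicated in practice by the presence of multiple materials within a single voxel.

import numpy as np

def window_segment(volume, low, high):
    """Classify voxels whose values fall inside the window [low, high] as
    object and everything else as background (a binary decision)."""
    return (volume >= low) & (volume <= high)

volume = np.random.randint(-1000, 3000, size=(64, 64, 64))   # e.g., CT numbers
bone = window_segment(volume, 300, 3000)                      # boolean mask
print(bone.sum(), "voxels classified as object")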
The path of the ray and the location of the objects in the volume determine the ‘“Reflection is caused by light bouncing off a sur- object(s) in the volume encountered by face of an object. Refraction is caused by light the ray. The color of the object(s), their bending as It passes through a transparent object, 21point ~amPling is a rendering technique that de- depth, and other shading factors deter- terrnines the shade of a pixel based upon one probe mine pixel intensity. In addition to per- of image space. The value returned by the probe forming hidden-surface removal and determines the shade of the pixel. This method of shading, ray tracing can be extended to sampling typically results in aliasing.

ACM Computing Surveys, Vol. 23, No. 4, December 1991 Three-Dimensional Medical Imaging g 447 causes misclassification of voxels. The teraction of these properties. Appendix F important step in the thresholding pro- describes the following shading algo- cess is the classification of the elements rithms: distance shading, normal-based of the scene according to criteria that contextual shading, gradient shading, separate the object(s) of interest from gray-scale gradient shading, Gouraud other elements of the scene. A criterion shading, and Phong shading. commonly used in 3D medical imaging is Tissue / image registration [Byrne et al, the voxel value threshold or window, A 1990; Gamboa-Aldeco 1986; Hermand voxel whose value lies within the win- and Abbott 1989; Hu et al. 1989; dow or above the threshold is assumed to Pelizzariet al. 1989; Schiers et al. 1989; be part of the object; otherwise it is clas- Toennies et al. 1989; Toennies et al. sified as background. Because objects and 1990] is a procedure that allows volume surfaces in a medical image are not and/or surface renderings taken of a pa- distinct, even the best segmentation op- tient at different times to be superim- erators provide only an approximation, posed so that the patient axes in each albeit an accurate one, of the desired image coincide. For example, if a patient object or surface. Appendix C describes had both a CT and a PET scan, it may be two techniques for segmentation by useful to overlay the CT anatomical in- thresh olding along with short formation with the PET physiological synopses/classifications of other tech- information, thereby combining different niques for segmentation. Udupa [19881 information into a single rendered im- discusses several boundary detection and age. Registration can be accomplished by segmentation by thresholding schemes either marking points on the patient that and their use in surgical planning. are used later to align the images or by Shading [Burger and Gillies 1989; manual, post-hoc alignment of the im- Foley et al. 1990; Rogers 1985; Watt ages using anatomical landmarks. 1989] enhances the 3D appearance of an Preparation of the patient for registra- image by providing an illusion of depth. tion allows the acquired data to be Chen et al. [1984a] provides an overview aligned with great accuracy, but it is of several shading techniques used in the impractical to perform the preparation medical imaging environment. In combi- for every patient. Manual alignment is nation with gray scale or color and rota- not as precise as prepared alignment tion, shading enhances the 3D visual because patient axes alignment is merely effect of images rendered in 2D. Figures approximated. Manual alignment, how- 7, 8, and 9 illustrate how shading can ever, gives the physician a capability for enhance the perception of depth in an ad hoc registration. Developing proce- image. Shading achieves the depth illu- dures for accurately performing manual, sion by varying the color of surfaces ad hoc alignment is an active research according to their image space depth, area. material content, and orientation with Transparency effects [Foley et al. IggO; regard to light sources and the viewer. Watt 1989] allow a user to view an ob- To shade a scene correctly, a series of scured object within the context of computations based on one or more prop- an overlying transparent (or semi- erties of each visible object must be per- transparent) object, thereby providing a formed. 
These properties are the refrac- context for the obscured object in terms tive properties, the reflective properties, of the overlying transparent structure(s). the distance from the observer, the angle For example, one may want to view the of incidence between rays from the illu- skull through overlying transparent skin, minating light source(s) and the surface as shown in Figure 8. In that figure, the of the object, the surface normal of the combination of semitransparent treat- object, the type and color of the illumi- ment beams and the image of the pa- nating light source(s), and the color of tient’s head provide an anatomical the object. Appendix E describes the in- context for the location of the beams.
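As one concrete instance of the surface-normal-dependent computations listed above, the sketch below estimates a surface normal from the gray-scale gradient at a voxel (central differences along each axis) and applies a simple diffuse shading term. The light direction and the choice of gradient-based normals here are illustrative assumptions, not the specific algorithms of Appendix F.

import numpy as np

def gradient_normal(volume, x, y, z):
    """Estimate the surface normal at voxel (x, y, z) from the gray-scale
    gradient, using central differences along each axis."""
    g = np.array([volume[x + 1, y, z] - volume[x - 1, y, z],
                  volume[x, y + 1, z] - volume[x, y - 1, z],
                  volume[x, y, z + 1] - volume[x, y, z - 1]], dtype=float)
    norm = np.linalg.norm(g)
    return g / norm if norm > 0.0 else g

def diffuse_shade(normal, light_dir=(0.0, 0.0, -1.0)):
    """Diffuse shade: brightness proportional to the cosine of the angle
    between the estimated normal and the light direction."""
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    return max(0.0, float(np.dot(normal, light)))

vol = np.random.rand(16, 16, 16)
print(diffuse_shade(gradient_normal(vol, 8, 8, 8)))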


This process is useful because it permits visualization of portions of the anatomy that are inside the patient within the setting of anatomical landmarks on the outside of the patient. To achieve a transparency effect, the rendering system uses the formula I = t·I1 + (1 − t)·I2, where 0 ≤ t ≤ 1, to overlay volume data. I1 is the intensity attributed to a point on the transparent surface; I2 is the intensity calculated for the point lying on an opaque surface behind I1, and t is the transparency factor for I1. Volumetric compositing uses the transparency concept when forming a visualization of a medical imaging volume.

True 3D display is a total volume display method that uses human vision system physiological depth cues like movement parallax and binocular parallax (also called stereo vision) to cause the perception of depth in a display. True 3D display techniques can be used in combination with or instead of the depth cues provided by traditional computer graphics techniques (such as shading, shadows, hidden-surface removal, and perspective). Binocular parallax is a physiological depth cue based upon the disparity in the image seen by each eye. Binocular parallax is the strongest depth cue for the human visual processing system. Movement parallax is a strong physiological depth cue elicited by head movement and the corresponding perceived change in the image. Some forms of true 3D display (such as the varifocal mirror and holographic image techniques) further enhance the depth illusion by allowing the user to see the superposition of structures in the volume and to achieve segmentation of the structures by moving his or her head. Varifocal mirrors, holograms, several types of CRT shutters, and several types of stereo glasses have been used to generate true 3D displays. Techniques for displaying true 3D images are described in [Brokenshire 1988; Fuchs 1982a, 1982b; Harris 1988; Harris et al. 1986; Hodges and McAllister 1985; Jaman et al. 1985; Johnson and Anderson 1982; Lane 1982; Mills et al. 1984; Pizer et al. 1983; Robb 1987; Reese 1984; Stover 1982; Stover and Fletcher 1983; Suetens et al. 1987; Williams et al. 1989; Wixson 1989]. Lipscomb [1989] presents a comparative study of human interaction with particular 3D display devices.

Volume measurement [Bentley and Karwoski 1988; Udupa 1985; Walser and Ackerman 1977] is an operation that estimates the volume enclosed by an object within image space. The volume measurement operation, first described in Walser and Ackerman [1977], can be accomplished by taking the product of the number of voxels that form the object with the volume of a single voxel. Typically, this operation calls for some form of segmentation to extract the object from the surrounding material in the scene. Because segmentation is not a precise operation, there is a small error in the computed volume figure. The error is currently not clinically significant. As pointed out in Walser and Ackerman [1977], it is the change in the volume of an object that provides clinically relevant information. Even with its associated error, volume measurement is valuable because it provides the physician with the means to accurately detect the change noninvasively.

The volume of interest (VOI) operation selects a cube-shaped region out of object space for the purpose of isolating a structure of interest. This operation is used to isolate a portion of object space for further display and analysis. The primary benefit of the VOI operation lies in its reduction of the amount of storage space and computation required to render an image.

Volumetric compositing (also called compositing) [Drebin et al. 1988; Duff 1985; Foley et al. 1990; Fuchs et al. 1988c, 1989a, 1989b; Harris et al. 1978; Heyers et al. 1989; Levoy 1989a, 1990a, 1990b; Levoy et al. 1990; Mosher and Hook 1990; Pizer 1989a, 1989b; Porter and Duff 1984; Watt 1989] is a methodology for combining several images to create a new image. Compositing uses overlay techniques (to effect hidden-surface removal) and/or blending (for voxel value mixing) along with opacity values (to specify surfaces) to combine the images. Figure 8 illustrates this technique with its composition of the treatment beams and the image of the patient's head.

A composite is assembled using a binary operator to combine two subimages into an output image. The input subimages can be either two voxels or a voxel and an output image from a prior composition. When performing an overlay, a comparison of z' distances along the line of sight determines the voxel nearest the viewer; that voxel's value is output from the composition. Image blending requires tissue type classification of each sample point²² in the 3D voxel grid by its value. The tissue type classification determines the color and opacity associated with the voxel. The voxel opacity determines the contribution of the sample point to the final image along the viewer's line of sight. The α-channel²³ stores the opacity value for the voxel. An α-channel value of 1 signifies a completely opaque voxel, and a value of 0 signifies a completely transparent voxel. If voxel composition proceeds along the line of sight in front-to-back order, the value at the nth stage of the composition is determined as follows. Let c_in be the composition color value from the n − 1 stage, c_out be the output color value, c_n be the color value for the current sample point, and α_n be the α-channel value for the current sample point. Then

c_out·α_out = c_in·α_in + c_n·α_n(1 − α_in),
α_out = α_in + α_n(1 − α_in).

The displayed color along the line of sight, C, is C = c_out / α_out.

²²Sample point values are commonly computed using a trilinear interpolation of the eight neighboring voxel values.
²³The α-channel (alpha channel) is a data structure used to hold the opacity information needed to compose the voxel data lying along the viewer's line of sight.
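A direct transcription of the front-to-back compositing recurrence given above. The per-sample colors and opacities are assumed to come from a prior classification/shading step, the sample list is ordered from the viewer outward, and the early-exit test once the accumulated opacity saturates is an optimization added here rather than part of the recurrence itself.

def composite_front_to_back(samples):
    """samples: iterable of (color, alpha) pairs along one line of sight,
    nearest sample first. Implements c_out*a_out = c_in*a_in + c_n*a_n*(1 - a_in)
    and a_out = a_in + a_n*(1 - a_in); returns the displayed color C."""
    c_acc, a_acc = 0.0, 0.0                 # accumulated premultiplied color and opacity
    for c_n, a_n in samples:
        c_acc = c_acc + c_n * a_n * (1.0 - a_acc)
        a_acc = a_acc + a_n * (1.0 - a_acc)
        if a_acc >= 0.999:                  # early termination: ray is effectively opaque
            break
    return c_acc / a_acc if a_acc > 0.0 else 0.0

print(composite_front_to_back([(1.0, 0.3), (0.5, 0.8), (0.2, 1.0)]))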


Three-dimensional medical imaging systems use one or more of the operations mentioned above to depict a data volume accurately. Meeting the 3D medical imaging quality requirements while performing the medical imaging operations rapidly are difficult challenges. In fact, many of the requirements and operations work at cross purposes, and achieving one usually requires sacrificing performance in one or more other areas. For example, one method that can be used to achieve rapid medical image volume visualization is the use of tiling techniques to depict the surface of the object to be rendered. The use of tiling, however, requires a lengthy surface extraction processing step that the volume approach avoids. To date, no single machine has met all objectives and implemented all the operations, and it is an open question whether one machine can meet all the objectives.

4. THREE-DIMENSIONAL MEDICAL IMAGING MACHINES

This section presents an examination of the techniques used in previous 3D medical imaging research to solve the problem of presenting high-quality 2D reconstructions of imaged volumes at high speed. It describes several image processing architectures: Farrell's Colored-Range Method, Fuchs/Poulton Pixel-Planes 4 and 5 machines, Kaufman's Cube architecture, the Medical Imaging Processing Group (MIPG) machine, the Pixar/Vicom Image Computer and Pixar/Vicom II,²⁴ Reynolds and Goldwasser's Voxel Processor machine, and the Mayo Clinic True 3D machine and ANALYZE.

These eight 3D medical imaging machines encompass a wide range of parallelism in their architectures, a wide range of image-rendering rates, and a wide number of approaches to object space display. Three distinct approaches to 3D medical imaging are, however, evident: software-encoded algorithms on

²⁴The Pixar Image Computer and Pixar II were sold to Vicom in the spring of 1990. Vicom is a trademark of Vicom Inc. Pixar is a trademark of Pixar Inc.

ACM Computing Surveys, Vol. 23, No 4, December 1991 450 “ Stytz et al. general-purpose computer systems characterize additional commercial and (MIPG, Farrell, ANALYZE), graphics ac- research-oriented 3D medical imaging celerators adapted for 3D medical imag- machines using these same parameters. ing (Pixar/Vicom), and special-purpose Because the objective of this review is to computer systems with hardware encod- survey these machines with particular ing of algorithms (Pixel Planes, Voxel emphasis placed on their ground- Processor, Cube). The MIPG machines breaking achievements, we discuss many demonstrate the relevance and feasibil- but not all of the capabilities provided by ity of constructing 3D medical images each machine. Because of space limita- using contour and surface descriptions of tions, we do not discuss the shading, an- objects in CT data. The Cube and Voxel tialiasing, and hidden-surface removal Processor machines establish the useful- technique parameters in depth and refer ness of innovative memory access meth- the reader to the appendixes and Stytz ods for rapid display of 3D medical and Frieder [19911. For a discussion of images. Farrell’s approach shows the the complete type of capabilities for each usefulness of false color images in a 311 machine, we refer the reader to the bibli- medical imaging environment. The ographic references for each machine. Fuchs/Poulton machines confirm that the We selected the first nine parameters pixel-coloring bottleneck can be over- for examination because they provide in- come by expending hardware resources sight into the options available when at the pixel level, and that this invest- forming 3D medical images as well as ment yields significant throughput re- the effect of various design choices. The turns. The Fuchs/Poulton machines and machine architecture describes the inter- the Voxel Processor machine demon- relationships of the hardware and strate the validity of a rendering pipeline software components of the 3D medical approach to 3D medical imaging, albeit imaging machine. Our description of the using different pipelines. Finally, the architecture concentrates on the major Mayo Clinic work establishes the useful- components of the machine and the ness of performing 3D medical image components’ operation. The choice of ma- rendering with cooperating processes in chine architecture sets the performance their ANALYZE system. These research limits in terms of rendering throughput machines laid the foundation for the cur- and image quality for the 3D medical rent generation of commercial machines imaging machine. The machine architec- reviewed here and in Stytz and Frieder ture also affects the type of data models [19911. The commercial Pixar/Vicom that can be used to meet the image machines established that the images rendering rate design goal. produced using compositing techniques For example, suppose the machine ar- are applicable to 3D medical imaging. chitecture consists of a number of com- In the following sections, we character- municating processes running on a ize each of the machines by the overall single, low-power CPU with CRT display machine architecture, the processing support for an 8-bit image. The design strategy, the data model, the shading goal stresses the need for relatively rapid algorithm(s), the antialiasing tech- image rendering and display rates. 
These nique(s), the hidden-surface removal design constraints indicate selection of a algorithm, the performance of the contour or surface data model, since these machine, the supported image resolution, models would help achieve a rapid dis- the level at which the machine exploits play rate because they reduce rendering computational parallelism, and the oper- time computational cost. In addition, the ational status of the machine (whether it shading and antialiasing 3D medical exists as a proposal, a prototype, an as- imaging operations can be simple be- sembled machine undergoing testing, a cause 8 bits per pixel does not support fully functional machine, or a marketed the display of high-quality medical im- machine). Stytz and Frieder [19911 age volume visualizations. On the other

ACM Computmg Surveys, Vol. 23, No 4, December 1991 Three-Dimensional Medical Imaging “ 451 hand, if image-rendering rate is not a tional expense than the one using depth prime consideration, then a more compu- shading. tationally expensive model, such as the The machine performance parameter voxel model, can be used. specifies the rendered image produc- The processing strategy parameter tion rate for the 3D medical imaging sketches the approach adopted to use the machine. The resolution parameter, ex- machine architecture to perform 311 med- pressed in pixels, reflects the image reso- ical image-rendering tasks. The strategy lution attainable by the machine. Image addresses the constraints imposed by the resolution is a measure of the fineness of machine architecture, data model, and detail displayed on the CRT. This mea- screen resolution in addition to the de- sure has two components— CRT resolu- sired rendered image quality and render- tion and modality resolution. Modality ing speed. To continue with the example resolution is the resolution of the medi- presented above, recall that minimal cal imaging modality used to acquire the elapsed display and rendering times are image, typically expressed as the dimen- important goals. One possible rendering sion of a voxel along the x-, y-, and z-axis. time minimization strategy would be the The resolution of the modality used to use of a surface model of the object of acquire the image imposes the absolute interest with depth shading and no an- limit on the level of detail that can be tialiasing to render the image of the ob- displayed. CRT resolution is the number ject. A rapid display rate can be achieved of pixels on the CRT, usually expressed by rendering a sequence of images in as the number of pixels along the u-axis batch mode and displaying them after all and U-axis. For example, a CRT resolu- rendering computations end. Note that tion of 256 x 256 means that the screen this processing strategy precludes a ca- is 256 pixels wide and 256 pixels high. pability for rapid interaction with the If the pixels on the CRT have approxi- rendered image. mately the same dimension as the face of The data model parameter indicates a voxel, then increases in CRT resolu- both the amount of preprocessing re- tion, up to the limit imposed by modality quired to render the image and the com- resolution, result in the display of finer putational cost of rendering the image. detail in the rendered image. Because For example, a contour-based description the user cannot alter modality resolution of an object requires a preprocessing step at rendering time, in this survey we use to extract the contours but permits rapid the term resolution when discussing CRT display of the object from various orien- resolution. To achieve acceptable resolu- tations. The drawback of this model is tion, the 3D medical imaging machine’s that any change to the object requires resolution should be the same as the reaccomplishing the preprocessing step. resolution provided by medical imaging On the other hand, a voxel-based descrip- modalities. At the present time, the tion of a volume requires no preprocess- typical 3D medical imaging machine ing before rendering, but the rendering resolution is 256 x 256 pixels. Higher procedure is computationally expensive. resolution allows for magnification and The shading, antialiasing, and hidden- complete display of the rendered image. 
surface removal algorithm parameters The computational parallelism param- reflect both the image resolution attain- eter indicates the capability of the se- able by the machine and the computa- lected machine architecture to perform tional cost incurred in rendering the rapid image rendering. Typically, there image. For example, assume that one are two computational bottlenecks en- machine uses Phong shading and an- countered when rendering an image: other machine uses depth shading. We hidden-surface removal and pixel color- can then infer that the machine using ing. Parallelism can be used to attack Phong shading usually produces higher both of these bottlenecks by permitting quality images at greater computa- rapid extraction of visible surfaces using


Figure 10. Colored-range system architecture (mainframe host with image database, workstation, and final display).

parallel operation on discrete portions of was to assign the computationally inten- the volume and by parallel computation sive operations, such as 3D rotation and of pixel values. A typical tradeoff en- data smoothing, 25 to the host and assign countered in the use of parallelism is the the remaining operations, such as oblique choice of giving speed or image quality projection, transparency, and color, to the primacy in the performance goals for the workstation. machine. In the extreme, for a given level The processing strategy used in the of parallel operation, a parallel process- machine seeks to reduce the computa- ing capability can be used to increase tional load by eliminating or simplifying image-rendering speed while maintain- the operations required to form a 3D im- ing image quality, or image quality can age. The strategy is implemented using be improved while maintaining the the colored-range method for back-to- image-rendering rate. Table 3 summa- front 3D medical image rendering. The rizes the key parameters for each of the colored-range methodology does not re - machines. quire preprocessing of the data and can rapidly display object space from an arbi- trary point of view. It emphasizes the 3D 4.1 Farrell’s Colored-Range Method relationships of the constituent parts of the image by image rotation and the use The 3D medical imaging machine pro- of color. The machine performs angular posed by Farrell and Zappulla [1989] and Farrell et al. 1984, 1985, 1986a, 1986b, rotation about an axis by diagonally the image across the screen in 19871 takes an approach to 3D medical the back-to-front order defined by the de- image rendering that is different from sired scene orientation. The system com- the other machines in this survey in two putes the required diagonal offset for regards, First, the colored-range method successive 2D frames using x, y tables of does not require preprocessing of the screen coordinates for small rotational data. Second, it relies upon the use of values of up to +400 from a head-on color and image rotation to highlight the viewpoint. The machine performs larger 3D relationships between organs in the angular rotations by reformatting the volume. The machine has a high image- data so that either the +x- or + y-axis processing speed but is expensive. The becomes the + z-axis. Segmentation of the volume-rendering image-processing sys- components of the image is accomplished tem consists of a host and a workstation by specifying a discrete voxel value range (Figure 10). The host is an IBM to identify each organ in the image and 370/4341. The workstation is an IBM then by assigning a different color to 7350. A significant problem faced by the de- signers of the machine was deciding how to divide the workload between the work- 25Data smoothing is the filtering of the rendered station and the host. The solution adopted volume to reduce aliasing effects

ACM Computmg Surveys, Vol. 23, No. 4, December 1991 454 “ Stytz et al. each range. The rendering indicates the parallelism to attack the pixel-coloring relative depth of the parts of a given bottleneck. The latest machines in the object by varying the color intensity series are Pixel-Planes 4 (Pxp14) and according to the depth of the part, with Pixel-Planes 5 (Pxp15). We focus the dis- the more distant parts being darker. cussion on Pixel-Planes 5. The colored-range method has several The goal of the Pixel-Planes 5 architec- advantages in addition to supporting ture is to increase the pixel display rate rapid image rendering. It allows differ- by performing select pixel-level graphics ent structures to be displayed simultane- operations in parallel within a general- ously through the use of different colors, purpose graphics system. To achieve this and it allows the relative size and posi- goal, the system architecture relies upon tion of structures to be visualized by two components— a general-purpose mul- overlaying successive slices. The voxel tiprocessor “front end” and a special- data need not be continuous in 3D space, purpose “smart” frame buffer. The thereby eliminating the need for data front end specifies on-screen objects in interpolation to form a continuous 3D pixel-independent terms, and the volume. Additionally, the colored-range “smart” frame buffer converts the pixel- method displays accurate 3D images with independent description into a rendered just the use of simple logical and arith- image. metic functions in the processor. The Pixel-Planes 4 machine [Fuchs There is little parallel operation in et al. 1985, 1986; Goldfeather 1986; Farrell’s machine. On the other hand, Poulton et al. 1987] laid the foundation the machine generates images at very for the current Pixel-Planes 5 machine in high speeds. Farrell’s image overlay its pioneering use of a smart frame buffer technique permits 3D images to be pro- for image generation. The design goal for cessed and presented in a matter of 10-15 this machine was to achieve a rapid sec. Farrell’s machine attains this ren- image-rendering ability by providing an dering speed by simplifying the computa- inexpensive computational capability for tions involved in representing and each pixel in the frame buffer, hence the rendering the volume. Volume represen- name smart frame buffer. Within the tation computations are minimized by smart frame buffer are the logic- eliminating the preprocessing operations enhanced memory chips that provide the that derive ID and 2D object representa- pixel-level image-rendering capability. tions and that interpolate voxel values to The importance of the smart frame buffer form a continuous 3D volume. The ren- lies in its capability for widening the dering computation workload is reduced pixel-writing/coloring bottleneck by by simplification of shading and hidden- processing each object in the scene simul- surface removal operations. taneously at all pixels on the screen. Pixel-Planes 4 accomplishes object dis- play in three steps. The first step is image 4.2 Fuchs / Poulton Pixel-Planes Machines primitive scan conversion. Scan conver- At the opposite end of the parallelism sion processing determines the pixels that scale from Farrell’s Colored-Range lie on or inside a specific convex polygon Method is the Fuchs/Poulton Pixel- described using a set of linear expres- sions. ZG The second step accomplishes Planes machines and associated algo- rithms, described in Ellsworth et al. [1990], Fuchs et al. 
[1977, 1979, 1980, 1983, 1985, 1986, 1988a, 1988b, 1988c, 1989a, 1989c], Goldfeather [1986], Levoy [1989b], Pizer and Fuchs [1987], and Poulton et al. [1987]. This series of machines achieves rapid image-rendering performance by using massive

²⁶Each edge of a polygon is defined by two vertices, V1 = (x1, y1) and V2 = (x2, y2), which are ordered so the polygon lies to the left of the directed edge V1V2. The equation of the edge is given by Ax + By + C = 0, where A = y2 − y1, B = x2 − x1, and C = x1y2 − x2y1.

ACM Computing Surveys. Vol 23, No 4, December 199] Three-Dimensional Medical Imaging o 455 visibility determination for the current preach, provide a capability for render- primitive relative to primitives computed ing several primitives simultaneously. previously, Pixel-Planes 4 determines the Pxp15 evaluates quadratic, instead of visibility of each polygon at the pixel linear, expressions. Figure 11 shows the level by a comparison of z’ values using architecture for the Pixel-Planes 5 data stored in the local pixel memory. machine. The third step consists of the operations, Pixel-Planes 5 processes primitives one such as antialiasing, contrast enhance- screen patch (128 x 128 pixels) at a time ment, transparency, texturing, and shad- in each Renderer unit. It uses multiple ing, required to render the image. Renderers to provide a multiple primi- The logic-enhanced memory chips per- tive processing capability and multiple form the pixel-oriented tasks, such as Graphics Processors (GP) to sort primi- pixel coloring and antialiasing, simulta- tives into different screen patches (bins). neously at the individual pixel level. The system dynamically assigns screen Since pixel-level operations must be patches to Renderers during processing, performed on linear expressions,27 the with each GP sending the primitives in graphics processor must construct a lin- its patches to the appropriate Renderer ear expression model for each object be- in turn. The host workstation is respon- fore sending the image description to the sible for user interaction support and im- smart frame buffer. To develop linear age database editing. The 32 GP and equations for each object in the display MIMD units are used for two purposes. list, the graphics processor performs Each GP manages its assigned Pixel. display list traversal,28 viewing transfor- PHIGS (a variant of PHIGS +30 designed mation, lighting, and clipping opera- for Pixel-Planes) data structures and tions. These four operations yield a sorts its set of primitive objects into bins set of colored vertex descriptions2g for that correspond to parts of the screen. each object. The graphics processor uses Each of the 8-10 Renderers in the sys- the colored vertex descriptions to derive tem, diagramed in Figure 12, are inde- the linear equation coefficients sent to pendent SIMD processing units with the frame buffer. local memory descended from the PxP14 Pixel-Planes 5 builds upon the pixel- chips. Each Renderer chip has 256 pixel- level processing foundation laid by Pxp14 processing elements and 208 bits of but extends it by using Multiple Instruc- memory per pixel. Each Renderer oper- tion, Multiple-Data (MIMD) processors ates on an assigned 128 x 128 pixel patch for display list traversal and a token ring of the screen. In concert, the Renderers network for interprocessor communica- can operate on several discrete patches of tion. These changes, along with decou- the screen simultaneously. The memory pling the pixel-processing elements of the on the board serves as a backing store for smart frame buffer from the frame buffer holding pixel color values for the current memory and using a virtual pixel ap - patch of screen being processed by the Renderer. The Renderer memory holds color, z depth, and other associated pixel 27The linear expressions are of the form Ax + B-Y information. The quadratic expression + C, where x and y are the Image space coordi- evaluator on the chip evaluates the nates of the pixel. 
Linear expressions can be used quadratic expression Ax+ By+C+ to define polygon, sphere, line, and point primitive objects. 28A display list is a list of the objects within the object space, which is also the list of the objects to be displayed Display list traversal is the process of 30PHIGS + (Programmers’ Hierarchical Interac- examining each of the entries in the display list in tive Graphs System) 1s an extension to the ANSI the order specified by the list. graphics committee’s current PHIGS 3D graphics ‘gVertex descriptions are sent when processing standard that supports lighting, shading, complex polygons; individual pixel values are sent when primitives, and primitive attributes PHIGS tutori- processing medical images als are m Cohn [19S61 and Morrissey [19901
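To make the per-pixel linear expressions concrete, the sketch below evaluates the edge expressions of a convex polygon at a pixel, in the spirit of the scan-conversion step described above. The cross-product form and the counterclockwise "nonnegative means inside" sign convention are an illustrative choice for this sketch, not the exact Pixel-Planes formulation.

def edge_function(v1, v2, px, py):
    """Signed-area test: positive when (px, py) lies to the left of the
    directed edge v1 -> v2 (inside for counterclockwise polygons)."""
    (x1, y1), (x2, y2) = v1, v2
    return (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)

def inside_convex_polygon(px, py, vertices):
    """A pixel lies inside a convex, counterclockwise polygon when every
    edge expression evaluates to a nonnegative value; this is the kind of
    per-pixel test the linear expressions A*x + B*y + C implement in hardware."""
    n = len(vertices)
    return all(edge_function(vertices[i], vertices[(i + 1) % n], px, py) >= 0.0
               for i in range(n))

triangle = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # counterclockwise order
print(inside_convex_polygon(2.0, 2.0, triangle))    # True:  inside
print(inside_convex_polygon(9.0, 9.0, triangle))    # False: outside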


Figure 11. Pixel-Planes 5 architecture (based upon Fuchs 1989b); the diagram shows the Renderers, the frame buffer, and the CRT display.

DX2 + Exy + Fyz at each pixel using the Pixel-Plane 5 renders a screen patch A, B, C, D, E, and F coefficients broad- by authorizing each GP to broadcast, in cast from the GP. The 1280 x 1024 pixel turn, the screen patch bin containing the frame buffer is double buffered to sup- primitives and instructions for process- port the 24 frames/second screen update ing them to the Renderer with the corre- rate design goal for Pxp15. A multichan- sponding screen patch assignment. After nel token ring network interconnects the a GP completes its broadcast to a Ren- individual units of the system. The net- derer, it notifies the next GP that it may work can transmit up to eight simultane- begin its processing for the patch. The ous messages at 20M words/second. 12 GP then waits for its next turn to broad- Image rendering commences after an cast. The final GP to broadcast its bin to application on the host workstation the Renderer informs it that the current makes changes to the image database screen patch for the Renderer is com- and sends the results of these changes to plete. The Renderer then moves the pixel the GPs via the network. One GP, desig- color values for that patch to the backing nated the Master GP, has the responsi- store and receives a new patch assign- bility for assigning Renderers to portions ment from the Master GP. After all the of the screen and informing the GPs of Renderers finish rendering all their as- the Renderer assignments. Using the signed screen patches, each Renderer changes sent from the host, each GP transfers the stored results of the compu- transforms the primitives assigned to it tations for its assigned patches to the and then sorts them into the correct frame buffer. screen bins according to the virtual Pixel-Planes 5 can perform the same screen patches the primitives intersect, operations as Pxp14, but its use of


Figure 12. Pixel-Planes 5 Renderer (based upon Fuchs 1989a); the Renderer board contains the custom pixel-processing chips, an image generation controller, and a backing store.

quadratic expression evaluators and its Pixel-Planes 5 is a hybrid architec- higher rendering speed provides addi- ture, having both Single-Instruction, tional capabilities. Pixel-Plane 5 per- Multiple-Data (SIMD) and MIMD compo- forms 3D medical imaging by storing nents, with a predicted capability for object space voxels within the backing rendering 256 x 256 x 256 voxel 3D store at each Renderer. Medical image medical images at the rate of 1-10 rendering begins with classification and frames/second. As of this writing, Pixel- shading of the 3D array of voxels at the Planes 5 is undergoing initial testing and Renderers using the Gouraud shading is not yet fully functional. model. Each Renderer retains the result- ing color and opacity values in its back- 4.3 Kaufman’s Cube Architecture ing store. To compute a rendition, the GPs trace parallel viewing rays into the Kaufman designed the Cube machine to 3D array from the viewer’s position. As permit real-time computation of volume each ray progresses, the Renderers trans- and surface renderings of medical images mit requested voxel values to the GPs for and geometric objects from 3D primi- trilinear interpolation and composition tives. He described the architecture in with the current pixel value in the re- Bakalash and Kaufman [19891, Cohen questing GP. After completing its por- et al. [19901, Kaufman [1987, 1988a, tion of the rendition, each GP transmits 1988b, 1988c, 1989], Kaufman and its computed pixel values directly to the Bakalash [1985, 1988, 1989a, 1989b, frame buffer for display. Pixel-Plane 5 1990], and Kaufman and Shimony [19861. can render between 1 and 10 frames/sec- The system development calls for ond, depending upon desired medical the production of a system with a display image quality. resolution of 512 x 512 pixels and a 512


Figure 13. Kaufman's Cube machine architecture, centered on the Cubic Frame Buffer (adapted from Kaufman [1988b]). Legend: FBP3, 3D frame buffer processor; GP3, 3D geometry processor; VP3, 3D viewing processor; FB, frame buffer; VP, video processor.

x 512 x 512 voxel capacity. The cur- volume slicing in real time, thereby al- rently operational hardware prototype lowing for rapid inspection of interior supports rendering of a 16 x 16 x 16 characteristics of the volume. The Cube voxel volume. The software for emulat - performs shifting, scaling, and rotation ing the full-scale machine’s functionality operations by calculating the coordinates is fully operational. Figure 13 presents a for a voxel based upon the calculated diagram of the system. coordinate position of the closest neigh- The Cube machine provides a suite of bor voxel previously transformed. 3D medical image rendering capabilities The image processing system consists that support the generation of shaded of a host and four major components, the orthographic projections of 313 voxel data. 3D Frame Buffer Processor (FBP3), the The architecture makes provision for col- Cubic Frame Buffer (CFB), the 3D onizing the output and for selection of Geometry Processor (GP3), and the 3D object(s) to be displayed translucently. Viewing Processor (VP3), which includes The machine achieves real-time image the voxel multiple write bus. 32 The host display by using parallel processing at manages the user interface. The 3D two levels. The Cube uses coarse grain Frame Buffer Processor loads the CFB parallel processing to provide user inter- with 3D voxel-based data representing action with the system while the ma- icons,33 objects, and medical images and chine computes renditions. To accelerate manipulates the data within the CFB to image rendition, the machine uses fine- perform arbitrary projections of the data grain parallelism to process an entire therein. The 3D Geometry Processor is beam31 of voxels simultaneously instead responsible for scan conversion of 3D of examining the voxels along the beam geometric models into voxel representa- one at a time. The user can alter color selection, material translucency, and

³¹A beam of voxels is a row, column, or diagonal axis of voxels within the scene.
³²Based upon the design reported in Gemballa and Lindner [1982] for a multiple-write bus.
³³Example icons are cursors, synthetic needles, and scalpels.

ACM Computing Surveys, Vol 23, No, 4, December 1991 Three-Dimensional Medical Imaging ● 459 tions for storage within the CFB.34 The The voxel selection algorithm uses the CFB is a large, 3D, cubic memory with location of each voxel along the beam sufficient capacity to hold a 512 x 512 x and its density value to compute the visi- 512 voxel representation of a volume. ble voxel in the beam. To avoid calculat- Each voxel is represented by 8 bits. To ing the location of each voxel on the facilitate rapid retrieval and storage beam, each processor in the voxel multi- of an entire beam of voxels, the CFB ple write bus has an index value. The uses a skewed-memory scheme. The CFB selection algorithm regards the index skewed-memory organization conceptu- value for each processor as the depth ally regards storage as a set of n memory coordinate of the voxel it contains. The modules instead of as a linear array. All lowest index value belongs to the proces- the voxel values in memory module ii sor at the back of the beam. When pro- satisfy the relation k = ( x + y + cessing a beam of voxels, the location of z) mod n, where x, y, and z are the the clipping plane and any voxel value(s) object space coordinates of a voxel. The x considered to be transparent are broad- and y coordinates of a voxel determine cast to all processors on the voxel multi- the storage location within its assigned ple write bus. If a processor contains a module. This memory organization guar- transparent voxel value or if it lies in the antees that for any desired scene orienta- clipped region, the processor disables tion no more than one voxel is fetched itself during the current round of beam from each module. processing. Each of the remaining pro- When forming an image, the 3D View- cessors then places its index value on the ing Processor retrieves beams of voxels Voxel Depth Bus bit by bit, starting with from the CFB and places them in the the most significant bit. In each round of voxel multiple write bus. To meet bit processing, all active processors the requirement for real-time image gen- simultaneously place their next bit value eration, the 3D Viewing Processor simul- on the bus. The bus does an “or” on the taneously processes all the voxels in a values and retains the largest value as beam using the voxel multiple write bus. the current Voxel Depth Bus value. Then The voxel selection algorithm used each active processor examines the bit in the voxel multiple write bus enables it value on the bus, and if it is not equal to to select, from a beam of n voxels, the the processor’s own bit value, the proces- visible voxel closest to the observer in sor disables itself for the remainder of log(n) time. The bus implements trans- the processing of the current beam. One parency and clipping planes by disabling processor eventually remains, and the 3D the processors in the clipped region or Viewing Processor uses that voxel value containing transparent voxel values be- and depth for colonization and shading. fore performing visible voxel determina- The key elements to the ability of tion. The voxel multiple write bus has n the Cube architecture to form an ortho- processors for a scene n voxels deep, or graphic projection rapidly are the 512 processors for the CFB example cited skewed-memory configuration in the CFB previously. To process a 512 x 512 x 512 and the voxel multiple write bus design. 
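A sketch of the skewed-memory assignment described above, showing that the n voxels of any beam map to n distinct memory modules and so can be fetched in a single conflict-free parallel access. The value n = 8 is used purely for illustration; the full-scale CFB uses n = 512.

def module_of(x, y, z, n):
    """Memory module holding voxel (x, y, z): k = (x + y + z) mod n."""
    return (x + y + z) % n

n = 8
# A beam along the x-axis at fixed (y, z): every voxel falls in a different module.
beam_modules = [module_of(x, 3, 5, n) for x in range(n)]
print(beam_modules)                       # a permutation of 0..n-1
print(len(set(beam_modules)) == n)        # True: conflict-free parallel fetch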
voxel volume, the 3D Viewing Processor The CFB memory organization supports makes 5122 sequential accesses to the simultaneous access to a beam of voxels. CFB. After determining the visible voxel, The design of the voxel multiple write the 3D Viewing Processor shades the bus enables rapid determination of the voxel and sends the value to the frame voxel in the beam closest to the observer. buffer. A recent modification to CFB addressing allows for conflict-free retrieval of voxels from the CFB when performing nonorthographic projections. To perform 34For a description of the geometric scan con- version process see Kaufman [1987, 1988al and arbitrary parallel and perspective projec- Kaufman and Shimony [19861. tions, three additional 2D buffers were

ACM Computing Surveys, Vol 23, No 4, December 1991 460 - Stytz et al. added to the machine. This new architec- ing the cooperating processes in the ture performs parallel and perspective ANALYZE 36 system. The True 3D ma- projection by retrieving a plane of projec- chine architecture achieves the second tion rays from the CFB and then by us- objective by displaying up to 500,000 ing the additional buffers to align the voxels/sec as a continuous true 3D im- plane for conflict-free projection-ray re- age. The central concept of the ANA- trieval. Every projection ray within each LYZE software system is that 3D image plane is then analyzed by the voxel mul- analysis is an editing task in that proper tiple write bus to determine the projec- formatting (editing) of the scene data will tion of that ray. Additional information highlight the important 3D relation- on this enhancement appears in ships. In addition to 3D operations, the Kaufman and Bakalash [19901. ANALYZE system supports a wide vari- The Cube machine exploits parallelism ety of interactive 2D scene editing, 2D at the beam level, where it processes full display, and options. The beams simultaneously using the CFB system displays true 3D images using a skewed-memory organization and the varifocal mirror assembly consisting of a simple logic in the voxel multiple write mirror, a loudspeaker, and a CRT. 37 Fig. bus. The estimated35 Cube machine per- ure 14, based upon Robb [1985], presents formance varies depending on the type a diagram of the True 3D display system. of operation it must perform. A 512 x The True 3D machine’s hardware 512 x 512 voxel volume can be rendered and data format were designed to permit to a 512 x 512 pixel image using ortho- rapid image display. The machine ac- graphic projection, shading, translu- complishes true 3D display by storing cency, and hidden-surface removal in the rendered data slices in high-speed 0.062 sec. Use of the modified CFB ac- memory from where they are output at a cess scheme results in a 0.16 sec. esti- real-time rate to a CRT that projects onto mated rendering time for an arbitrary a varifocal mirror. The key to the high- projection of a 512s voxel volume. speed memory to CRT data transfer rate is the use of a custom video pipeline processor to perform intensity transfor- 4.4 The Mayo Clinic True Three-Dimensional Machine and ANALYZE mations on the voxel data before display. The four intensity transformation opera- The Mayo Clinic machine, Harris et al. tions are displaying a voxel value at [1979, 19861, Hefferman and Robb [1985a, maximum CRT intensity, displaying a 1985 b], Robb [1985, 1987], Robb and voxel value at minimum CRT intensity, Barillot [1988, 1989], Robb and Hanson passing the value through unchanged, [1990], and Robb et al. [1986], is the only and modifying the value using table machine described in this survey that look up. A 2-bit field appended to each provides a true 3D display. The system voxel value controls the operation of the performs image-rendering operations in pipeline. As each voxel value enters the a workstation without using parallel pro- pipeline, the pipeline processor uses cessing. The design objectives of the the 2-bit field value to determine the system are to provide an environment intensity transformation operation to suitable for flexible visualization and apply. Once the pipeline finishes proces- analysis of 3D data volumes and to pro- vide a true 3D display capability. 
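A sketch of the 2-bit-controlled intensity transformation described above. The particular bit-field encoding (0 = pass unchanged, 1 = maximum, 2 = minimum, 3 = table look-up), the 8-bit intensity range, and the example look-up table are illustrative assumptions rather than the actual True 3D hardware encoding.

MAX_INTENSITY, MIN_INTENSITY = 255, 0
lookup_table = [min(255, v * 2) for v in range(256)]   # hypothetical LUT

def transform(voxel, control):
    """Apply the intensity transformation selected by the 2-bit control
    field appended to each voxel value."""
    if control == 0:
        return voxel                  # pass the value through unchanged
    elif control == 1:
        return MAX_INTENSITY          # display at maximum CRT intensity
    elif control == 2:
        return MIN_INTENSITY          # display at minimum CRT intensity
    else:
        return lookup_table[voxel]    # modify the value via table look-up

print([transform(100, c) for c in range(4)])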
The system achieves the first objective by us- 3tiANALYZE for UNIX workstations is avadable from CEMAX, Inc., 46750 Freemont Blvd., Suite 207, Freemont, California 94538. UNIX 1s a trade- 35Estimated performance figures are based upon mark of AT&T Bell Laboratories. printed circuit board technology; higher speeds 37The loudspeaker is used to vibrate the mirror are anticipated with the VLSI implementation synchronously with the CRT display at 30 currently under construction, frames/see
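The intensity-transformation pipeline lends itself to a compact software model. The sketch below assumes a particular encoding of the 2-bit control field and an 8-bit CRT intensity range; both are illustrative guesses, since the survey does not specify them.

```python
# Minimal software model of the True 3D intensity-transformation pipeline.
MAX_INTENSITY, MIN_INTENSITY = 255, 0

def transform(voxel_value, control, table):
    if control == 0:                 # display at maximum CRT intensity
        return MAX_INTENSITY
    if control == 1:                 # display at minimum CRT intensity
        return MIN_INTENSITY
    if control == 2:                 # pass the value through unchanged
        return voxel_value
    return table[voxel_value]        # control == 3: table look-up

lookup = [min(255, v * 2) for v in range(256)]  # a hypothetical window/level table
frame = [(120, 2), (40, 0), (200, 3), (90, 1)]  # (value, 2-bit field) pairs
print([transform(v, c, lookup) for v, c in frame])   # -> [120, 255, 255, 0]
```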


Figure 14. True 3D machine architecture (based upon Robb [1985]): host interface and mass storage, high-speed memory, a 2D display, and the varifocal mirror assembly (CRT display, mirror, and speaker).

The ANALYZE^38,39 software system produces the input image frames for the True 3D system. ANALYZE is a set of communicating processes running within the workstation, with each process designed to perform a different class of operations. In addition to forming true 3D images, ANALYZE processes perform user interface management, interprocess communication, 3D object-editing tasks, and surface and volume rendering. To facilitate rapid processing of the image, ANALYZE maintains the entire image within a shared block of memory. Robb [Robb and Barillot 1988, 1989; Robb and Hanson 1990; Robb et al. 1986] discusses the operation of all the ANALYZE modules. We summarize their operation here. Figure 15 depicts the hierarchical relationships between the ANALYZE processes.

Image processing begins when the host workstation uses the TAPE or DISK processes^40 to bring an image volume into the machine and convert the data into the format required by the machine. The DISPLAY processes perform manipulation and multiformat display of selected 2D sections within the 3D volume as well as rendering of 3D transmission and reflection displays of the entire data set. DISPLAY can render projections of the volume only along one of the three major axes. The 2D section display process uses the 3D data set to produce a series of 2D slices that lie parallel to one of the three coordinate axes. The 2D section display process supports windowing, thresholding, smoothing, boundary detection, and rotation to produce the desired 2D views.

^38 This software package runs on standard UNIX workstations.
^39 Another software system that uses multiple, independent, cooperating modules to visualize multidimensional medical image data is described in Raya [1990].
^40 Combined into the MOVE processes in Robb and Barillot [1989] and Robb and Hanson [1990].
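The 2D section display's core operation, extracting an axis-aligned slice and windowing it for display, can be sketched as follows. The study dimensions, window bounds, and function name are arbitrary examples rather than ANALYZE interfaces.

```python
# Sketch of axis-aligned section extraction with a display window.
import numpy as np

def section(volume, axis, index, window=(0, 255)):
    plane = np.take(volume, index, axis=axis).astype(float)
    lo, hi = window
    return np.clip((plane - lo) / (hi - lo), 0.0, 1.0)   # windowed for display

study = np.random.randint(0, 1024, (63, 256, 256))        # e.g., 63 MRI slices
print(section(study, axis=0, index=31, window=(100, 600)).shape)   # (256, 256)
```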


Figure 15. ANALYZE processes in the Mayo Clinic True 3D machine (from Robb et al. [1986b]).

DISPLAY forms 3D transmission and reflection displays by using ray-tracing techniques to render parallel projections of the 3D data set. A reflection display, which provides an image closely resembling a photograph, can be either a 3D shaded-surface display or a transparency display. DISPLAY constructs shaded-surface displays by terminating the ray as soon as the ray either intersects a voxel lying within a specified threshold or exits image space. This is essentially a technique for surface rendering by thresholding. DISPLAY computes a transmission display from either the brightest voxel value along each ray (max-min display) or the weighted average of the voxel values along each ray that lie within a specified threshold (radiographic display). DISPLAY generates radiographic displays by applying an a-channel-based image composition technique to a single transparent surface and an underlying opaque surface within the volume. DISPLAY uses thresholding to exclude other surfaces from the final image. DISPLAY determines the value returned by a ray by weighting the total lighting intensity value along the ray by the composite a-channel value along the ray.^41 DISPLAY determines the lighting intensity value along a ray by calculating the reflection at the transparent surface, the light transmitted through the surface, and the reflection at the opaque surface. DISPLAY bases its a-channel calculations upon three factors. These are the orientation of the transparent surface relative to the light source, a weighting value for a based upon the orientation, and an assigned maximum and minimum a-channel range for the transparent surface.

^41 The formula, from Robb and Barillot [1988], is I = Itsp*a + (1 - a)*Iopq, where a = amin + (amax - amin)*cos^p(theta). Here amin and amax define the light transmission coefficient range, theta is the angle between the surface normal and the light source, and p is the distribution coefficient for the a-channel range assigned according to cos(theta); p normally lies in the range 1 <= p <= 4.
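Footnote 41's weighting formula is easy to evaluate directly. The fragment below is a plain transcription of that formula under our own variable names; Itsp and Iopq stand for the intensities computed for the transparent and opaque surfaces along the ray.

```python
# I = Itsp*alpha + (1 - alpha)*Iopq, alpha = amin + (amax - amin)*cos(theta)**p
import math

def reflection_intensity(i_tsp, i_opq, theta, alpha_min, alpha_max, p=2.0):
    alpha = alpha_min + (alpha_max - alpha_min) * math.cos(theta) ** p
    return i_tsp * alpha + (1.0 - alpha) * i_opq

# Transparent surface facing the light (theta = 0) vs. oblique (theta = 60 deg):
print(reflection_intensity(200, 80, 0.0, 0.2, 0.8))
print(reflection_intensity(200, 80, math.radians(60), 0.2, 0.8))
```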


The OBLIQUE^42 process generates and displays arbitrary oblique views of the 3D volume on the 2D CRT. OBLIQUE performs many of the same 2D-slice display functions as DISPLAY but can render arbitrary oblique projections of the volume. The process uses nearest-neighbor interpolation to facilitate rapid rendering of the 2D display. To help orient the operator, OBLIQUE presents a cube-shaped outline of the imaged volume on the CRT, with the current oblique cutting plane position displayed within the cube. As the operator changes the position of the cutting plane, the system responds with a 2D display of the desired section, an update to the cutting plane position within the cube, and a display of the intersection of the oblique plane with the sagittal, transverse, and coronal 2D planes.

The MIRAGE process is the heart of the True 3D display system. This process consists of several modules that format the data, perform arbitrary rotations of the formatted data, window out selected voxel values, specify and display oblique planes through the data, and accomplish dissolution/dissection.^43 MIRAGE displays the results of these computations upon the varifocal mirror as a true 3D image. Data formatting is a preprocessing operation that reduces the 128 input image planes down to 27 display frames and appends a 2-bit field to the voxel values. MIRAGE uses these 27 rendered frames to generate the 3D image. The rendered frames can also be interactively edited by other modules within MIRAGE. MIRAGE uses the 2-bit field to perform dissolution, dissection, and windowing operations. MIRAGE performs digital dissection by setting the voxels in the region to be removed to black. MIRAGE accomplishes dissolution similarly, except that it gradually fades voxels to black over the course of several passes instead of setting them to black in one pass. MIRAGE displays arbitrary oblique planes using one of two operator-selected formats. An oblique plane can be displayed within the volume either by selectively "dimming" the voxels around the desired plane or by superimposing a sparse, bright cutting plane over the data. MIRAGE forms the displays by appropriately reducing or maximizing the intensity of the affected voxels.

After MIRAGE completes its rendering operations, it places the resulting data values into a 27-frame stack within the high-speed buffer. The high-speed buffer dumps the stack to the system's CRT in back-to-front order synchronously with the vibration of the varifocal mirror. The observer sees the rendering in the varifocal mirror. The images appear to be continuous in all three dimensions, and the operator can look around foreground structures by moving his or her head.
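Dissection and dissolution differ only in whether the affected voxels are blanked at once or faded over several passes. A minimal sketch of both operations on a NumPy volume follows; treating black as zero, operating on the raw volume rather than on the 27 rendered frames, and the function names are all simplifying assumptions.

```python
# Sketch of MIRAGE-style dissection and dissolution.
import numpy as np

def dissect(volume, region_mask):
    """Digital dissection: remove the region in a single pass."""
    out = volume.copy()
    out[region_mask] = 0
    return out

def dissolve(volume, region_mask, passes=4):
    """Dissolution: fade the region toward black over several passes."""
    frames, v = [], volume.astype(np.float32)
    for k in range(1, passes + 1):
        faded = v.copy()
        faded[region_mask] *= 1.0 - k / passes    # progressively darker
        frames.append(faded.astype(volume.dtype))
    return frames                                  # last frame equals dissect(...)

vol = np.random.randint(0, 255, (8, 8, 8), dtype=np.uint8)
mask = np.zeros_like(vol, dtype=bool); mask[:, :, :4] = True   # front half
print(dissect(vol, mask).sum(), [f.sum() for f in dissolve(vol, mask)])
```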
The EDIT^44 process provides interactive modification of the data in memory by thresholding and object tracing. The MANIP (MANIPULATE in Robb and Barillot [1989] and Robb and Hanson [1990]) processes support addition, subtraction, multiplication, and division of the voxel values in memory by scalar values or other data sets. These processes can enhance objects in addition to interpolating, scaling, partitioning, and masking out all or portions of the volume. MANIP also provides the capability for image data set normalization, selective enhancement of objects within the image data, edge contrast enhancement, and "image algebra" functions. After the MANIP, EDIT, MIRAGE, and/or DISPLAY processes isolate the region(s) of interest, the BIOPSY^45 processes can be used to gather values from specific points and areas within the region. BIOPSY performs two functions.

^42 Included in the DISPLAY processes in Robb and Barillot [1989] and Robb and Hanson [1990].
^43 Dissolution appears as tissue dissolving, or melting, away. Dissection appears more as a peeling away of overlying layers.
^44 Included in the MANIPULATE processes in Robb and Barillot [1989] and Robb and Hanson [1990].
^45 Included in the MEASURE processes in Robb and Barillot [1989] and Robb and Hanson [1990].

The first function is the display of a set of serial 2D slices extracted from the volume at the orientation set by its server processes. The display allows the operator to specify subregions to be examined interactively. Within each subregion, BIOPSY's second function is to perform point sampling, line sampling, and area sampling with histogram, mean, and standard deviation information provided for the indicated subregion.

The SURFACE^46 process is responsible for identifying and extracting specified surfaces from the volume. The process supports interactive isolation of the surface to be extracted, formation of a binary volume by thresholding using an operator-specified threshold window, and extraction of the binary surface. The process stores the extracted surface as a list of voxel faces and can display the surface on a CRT as a set of contours or as a shaded 3D volume. The PROJECT^47 process creates user-specified projection images of the volume by altering the view angle, dissolution range, and dissection parameters.^48 The MOVIE^49 process displays the projection sequences. MOVIE displays provide the operator with additional information concerning the composition of the volume by exploiting motion parallax, tissue dissolves, dissection effects, and combinations of these effects.

There is no parallelism in the Mayo Clinic True 3D machine. For true 3D images on the varifocal mirror, the system can display up to 27 frames every second (this rate is required to form the true 3D image) at a resolution of 128 x 128 pixels. The True 3D machine is fully operational at the Mayo Clinic, as is the ANALYZE software system. ANALYZE performance depends upon the workstation that hosts the software.

4.5 Medical Image Processing Group Machines

The Medical Image Processing Group (MIPG) has developed a series of machines [Artzy 1979; Artzy et al. 1979, 1981; Chen et al. 1984a, 1984b, 1985; Edholm 1986; Frieder et al. 1985a, 1985b; Herman 1985, 1986; Herman and Coin 1980; Herman and Liu 1978, 1979; Herman and Webster 1983; Herman et al. 1982; Reynolds 1983b] that produce surface renderings of medical images. The machines rely upon shading to provide depth cues. We describe the operation of 3D98, the last version of the machines, in this section. The MIPG series of machines is unique among those discussed in this survey in that the primary design objectives are low system cost with acceptable image quality rather than rapid image formation. The current MIPG machine [Raya et al. 1990; Udupa et al. 1990] adopted these same design objectives, but this machine provides a 3D medical imaging capability using a PC-based system. The MIPG machines achieve their primary design objective by reducing the amount of data to be manipulated.^50 To reduce the data volume, the MIPG researchers developed many of the surface-tracking and contour extraction techniques described elsewhere in this survey. The MIPG machines use the cuberille^51 data model, a derivative of the voxel model.

3D98 uses the minicomputer that controls the x-ray CT scanner to perform all surface-rendering calculations. The 3D98 processing strategy is to reduce the computational burden by reducing the rendered data volume. 3D98 implements this strategy in two stages.

^46 Included in the DISPLAY processes in Robb and Barillot [1989] and Robb and Hanson [1990].
^47 Included in the DISPLAY processes in Robb and Barillot [1989] and Robb and Hanson [1990].
^48 Dissolution range sets the number of projections performed and the amount of dissolution (fading) to be applied over the sequence. The dissection parameters specify the starting and ending coordinate values of the planes to be sliced away and the number of projections to be performed.
^49 Included in the DISPLAY processes in Robb and Barillot [1989] and Robb and Hanson [1990].
^50 Another system for performing 3D medical image rendering using a CT scanner processor is outlined in Dekel [1987].
^51 Cuberille: the division of a 3D space into cubes, much as quadrille is the division of a 2D space into squares, using three mutually perpendicular sets of uniformly spaced parallel planes.

First, 3D98 isolates the object of interest early in the image formation process by applying a segmentation operator to the data volume. The resulting isolated object is represented in a 3D binary array. The second portion of the strategy further reduces the computational burden by using surface-tracking operations. The output from the surface tracker is a list of cube faces that lie upon the surface of the object. When compared to the volume of data output by the CT scanner, the two-step preprocessing strategy greatly reduces the amount of data to be rendered and shaded. The segmentation and surface-tracking functions, however, inevitably discard scene information as they operate. Selection of a new object or a change to the object of interest requires reprocessing the entire volume to detect and render the new desired surface.

The 3D98 image-rendering procedure has three steps. The first step, interpolation and segmentation, produces a binary array of cubic voxels from a set of parallelepiped-voxel-based slices. In this step, 3D98 interpolates the voxel values produced by the CT scanner and extracts the desired object from the volume. A binary array holds the output of this step. The entries in the array that contain ones correspond to cubes within the object of interest. The entries in the array that contain zeros correspond to cubes in the scene background. The second step is surface tracking. The input to this step is the binary array produced in step one; the output is a list of faces of cubes. Each face on the list lies on the border of the object and so separates a voxel labeled 0 from a voxel labeled 1. The faces on the list form a closed connected surface. The surface depicts the object that contains the cube face identified by the user as the seed face for the surface-tracking process. Further details can be found in Herman and Webster [1983] and Frieder et al. [1985b]. The third and final image processing step is shaded-surface display. The input to this step is the surface defined in the list output from step two. The output is a digital image showing the appearance of the medical object when viewed from a user-specified direction.

The 3D98 machine produces images slowly. A single 256 x 256 pixel image can take 1-3 minutes to render and display. There is no parallelism in the machine. There is one CPU, that found in the CT scanner, and it performs all the rendering calculations. The MIPG 3D98 machine is fully operational.
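The flavor of 3D98's two-stage data reduction can be conveyed with a small sketch: threshold the volume into a binary array, then keep only the cube faces that separate a one-voxel from a zero-voxel. The brute-force face scan below stands in for the connected surface tracker summarized in Appendix D, and the threshold window is an arbitrary example.

```python
# Data-reduction sketch in the spirit of 3D98: binary array, then boundary faces.
import numpy as np

def segment(volume, lo, hi):
    return ((volume >= lo) & (volume <= hi)).astype(np.uint8)

def boundary_faces(binary):
    faces = []
    padded = np.pad(binary, 1)                      # zero background border
    for z, y, x in np.argwhere(padded == 1):
        for axis, d in ((0, 1), (0, -1), (1, 1), (1, -1), (2, 1), (2, -1)):
            nb = [z, y, x]; nb[axis] += d
            if padded[tuple(nb)] == 0:              # face between a 1- and a 0-voxel
                faces.append(((z - 1, y - 1, x - 1), axis, d))
    return faces

ct = np.random.randint(0, 1200, (16, 16, 16))
binary = segment(ct, 700, 1200)                     # e.g., a bone window
print(binary.sum(), "object voxels ->", len(boundary_faces(binary)), "faces")
```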

4.6 Pixar/Vicom Image Computer and Pixar/Vicom II

The Pixar/Vicom Image computer^52 and the Pixar/Vicom II^53 are commercially available, general-purpose graphics computers. The Pixar/Vicom machines require a host computer for nonimage computing functions such as network access, the program development environment for the Chap (Channel Processor) C compiler and Chap assembler, and the user interface. We drew the following material from Carpenter [1984], Catmull [1984], Cook [1984], Cook et al. [1984, 1987], Drebin et al. [1988], Levinthal and Porter [1984], Ney [1990b], PIXAR [1988a, 1988b, 1988c, 1988d, 1988e], Porter and Duff [1984], Robertson [1986], and Springer [1986]. We limit the following description and analysis of these machines to the image processing algorithms supplied with the Pixar/Vicom machines. Since the algorithms are software encoded, however, the user can modify them.

The unique processing requirements of the motion picture special effects and animation image composition environment drove the Pixar/Vicom Image computer and Pixar/Vicom II software and hardware designs. The techniques used in both Pixar/Vicom machines for rendering, antialiasing, image compositing, rotation, scaling, and shading are described in Carpenter [1984], Catmull [1984], Cook [1984], and Cook et al. [1984, 1987]. The concepts developed in these papers form the basis of the Pixar/Vicom approach to 3D medical image rendering.

^52 Pixar is a trademark of Pixar, Inc.
^53 Vicom is a trademark of Vicom, Inc.


The needs of the image-compositing process strongly influenced the data structure used in both machines. The data structure, called a pixel, consists of three 12-bit color channels (red, green, and blue) and the 12-bit transparency (alpha) channel required for compositing an image. This four-channel format also supports the display of voxel data. The red, green, and blue channels hold object space coordinate values, and the alpha channel^54 holds the voxel value. The designers chose a four-channel data structure because it permits simultaneous operation upon the four components of each pixel by the four-component parallel processing unit, the Chap. The Chap is the basic processing unit of the Pixar/Vicom computer systems.

The two Pixar/Vicom machines are general-purpose graphics computers that use a limited amount of special-purpose hardware. The hardware provides support for, but not implementation of, computer graphics algorithms as well as 3D medical imaging volume and surface-rendering algorithms. The machines' design exploits SIMD parallel processing at the pixel level to obtain high-speed image generation and allows for system expandability by using a modular machine design. Figures 16 and 17 present the overall design of the two machines.

The basic Pixar/Vicom Image computer system has one Chap, one video board, one memory controller, and three 8MB (24MB total) memory boards, with the option of equipping the system with three 32MB memory boards for a total of 96MB of memory. The system can be expanded to three Chaps. With three Chaps, up to six 32MB memory boards can be installed, giving a maximum of 192MB of memory for the system.

The basic Pixar/Vicom II system consists of one Chap and one Frame Store Processor (FSP)^55 with 12MB of image memory. The Pixar/Vicom II can be expanded to two Chaps and three memory boards. The memory boards can be any combination of FSP memory, Frame Store Memory (FSM), and Off-screen Memory (OSM)^56 boards. Both the Pixar/Vicom Image computer and Pixar/Vicom II systems can support the display of image frames with 48-bit color and 1280 x 1024 pixel resolution at up to 60 frames/sec. The Pixar/Vicom II also supports monochrome image display at 2560 x 2048 pixel resolution.

The Pixar/Vicom machines use the Sysbus (System bus) to connect to the host computer's^57 I/O bus. The Sysbus transmits address and data packets between the host and the Pixar/Vicom machine. The Sysbus gives the host access to the control unit, the address generator, and memory. The four Chap ALU processors are tightly coupled to the Scratchpad image memory. Each Chap communicates with peripherals or other Chaps over the Yapbus (Yet Another Pixar Bus). Chaps communicate with picture memory using the Pbus (Processor Access Bus). The Pbus moves bursts of 16 or 32 pixels between the Chap and picture memory.

^54 The a-channel is a data structure used in 3D medical image compositing to control the blending of voxel values as described in Porter and Duff [1984].
^55 The FSP board contains a video display processor that generates the analog raster image and memory control circuitry that allows Chap and host memory access in parallel with video display operations. The memory appears as a linear array of 32 x 32 four-component pixels called tiles.
^56 The FSM on the FSP contains 12MB of memory, and the OSM board contains 48MB of memory. Neither the FSM nor the OSM contains display hardware. The memory model used in the OSM is identical to that in the FSM, the difference between the boards being that an image stored in an OSM must be moved to an FSP board for display, whereas an image stored in an FSM is moved within the same board to an FSP for display.
^57 The host can be a Sun 3, Sun 4, Silicon Graphics IRIS 3100, Silicon Graphics 4D, or Digital Equipment Corp. MicroVAX II (Ultrix or VMS). Sun is a trademark of Sun Inc. Silicon Graphics and IRIS are trademarks of Silicon Graphics Inc. MicroVAX and Ultrix are trademarks of DEC Inc.

Figure 16. Pixar/Vicom Image computer block diagram (from PIXAR [1988c]): host, Sysbus, and Yapbus connections to other Pixar computers and disks.

Figure 17. Pixar/Vicom II block diagram (from PIXAR [1988c]): host, Sysbus, Yapbus (80 MB/sec), video monitor, disks, other hosts, and other Pixar computers.

The Chap is responsible for communicating with peripherals and other computers, receiving instructions from the host, and performing image processing operations. The memory controller handles scheduling of data transfers between video memory and the Chaps. The Scratchpad memory can store up to 16K pixels (16 1K scan lines) of data for use by the Chap processor. There are four segments within the Scratchpad memory, with each segment used to store one channel of the pixel word.

The heart of the rendering system is the Chap, a four-ALU, one-multiplier machine controlled by an Instruction Control Unit (ICU). The Chap operates as a four-operand vector pipeline with a peak rate of 40 MIPS (10 MIPS/ALU). The Chap can process two different


Figure 18. Chap programmer's model (from PIXAR [1988c]): Pbus, Mbus, and Sbus data paths.

data types: 4-channel, 12-bit/channel integer (the pixel format) and 12-bit single-channel integer. Figure 18 contains a model of the data paths available between Chap components and the remainder of the machine.

A crossbar switch connects the four ALU elements and the multiplier in the Chap with the Scratchpad memory. The switch allows the multiplier and ALUs to operate in parallel, with the Mbuses used to load data into the multiplier and the Abuses used to load data into the ALUs. The ALUs and multiplier use the Sbus (Scalar bus) for reading and writing single pixel channels. The ICU uses the Sbus for distributing instructions to the ALUs, multiplier, and buses. The ICU also calculates Scratchpad memory addresses. The address space for ICU program memory consists of 64K words of 96 bits each and is completely separate from Scratchpad memory.

When performing 3D medical imaging, the machine operates upon volume data sets comprised of voxels organized into slices. At the user's discretion, each voxel can be classified by its coordinates, color, opacity, and/or refractive index during the course of processing the data volume. Processing begins by classifying the voxel-based medical image data into material percentage volumes composed of voxels represented by the pixel data structure. The number of material percentage volumes formed depends upon the number of materials the user wants to visualize. One material percentage volume must be formed for each material. The Pixar/Vicom system performs medical image voxel classification using either user-supplied criteria or a probabilistic classification scheme. The Pixar/Vicom machines represent each voxel in a material percentage volume using the pixel data structure.

ACM Computing Surveys, Vol 23, No. 4, December 1991 Three-Dimensional Medical Imaging “ 469 rial percentage volume, the amount of a material percentage volume and the given material (such as bone, fat, or air) color/opacity value assigned to that ma- present in the original data voxel terial. One product volume must be determines the corresponding pixel’s a- formed for each material percentage vol- channel value. The rendering system ume. Then for each composite pixel data generates the composite volume for a structure in the desired color or opacity given set of appearance conditions by volume, the rendering system sums the forming other pixel-based data structure corresponding a-channel values of all the representations of the original space, product volumes to compute the compos- then composing these representations ite pixel data structure’s a-channel into a final image .58 value. The Pixar/Vicom system performs The rendering system extracts bound- volume compositing by combining the a- aries between materials by forming a channel values for the material percent- density volume. The density volume is age volumes with the a-channel values a composite volume formed from the sum stored in other voxel-based volume repre- of the products of the pixel data strut- sentations derived from the original voxel ture’s u-channel voxel values in each data. These representations are the matte material percentage volume and the cor- volume, the color volume, the opacity responding assigned material density. volume, the density volume, and the The largest density gradient occurs where shaded color volume. The matte, color, rapid transitions between materials with opacity, density, and shading effects de- different densities occur. The system cal- sired in the final image determine the culates the gradient vector between each a-channel values in these volumes. The voxel and its neighbors and stores the rendering software combines the a- result in two separate volumes—the sur- channel values from each volume type to face strength volume and the surface render a single composite view of the normal volume. The surface strength original image space. Matte volumes pro- volume contains the magnitude of the vide the capability for passing an arbi- density gradient. The surface normal trary cutting plane or shape through the volume contains the direction of the gra- volume or for highlighting specific mate- dient. The rendering system uses the rial properties in a region. The color and surface strength volume to estimate the opacity volumes are themselves compos- amount of surface present in each voxel. ite volumes formed using material per- The renderer uses the surface normal centage volumes and the color and volume in shading calculations. opacity values assigned to the type of The rendering system computes the material in each volume. A two-step shaded color volume by compositing procedure computes the a-channel value the surface normal volume, the surface assigned to each pixel data structure in strength volume, the color volume, the the color and opacity composite volumes. given light position(s) and color(s), and First, the rendering system forms prod- the viewer position using a surface re- uct volumes by calculating the product of flectance function. The system also takes the a-channel value for each voxel in a surface scattering and emission into ac- count when computing shaded-color vol- ume values. 
The renderer assumes that the amount of light emitted from a voxel 5sLevoy describes a different, ray tracing-based ap- is proportional to the amount of lumi- proach to 3D medical image volume rendering in Levoy [1988a, 1988b, 1989a, 1989b, 1990a, 1990bl, nous material in the voxel. The system Levoy et al. [1990], Plzer et al. [1989a, 1989bl. determines the amount of luminous ma- Levoy’s technique performs material classification terial in a voxel using the material on a voxel-by-voxel basis as each ray progresses percentage volumes. through the volume and relies upon the use of an octree and adaptive ray termination to rapidly Three-dimensional medical image render 3D medical images. rendering begins by forming the shaded-
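The volume-combination steps described above reduce to a few array operations. The sketch below builds a composite opacity volume from product volumes and derives the density, surface strength, and surface normal volumes with a finite-difference gradient; the material names, assigned opacities and densities, and the (z, y, x) axis order are invented for illustration and are not the Pixar/Vicom data layout.

```python
# Composite opacity volume plus density gradient (surface strength/normal).
import numpy as np

shape = (32, 32, 32)
pct = {"air": np.random.rand(*shape), "soft": np.random.rand(*shape)}
pct["bone"] = np.clip(1.0 - pct["air"] - pct["soft"], 0.0, 1.0)

opacity = {"air": 0.0, "soft": 0.3, "bone": 0.9}     # assigned per material
density = {"air": 0.0, "soft": 1.0, "bone": 2.0}

# Step 1: product volumes; step 2: sum them into the composite volume.
opacity_volume = sum(pct[m] * opacity[m] for m in pct)

density_volume = sum(pct[m] * density[m] for m in pct)
gz, gy, gx = np.gradient(density_volume)              # assumes (z, y, x) axis order
surface_strength = np.sqrt(gx**2 + gy**2 + gz**2)
surface_normal = np.stack([gx, gy, gz], axis=-1) / (surface_strength[..., None] + 1e-8)

print(opacity_volume.shape, surface_strength.max())
```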

Three-dimensional medical image rendering begins by forming the shaded-color volume. Then, the rendering system transforms the shaded-color volume according to the user inputs; finally, it resamples the shaded-color volume for display. All of the above-mentioned volume types need not be formed for every image. The rendering system determines the volumes to compute based upon the type of data to be viewed and the representation desired. The number of volumes computed determines the total time required to render the final image.

The Pixar/Vicom machines perform parallel operations at the pixel data structure level. The number of Chaps in the machine and the type of processing performed determine the amount of parallelism realized. Since there can be no more than 3 Chaps in a machine, no more than 12 pixel data structure level calculations can be performed simultaneously. The high pixel data structure throughput of the Pixar/Vicom machines comes from their use of algorithms designed to use computationally inexpensive calculations at the pixel data structure level and from specially designed processors that can quickly process pixel data structures. The time required to form a 3D medical image from a 256 x 256 x 256 voxel volume can vary from a few minutes to an hour. The amount of time required depends upon the number of slices in the volume and the number of different volumes required to be formed to render the final image. Vicom currently markets the Pixar/Vicom machines.

4.7 Reynolds and Goldwasser's Voxel Processor Architecture

Reynolds and Goldwasser describe the Voxel Processor in Goldwasser [1984a, 1984b, 1985, 1986], Goldwasser and Reynolds [1983, 1987], Goldwasser et al. [1985, 1986, 1988a, 1988b, 1989], and Reynolds [1983a, 1985]. The Voxel Processor machine provides near real-time 2D shaded-surface display while supporting volume matting, threshold window specification, geometric transformation, and surgical-procedure simulation operations on a 3D volume. The Voxel Processor is a special-purpose, distributed-memory machine whose only application is the pipelined processing of voxel-based 3D medical images.^59 The machine processes discrete, voxel-based subcubes of object space in parallel. Each subcube is a 64 x 64 x 64 voxel volume formed by an octantwise recursive subdivision of object space. The machine's design allocates one processor to each subcube, thereby allowing all subcubes to be rendered simultaneously. Successive stages of the image-rendering pipeline merge the subcube-derived 2D surface renderings in back-to-front order to form a single image.

The design objective for the Voxel Processor is to perform real-time 3D medical image rendering. The processing strategy adopted to achieve this goal has two components. One component is installation of the image-merging and small-image-generation algorithms in hardware instead of software. This decision sacrifices implementation flexibility for processing speed. The other component is the use of medium-grain parallelism within the image-rendering pipeline. There are seven components to the image processing pipeline. These components are the host computer, the object access unit, the object memory system (64 modules each storing a 64 x 64 x 64 cube of voxels), the processing elements (PEs), the intermediate processors (IPs), the output processor (OP), and the postprocessor. Figure 19 presents a diagram of the machine.

The host computer handles object data acquisition, database management, and complex object manipulation. The host is also responsible for generating two Sequence Control Tables (SCTs). The processors use the SCTs to control the back-to-front voxel readout sequence as described in Reynolds [1985].

^59 A commercial machine incorporating the parallel primitives introduced in the Voxel Processor architecture is described in Goldwasser [1988a].


Figure 19. Voxel Processor architecture (based upon Goldwasser and Reynolds [1987]): host bus, object access unit, object memory bus, processing and merging pipeline, postprocessor, and CRT display.

The object access unit supports object database management by the host and manages communication between the Voxel Processor and host. The object memory system (OMS) provides the 16MB of RAM required to hold the 256 x 256 x 256 voxel image. The OMS stores the voxel-based object data within 64 memory modules distributed among the 64 processing elements.

The PEs render images from their 64 x 64 x 64 voxel subcubes of the volume. Each PE has two 128 x 128 pixel output buffers, a copy of the SCTs, an input density look-up table, and an arithmetic processor. During operation, each PE accesses the data in its own subcube in back-to-front order. The back-to-front series of computations on the subcube stored at the PE yields the 2D subimage of the volume that is visible given the current set of user-editing inputs. This 2D image contains the visible voxel values and their image space z'-distance values. The PEs place the 2D image into one of their two output buffers for use by the IP.

The next two stages of the pipeline perform the task of merging the 64 separate pictures produced by the PEs into a single picture that portrays the desired

portion of the volume. To perform the merge operation, the eight IPs and the OP use the SCTs to determine the input picture position offsets and memory addresses. Each of the eight IPs merges the minipictures generated by its set of eight input PEs into one of its two 256 x 256 pixel output buffers. The OP forms the final image by merging the contents of the eight IP output buffers into a 512 x 512 pixel frame buffer. Once the image is in the frame buffer, the postprocessor prepares the image for display. The postprocessor is responsible for shading, brightness, and pseudocoloring of the final image using texture maps and either distance or gradient shading.

Two critical steps in the operation of the Voxel Processor are subimage merging at the IPs and OP and mapping the voxels from object space to image space at the PEs. The machine accomplishes these operations under the control of two SCTs. Each SCT contains eight entries, one for each octant of a cube. The host computer determines the two SCTs based on the desired orientation of the output image, sorts the entries in both SCTs into back-to-front order, then broadcasts them to the processors in the machine. SCT1 contains the back-to-front octant ordering for the desired scene orientation. The IPs and OP use SCT1 to perform the back-to-front merging of the subimages output from the PEs and IPs. The PEs use SCT2 for back-to-front generation of subimages from their individual subcubes of the volume.

The Voxel Processor uses parallelism at two different levels to attack the surface-rendering throughput bottleneck. First, within each of the first two stages of the processor pipeline there is parallel, independent operation by the PEs and IPs on disjoint sets of voxels. Second, each stage of the pipeline operates independently due to the dual buffering of the output from each stage. Since each stage operates independently on a different output frame, at any one time there are four frames in the pipeline. This degree of parallelism is responsible for the claimed real-time performance, 25 frames of 512 x 512 pixels each per second, attained by the Voxel Processor. The Voxel Processor was only proposed, never built. The uniprocessor-based Voxelscope II^60 uses slightly modified Voxel Processor algorithms. Dynamic Digital Displays sells the Voxelscope II machine.

^60 Voxelscope II is a trademark of Dynamic Digital Displays, Inc.
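One way to picture an SCT is as the back-to-front ordering of the eight octants for a given view direction, with nearer octants written later so that they overwrite farther ones during the merge. The sketch below derives such an ordering from a view vector; the vector convention and the tie-breaking are our own, not the published table format.

```python
# Back-to-front octant ordering for a viewing direction (scene toward observer).
import numpy as np

def btf_octant_order(view_dir):
    centers = np.array([[(i >> 2) & 1, (i >> 1) & 1, i & 1] for i in range(8)]) - 0.5
    depth = centers @ np.asarray(view_dir, dtype=float)  # projection toward the viewer
    return list(np.argsort(depth))                        # farthest octant first

print(btf_octant_order([0.0, 0.0, 1.0]))   # viewer along +z
print(btf_octant_order([1.0, 1.0, 1.0]))   # oblique view
```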
SUMMARY

The use of 3D techniques for medical imaging is still controversial. Although many physicians consider it a major breakthrough, some prominent ones oppose it, mainly because of the processing assumptions made to create 3D displays. For example, interpolation is commonly used, which, in clinical terms, is not precise but is nevertheless required to achieve scene continuity. Two broad issues remain to be addressed: clinical applicability and clinical utility. Convincing physicians that 3D images have clinical utility requires continued research into 3D medical imaging architectures that provide improved image quality at faster rendering rates (approaching real time) with an easy-to-use user interface. To establish 3D medical imaging's clinical applicability requires psychophysical studies on large numbers of patients. These psychophysical studies have yet to be performed. In addition, psychophysical studies can help to answer a broader question: What are the genuine physician and clinical requirements for 3D medical imaging? Psychophysical studies may disclose that computer cycles spent on one aspect of the 3D medical imaging process could be better spent on another.

In this article we discussed various solutions to the problems of displaying medical image data. These solutions ranged from the general-purpose (e.g., the MIPG machine, which uses insightful algorithms on the computer available in the imaging modality equipment, and the Mayo Clinic ANALYZE machine) to the specialized (the Pixel-Planes machines, Kaufman's Cube, and the Voxel Processor).

In all cases, certain tradeoffs were evident. In particular, we noted that the general-purpose solutions are either slow or use expensive equipment, whereas the special-purpose solutions are quite powerful but limited to solving only one problem. Underuse of these special-purpose machines diminishes their cost effectiveness.

This situation is not an unexpected one; as we implied at the beginning of the article, the field of 3D medical imaging has special needs. It behooves us to learn from the attempts discussed here and to combine these lessons with the state of current technology. Then, we can move closer to providing broadly available general-purpose 3D medical imaging systems that have the performance and effectiveness of contemporary special-purpose systems. Although the presentation of such solutions should not be a part of the present survey, we shall delineate our conclusions and some research avenues we are currently pursuing.

First, the special-purpose-hardware solutions indicated the right approach but do not go far enough: parallel computation, on as massive a level as possible and in a grain amenable to use of the hardware by other tasks. These other tasks should be of interest to medical care providers, since these 3D medical imaging machines have to be situated in hospitals and clinics at least until ultra-high-speed communication media are available. This implies that parallel machine solutions should be provided by fairly general-purpose multiprocessor computers.

Second, the basic data for the display should be the actual slice data, possibly with interpolated data added. Additional processing should only be done when requested by the user. This is a controversial conclusion. It can be supported only after the 3D medical imaging community performs the psychophysical studies mentioned earlier.

Third, the need for new approaches to the 3D medical imaging task is still with us. For the foreseeable future, hardware cannot provide the processing speed needed to generate photorealistic images in real time.^61 Insightful algorithms and concepts that reduce the computational burden are required. This need is especially critical because the medical imaging modality community continues to increase the resolution of their equipment.

Last, any hardware support necessary to create a real-time environment should take the form of coprocessor or special-purpose display hardware. The coprocessor or special-purpose hardware should be separated relatively cleanly from the actual general-purpose processor(s). This separation is present today in some general-purpose, high-powered commercial graphics machines like the Titan, Stellar, and AT&T machines referenced in Table 2.

APPENDIX A: HIDDEN-SURFACE REMOVAL ALGORITHMS

Hidden-surface removal algorithms fall into three classes: scanline based, depth sort, and z-buffer. Scanline-based algorithms have found little application in 3D medical image processing. This section describes four hidden-surface removal algorithms: depth-sort, z-buffer, back to front, and front to back.

A depth-sort algorithm requires sorting the scene elements^62 according to their distance from the observer's viewpoint, then painting them to the frame buffer in order of decreasing distance. See Frieder [1985a] for an example. The depth-sort algorithm developed by Newell, Newell, and Sancha illustrates the major operations. Three steps are performed after rotating the scene.

^61 For a discussion of graphics hardware and its foreseeable performance limitations, see England [1989].
^62 In Appendices A-F, the term scene element is used to signify the basic data element in the contour (1D), surface (2D), and volume (3D) data models.

First, sort scene elements by the largest z'-coordinate of each scene element. Second, resolve conflicts that arise when the z'-extents of scene elements overlap, using the tests described in their paper. Third, paint the sorted list to the display buffer in order of decreasing distance. Since the scene elements nearest the observer are written last, their values overwrite the values of the scene elements they obscure. The resulting image displays only those scene elements visible from the observer's position.

The z-buffer algorithms use the same pixel-overwriting strategy used in the depth-sort algorithm, but they implement the strategy using the frame buffer and a z-buffer. The frame buffer stores pixel intensity values. The z-buffer is a data structure with the same dimensions as the frame buffer. The z-buffer stores the z'-value of the portion of the scene element mapped to each pixel. Before rendering the scene, the z-buffer is initialized with the largest representable z-value, and the display buffer is initialized to a background value. During rendering, the projection of each scene element into image space yields a depth Z(x, y) at screen position (x, y) = (u, v). If the newly computed value for Z(x, y) is less than the current value of Z(x, y) in the z-buffer, then the new Z(x, y) replaces the old Z(x, y) in the z-buffer, and the intensity value for the scene element at Z(x, y) replaces the intensity value at screen position (x, y).

A third technique for hidden-surface removal is back-to-front (BTF) readout of the scene as discussed in Frieder et al. [1985a]. This algorithm accesses the entire data set in BTF order relative to the observer. In most circumstances, this method of access requires sorting the data set by z'-value before the algorithm can be used. In the 3D medical imaging environment, however, sorting is not required because the array of scene elements emerges from the scanner in sorted order. Therefore, the only required work is reading out the scene elements in correct BTF order. This algorithm is simpler to implement than the z-buffer algorithm and requires less memory since there is no z-buffer to maintain. Note that the scene elements can be read out in order of decreasing x' or y' or z' coordinates. The fastest-changing index is chosen arbitrarily. The algorithm has two steps. First, rotate all scene elements to their proper location in image space. Then, extract the scene elements from image space in BTF order and paint them into the frame buffer.

The fourth technique for hidden-surface removal is front-to-back (FTB) readout of the scene as discussed in Reynolds et al. [1987]. Front-to-back algorithms operate in basically the same way as BTF algorithms. There are two differences. First, the FTB sequence for reading out scene elements is determined by increasing z' distance from the observer. Second, once the algorithm has written a scene element value to a point in screen space, no other scene element value may be written to that same point in screen space.
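The z-buffer and BTF strategies just described can be compared side by side on toy data. In the sketch below a scene element is reduced to an (x, y, z', intensity) tuple, which is an assumption made for brevity.

```python
# Minimal z-buffer update and a BTF painter's loop over projected scene elements.
import numpy as np

def zbuffer_render(elements, width, height, background=0):
    frame = np.full((height, width), background, dtype=np.int32)
    zbuf = np.full((height, width), np.inf)
    for x, y, z, intensity in elements:
        if z < zbuf[y, x]:                 # nearer than what is stored
            zbuf[y, x] = z
            frame[y, x] = intensity
    return frame

def btf_render(elements, width, height, background=0):
    frame = np.full((height, width), background, dtype=np.int32)
    for x, y, z, intensity in sorted(elements, key=lambda e: -e[2]):
        frame[y, x] = intensity            # nearer elements written last
    return frame

elems = [(1, 1, 5.0, 100), (1, 1, 2.0, 200), (0, 1, 7.0, 50)]
print(zbuffer_render(elems, 2, 2), btf_render(elems, 2, 2), sep="\n")
```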
APPENDIX B: RAY TRACING

Ray tracing is a technique for locating and shading the visible surfaces within a scene. It operates by following the path taken by rays of light originating at the observer through screen space and on into image space. The interaction of the light color with the material(s) encountered by the ray determines the value assigned to the intersected screen space pixel. The appeal of ray tracing comes from its ability to produce "photorealistic" or "lifelike" images by modeling complicated reflection/refraction patterns and transparency.

Whitted's [1980] paper is the seminal work in ray tracing. It describes a ray-tracing scheme that incorporates a shading model into the recursive ray-casting image-rendering calculations, thereby producing high-quality, photorealistic images. Ray tracing operates by casting primary rays from screen space into image space. Whenever a primary ray intersects a surface, three types of secondary rays are spawned: shadow ray(s), a reflection ray, and a refraction ray.

A ray-casting tree maintains the parent-child relationship between the primary ray and all shadow, reflection, and refraction rays subsequently spawned from it. Shadow rays determine if the object surface is in shadow with respect to each of the light sources. An object, O, is in the shadow of another object with respect to a given light source if the shadow ray intersects an object between the surface of object O and the light source. Reflection and refraction at the object surface are determined by spawning reflection and refraction rays at the object surface intersection point. These two newly spawned rays gather additional information about the light arriving at the ray/object intersection point. Moreover, whenever reflection or refraction rays intersect other surfaces they, in turn, spawn new refraction, reflection, and shadow rays. All primary and secondary rays continue to traverse image space until they either intersect an object or leave the image space volume. When ray casting terminates, the ray-casting tree is traversed in reverse depth order. At each tree node a shading model (Whitted uses Phong's^63) is applied to determine the intensity of the ray at the node. The computed intensity is attenuated by the distance between the parent and current child node and is used as the input to the intensity calculations at the parent node as either reflected or refracted light.

To render antialiased images, Whitted projects four rays per pixel, one from each corner. If the values of the four rays are similar, the pixel intensity is the average ray value. If the ray values have a wide variance, the pixel is subdivided into four quadrants and additional rays are cast from the corners of the new quadrants. Quadrant subdivision continues until the new quadrant corner rays satisfy the minimum variance criteria, whereupon the intensity of the pixel is calculated from the area-weighted sums of the values for the quadrants and subquadrants within the pixel. Whitted recognized the overwhelming computational expense of the intersection testing required by the algorithm. He accelerated intersection test processing by enclosing each object in a regular-shaped bounding volume^64 (a sphere). A ray-bounding volume intersection test determines if the ray passes close enough to the enclosed object to warrant a ray-object intersection test. An object-ray intersection test is performed only if the ray intersects the bounding volume of the object; otherwise the ray continues on its journey through image space. Because it is much simpler to determine if a ray intersects the surface of a regularly shaped volume instead of the possibly convoluted surface of an object, bounding volume intersection testing pays off in reduced image rendering time.

Since Whitted's paper, research on improved methods for ray tracing has concentrated on reducing the computational cost of ray tracing, with additional efforts directed toward improving the photorealism of the rendered image. Techniques for reducing ray value computation time propose using novel data structures [Fujimoto 1986; Glassner 1984, 1988; Kaplan 1987], parallel processing [Dippe and Swenson 1984; Gudmundsson and Randen 1990], adaptive ray termination [@son and Keeler 1988], and combinations of these techniques [Levoy 1988a, 1988b, 1989a, 1989b, 1990a, 1990b; Levoy et al. 1990]. Techniques for improving image quality are presented in Cook [1984], Cook et al. [1984], and Carpenter [1984]. The references cited concerning aliasing are also relevant to improving the quality of ray-traced images.

^63 Any other illumination model may be used, such as the Cook-Torrance model [1982], as long as the model is based upon geometric optic concepts and not radiosity.
^64 A bounding volume is a regularly shaped container, such as a cube, sphere, or cone, which is used to enclose an object within a volume to be rendered.
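The recursive structure Whitted describes (primary ray, shadow test, reflection ray, depth cutoff) can be outlined in a few dozen lines. The toy scene below uses a single sphere, one point light, and a Lambert-plus-reflection shading rule in place of his full model, and it omits refraction and antialiasing; all names and constants are illustrative.

```python
# Skeleton of recursive (Whitted-style) ray tracing over a toy sphere scene.
import math

SCENE = [((0.0, 0.0, 5.0), 1.0, 0.8, 0.3)]   # (center, radius, diffuse, reflectivity)
LIGHT = (5.0, 5.0, 0.0)

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    l = math.sqrt(dot(a, a)); return tuple(x / l for x in a)

def intersect(origin, direction):
    """Nearest sphere hit along the ray: returns (t, sphere) or None."""
    best = None
    for s in SCENE:
        oc = sub(origin, s[0])
        b, c = 2 * dot(oc, direction), dot(oc, oc) - s[1] ** 2
        disc = b * b - 4 * c
        if disc < 0:
            continue
        t = (-b - math.sqrt(disc)) / 2
        if t > 1e-4 and (best is None or t < best[0]):
            best = (t, s)
    return best

def trace(origin, direction, depth=0):
    hit = intersect(origin, direction)
    if hit is None or depth > 3:
        return 0.0                                     # background / recursion cutoff
    t, (center, _, diffuse, refl) = hit
    p = tuple(o + t * d for o, d in zip(origin, direction))
    n = norm(sub(p, center))
    to_light = norm(sub(LIGHT, p))
    shadowed = intersect(p, to_light) is not None      # shadow ray
    local = diffuse * max(dot(n, to_light), 0.0) * (0.0 if shadowed else 1.0)
    r = sub(direction, tuple(2 * dot(direction, n) * x for x in n))
    return local + refl * trace(p, norm(r), depth + 1) # reflection ray

print(trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```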

ACM Computmg Surveys, Vol. 23, No. 4, December 1991 476 0 Stytz et al. and techniques for ray tracing, as well as Farrell’s technique [Farrell et al. 1984, for an extensive annotated bibliography 19851 is also straightforward. It differen- of this field. tiates the object of interest from the remainder of the volume by assign- ing a unique color to the range of voxel APPENDIX C: IMAGE SEGMENTATION BY values that specifies an object. This tech- THRESHOLDING nique assumes that the object corre- Image segmentation by thresholding is a sponds to a continuous range of values technique for locating regions in a scene (i.e., there are no discontinuities in the that have the same property. There are window) and that no other objects are two steps in the process. First, the scene incorrectly assigned the same color. elements must be classified according to Because of the noise and low contrast some criteria, usually the voxel value. encountered in medical image data, seg- For example, the user can define a max- mentation by thresholding often does not imum voxel value (a threshold) or a produce an accurate portrayal of soft user-defined range of voxel values (a tissue or bony objects. As a result, threshold window) to separate the other image segmentation techniques object(s) of interest from the remainder have been developed to supplement or (background) of the scene. Thresholding replace the thresholding process. In gen- assumes that any voxel meeting the cri- eral, these techniques make the segmen- teria for selection, no matter where it tation decision on a voxel-by-voxel basis lies, is part of the object. Since there is based upon information lying in the no general theory of image segmenta- neighborhood of each voxel rather than tion, we illustrate the process with two upon a single global threshold setting. algorithms—a generic algorithm and These image segmentation techniques Farrell’s algorithm [Farrell et al. 1984, can be classified into a few different ap- 1985]. proaches based on the technique(s) used Assume a 2D scene composed of pixels to improve the segmentation procedure. in which there is one object. The values There are the algorithms that apply sta- assigned to the pixels lying within the tistical analysis [Choi et al. 1989; Fan bounds of the object fall within a contin- et al. 1987; Rounds and Sutty 1979], al- uous range of numbers, and no pixel out- gorithms that use neighborhood-based side the object has a value within (instead of global) thresholds [Doherty this range. Scene size and the screen et al. 1986; Tuomenoksa et al. 1983], al- resolution are identical. The goal of gorithms that use a combination of the procedure is to form a binary scene. smoothing and edge detection operators The procedure forms a binary scene by [Ayache et al. 1989; Bomans et al 1990; assigning all the pixels within the object Cullip et al. 1990; Nelson et al. 1988; a value of 1 and all other pixels a value Pizer et al. 1988, 1990a; Raman et al. of O. One way to accomplish this task is 1990], algorithms that use gradients as follows. First, the user indicates a [Trivedi et al. 19861, algorithms that use range of pixel values that encompasses statistical pattern recognition techniques the range of pixel values for the object. [Coggins 1990; Low and Coggins 19901, Second, starting at the upper-left-hand algorithms that use image pyramids and corner of the scene, determine the binary stacks [Kalvin and Peleg 1989; Vincken scene values for screen pixels as follows: et al. 
Farrell's technique [Farrell et al. 1984, 1985] is also straightforward. It differentiates the object of interest from the remainder of the volume by assigning a unique color to the range of voxel values that specifies an object. This technique assumes that the object corresponds to a continuous range of values (i.e., there are no discontinuities in the window) and that no other objects are incorrectly assigned the same color.

Because of the noise and low contrast encountered in medical image data, segmentation by thresholding often does not produce an accurate portrayal of soft tissue or bony objects. As a result, other image segmentation techniques have been developed to supplement or replace the thresholding process. In general, these techniques make the segmentation decision on a voxel-by-voxel basis based upon information lying in the neighborhood of each voxel rather than upon a single global threshold setting. These image segmentation techniques can be classified into a few different approaches based on the technique(s) used to improve the segmentation procedure. There are the algorithms that apply statistical analysis [Choi et al. 1989; Fan et al. 1987; Rounds and Sutty 1979], algorithms that use neighborhood-based (instead of global) thresholds [Doherty et al. 1986; Tuomenoksa et al. 1983], algorithms that use a combination of smoothing and edge detection operators [Ayache et al. 1989; Bomans et al. 1990; Cullip et al. 1990; Nelson et al. 1988; Pizer et al. 1988, 1990a; Raman et al. 1990], algorithms that use gradients [Trivedi et al. 1986], algorithms that use statistical pattern recognition techniques [Coggins 1990; Low and Coggins 1990], algorithms that use image pyramids and stacks [Kalvin and Peleg 1989; Vincken et al. 1990], algorithms based upon expert systems and artificial intelligence techniques [Acharya 1990; Chen et al. 1989a; Cornelius and Fellingham 1990; Nazif and Levine 1984; Raya 1989a, 1989b, 1990a; Stansfield 1986], and algorithms that are combinations of these techniques [Gambotto 1986].


APPENDIX D: SURFACE-TRACKING ALGORITHMS

This appendix presents three algorithms that have been used for boundary detection and surface tracking. These are the algorithm proposed in Liu [1977], the algorithm proposed in Artzy et al. [1981], and the "marching cubes" algorithm described in both Lorensen and Cline [1987] and Cline et al. [1988].

Liu's algorithm extracts a surface from a volume by exploiting three properties of an object: contrast between the object and surrounding material(s), connectivity, and agreement with a priori knowledge. The algorithm exploits these three properties, along with the requirement that at least one and at most two neighbors of every boundary voxel also lie on the boundary, to determine the boundary voxels that outline the object. The possible boundary voxels are considered to lie along the curves in 3D space that have high voxel-value gradient values. Boundary detection begins with the user's selection of an initial boundary voxel. The seed voxel is then localized based on a maximum voxel-value gradient criterion. The next and all subsequent boundary voxels are located by examining the neighbors of the current boundary voxel. The neighbor that lies across the largest voxel-value gradient and was not previously identified as a boundary voxel becomes the current boundary voxel. If none of the gradient values at a given boundary voxel exceeds the minimum gradient value, the algorithm backtracks to the immediately preceding boundary voxel. The preceding boundary voxel once again becomes the current voxel. The algorithm then searches for another boundary voxel from among the set of voxels around the current voxel. Liu's algorithm restricts its search to those voxels not previously considered as a boundary voxel. If none of the restricted set of voxels around the current voxel meets the minimum voxel-value gradient criteria, the algorithm backtracks again. The algorithm continues backtracking until it finds a boundary voxel. The search for the object boundary then resumes from the newly identified boundary voxel.

Artzy's algorithm models the surface of an object as a directed graph and translates the problem of surface detection into a problem of tree traversal. Artzy et al. [1981], in conjunction with Herman and Webster [1983], prove that, given a directed graph G whose nodes correspond to faces of voxels separating the object in the scene from background, the connected subgraphs of G correspond to surfaces of connected components in the scene. Therefore, finding a boundary surface corresponds to traversing a subgraph of a digraph. A key property of the digraph is that every node in the digraph has indegree two and outdegree two. This characteristic, in conjunction with the fact that every connected component of the digraph is strongly connected, supports two important conclusions. First, for every node in the graph there is a binary spanning tree rooted at that node. Second, the binary spanning tree is a subgraph of the digraph, and it spans the connected component containing the given node. This property of the digraph guarantees that given a single face of a voxel on the boundary of an object, every voxel face in the boundary can be found. Gordon and Udupa [1989] describe an improvement to this algorithm.

Artzy's surface detection algorithm assumes a 3D array representation for the 3D scene. The object in the scene is defined to be a connected subset of elements of the 3D array. First, a binary volume representation of the object is formed by segmentation. The one-voxels in the binary volume representation are all the voxels that may comprise the object of interest, but the object does not contain all the one-voxels. The algorithm separates the object and non-object exposed one-voxel faces by finding the connected subset of exposed one-voxel faces. The algorithm logically constructs the binary spanning tree as it proceeds by locating exposed, adjacent one-voxel faces and adding them to the tree. The selected faces are the nodes in the tree.

ACM Computing Surveys, Vol 23, No 4, December 1991 478 “ Stytz et al. tree. Instead, it maintains the important the surface. This preliminary processing current aspects of the tree’s status using roughly locates the surface /cube inter- two lists. These lists are the list of once- section, since the surface intersects the visited nodes and the list of nodes to marching cube only along the edges con- be considered. taining a one-vertex and a zero-vertex. To start the algorithm, the user speci- By inspection, the authors determined fies a seed voxel. An exposed face of the that there are 14 unique cases of sur- seed voxel is the root node of the binary face/cube intersection to be considered. spanning tree. When the algorithm visits After the algorithm determines the type a node in the binary spanning tree, of cube/surface intersection, the location the node is made current and is checked of the intersection along each “marching against a list of once-visited nodes. If the cube” edge is calculated. The algorithm algorithm previously examined the node, estimates the location of the intersection the node is removed from further con- using the result of a linear interpolation sideration since it can never be visited of the data values at the cube vertices on again. If the node has not been previ- the edge. The point of intersection ously examined, it is added to the list of on each edge is a vertex of a triangle. For once-visited nodes. The algorithm then shading purposes, the algorithm com- adds all the exposed one-voxel faces adja- putes the surface normal at each triangle cent to the current node to the list of vertex based on the marching cube ver- nodes to be considered. Using a first-in tex gradient values. By taking the cen- first-out (FIFO) discipline, the next node tral difference of the marching cube to visit is then selected from the list of vertex gradient values along the edge nodes to be considered. Then, the algo- containing the triangle vertex, the algo- rithm examines the selected node. The rithm estimates the gradient, and so the algorithm terminates when the queue of surface normal at the enclosed triangle nodes to be considered is empty. vertex. This completes processing for the The third surface-tracking algorithm is current location of the cube, so the trian- the “marching cubes” algorithm out- gle vertices and vertex normals are out - lined in Cline et al. [1988] and Lorensen put and the cube marches deeper into and Cline [19871. This technique pro- object space. cesses the data in a medical image volume in scanline order, creating a APPENDIX E: SHADING CONCEPTS triangle-tiled model of a constant density surface as it moves along. The algorithm Shading is a computationally intensive has two basic steps. The first step locates process that attempts to depict the the desired surface within the cube and appearance of a rendered object using defines it with triangles. The second step an illumination model. The illumina- calculates the normals to the surface at tion model depicts the interaction of each vertex of each triangle in the cube. different types of light, from possibly The marching cube performs surface different sources, at different locations, location. It is a logical cube created from with the materials in the scene on a eight adjacent points, four from each of pixel-by-pixel basis. The characterization two adjacent slices of data gathered by a must also consider the position of each medical imaging modality. 
As the first material relative to an observer, the po- step in locating the surface, the data sition of each material relative to each value at each vertex of the cube is exam- individual light source, and the composi- ined. The algorithm assigns a 1 to a tion and light reflection/transmission vertex if its value meets or exceeds the characteristics of each material. Illumi- threshold value that defines the object; nation models are based on the concept otherwise it assigns a value of O. One- that the amount of light reflected from vertices are within or on the surface and transmitted through an object makes of the object; zero-vertices are outside the object visible. The wavelength of the

ACM Computmg Surveys, Vol. 23, No. 4, December 1991 Three-Dimensional Medical Imaging * 479 incident light in combination with the dull, matte surface. Diffuse reflection can surface properties of the object determine be modeled using Lambert’s cosine law the amount of light absorbed, reflected, for reflection from a point light source for and transmitted by the object, as well as a perfect diffuser: the color of the object as seen by the human eye. The interaction of incident I= Ilk~cos8 05+ (1) light with the surface of an object can be characterized using a combination of 1 is the reflected intensity, 11 is the wavelengths in the incident light, its incident light from the point light source, direction, the type of light source (area, k ~ (defined over the range O-1) is the point, or diffuse), the orientation of the diffuse reflection constant for the reflect - object surface, and the composition of ing material, and 0 is the angle between the surface. the surface normal and the light direc- The two broad classes of illumination tion. Because of the large computational models used in computer graphics are cost involved in computing the illumina- the global and local illumination models. tion for an area light source, area light Global illumination models provide a sources are usually treated as a constant higher quality rendering than local mod- term and linearly combined with the els, albeit at the cost of additional ren- diffuse reflection computed with (1) dering computations. They characterize yielding the appearance of each object by consid- ering many parameters for each object. A partial list of these parameters includes the surface properties and orientation of materials, the location, intensity, area, I. is the intensity of the area light and color of all the light sources source and k ~ is the diffuse reflection that shine upon the object, the amount of constant for the reflecting material. light reflected from surrounding objects, Because the perceived intensity of light the distance from the light source(s) to reflected from an object falls off linearly the object, and the observer’s position. with the distance between observer These considerations allow the renderer and object, the intensity for the point to determine the amount of refracted light source computed in (2) is linearly light transmitted through the object and attenuated. the surface specular and diffuse reflec- Ilk~cos8 tion. Local illumination models need only I = I~k. + 050<;. compute the diffuse reflection at the d+K surface of an object. Local illumination (3) models base their computations on the surface orientation of the object, In (3), d is the distance from the light illumination from a single point light source to the object surface, and K is an source (possibly in conjunction with arbitrary constant. an area light source), and the ob- Specular reflection is the reflection server’s position. caused by light bounding off the outer The following global illumination surface of an object. It has a directional model [Rogers 1985] considers diffuse and component and typically appears as a specular reflection as well as refraction highlight on the surface of an object. The effects to render the surface of the object. empirical formulas described by Phong Figure 20 contains a 2D depiction of the [19751 as well as Cook and Torrance relationships involved. [19821 model this phenomena. 
The fol- Diffuse reflection is caused by the lowing discussion is based on Phong’s absorption and uniformly distributed model, which considers the angle of the reradiation of light from the illuminated incident light, i, its wavelength, X, the surface of an object and appears as a angle between the reflected ray and the

ACM Computing Surveys, Vol 23, No 4, December 1991 480 ● Stytz et al.

Light viewer Scmrw. ,T “4 N ,’ s

L

s l!mfaw / (3

Legend ,,’ ~T_N CastRays Tmmwlmntltd PhyslcdL@t Path Light 4 D DiffuseReflectionRay R SpecularReflectionRay T TmmmmedLrghtRay N SurfaceNonmdVector s Lm-of-Sight Vector L L@-SOurceVector

Figure 20. Surface-shading relationships, Legend:—, cast rays; ---, physical light path; D, dif~use reflection ray; R, specular reflection ray; T, transmitted light ray;“.N, surface normal vector; S, line-of-sight vector; L, light-source vector,

line of sight, a, and the spatial distribu- described above, For exam~le,. . Cook tion of the incident light to compute the and Torrance [1982] describe a more com- specular reflectance of an object, 1s: prehensive model that considers the material properties of each object and IS = @(i, A)cos’ a. (4) accurately portrays the resulting reduc- tion in the intensity of reflected light. The reflectance function, w ( i, h), gives Their model also accounts for the block- the ratio of specularly reflected light to ing of ambient light by surrounding ob- incident light. The Cosn term approxi- jects and the existence of multiple light mates the spatial distribution of the sources of different intensities and specularly reflected light. Because W( i, h) different areas. is a complex function, it is usually Reflection is not the only lighting effect replaced by an experimentally deter- to be considered. If transparent objects mined constant, k~; k~ is selected to yield appear in the scene, refraction effects a Pleasing appearance. Combining (3) must be allowed for. Snell’s law de- and (4) yields the desired global scribes the relationship between the inci- illumination model: dent and refracted angles:

~l(kd COS@+ kscos’ a) (6) I = Iak?a + qlsin8 = rlz sin f)’. d+K

In (6), ql and qz are the indices of Os 05:. (5) refraction of the two mediums; 0 is the incident angle, and 0‘ is the angle of There are more accurate, and computa - refraction. To eliminate the effects that tionally intensive, models than the one can arise from an image space approach

ACM Computing Surveys, Vol. 23, No. 4, December 1991 Three-Dimensional Medical Imaging ● 481 to refraction, the refraction computations tions they make to compute the local are typically performed in object space surface normal. The Gouraud and Phong using ray tracing. Simple implementa- algorithms, on the other hand, are com - tions of transparency effects ignore putationally intensive and commonly refraction and the decrease in light in- used when rendered images of the high- tensity with distance. These implemen- est quality are desired. The Gouraud tations linearly combine, or composite, model relies on a global illumination the light intensity at an opaque surface, model to provide the light intensity at Iz, with the light intensity at the trans- each polygon vertex, then uses this value parent surface, 11, lying between it and to compute the light intensity at points the observer. Taking t as the trans- between vertices. Phong takes the nor- parency factor for 11, the relationship is mal computed at each polygon vertex and interpolates these values along the edges I=tll+(l–t)12. (7) in the scene, with the final intensity for If IZ is transparent, the algorithm is each point determined using a global il- applied recursively until it encounters an lumination model. opaque surface or until it reaches the Distance-only, or depth, shading is the edge of the volume. More realistic trans- simplest of the six techniques. Depth parency effects can be achieved by shading does not estimate the surface considering the surface normal when normal at the shaded points. Depth shad- computing t. ing assigns a shade to a point on a visible The computational cost incurred when surface based upon a distance criterion. using a global illumination model miti- The criterion can be the distance of the gates against its use for 3D medical im- point from the assumed light source, from age rendering, especially when rapid the observer, or from the image space image generation is an important consid- z’-axis origin. This computational sim- eration. When combined with ray trac- plicity comes at some cost, however, in ing, however, a global illumination model that it tends to obscure surface details. yields high-quality 3D medical images. This technique is not an object space or image space surface normal estimation shading algorithm. APPENDIX F: SHADING ALGORITHMS Gradient shading [Chen and Sontag The following discussion provides an 1989; Gordon and Reynolds 1983, 1985; overview of shading algorithms that have Reynolds 1985] is an image space surface proven useful in the 3D medical imaging normal estimation algorithm. It com- environment. This overview is not all putes a realistically shaded image at rel- inclusive but instead provides a repre- atively low computational cost. Because sentative subset of approaches to this technique produces high-quality ren- the problem. These algorithms present derings, it is useful in the 3D medical several different approaches to rend- imaging environment. The key to gradi - ering shaded surfaces. The first four ent shading is its use of the z:gradient. ,.”av;4 L_. . -1 distance-only, gradient, The z~gradient approximates the amount g~~y-sc~lti ~. ddient, and normal-based of change in the z~dimension between contextual shading— use local illumina - neighboring entries in the z~buffer. tion models. 
These algorithms do not After determining the z:gradient, the lo- consider lighting effects such as refrac- cal surface normal can be approximated, tion in their calculations but neverthe- and using this, the light intensity at the less provide usable 3D medical image surface can be calculated. The algorithm shading at a reasonable cost in time. The estimates the surface normal by first gradient and normal-based contextual finding the gradient vector Vz = (a z /a U, shading algorithms can be used to imple- dz/tlu, 1). The derivatives az/au and ment an approximation to reflection and a z /d v can be estimated using the central are notable for the simplifying assump - difference, the forward difference, the

ACM Computing Surveys, Vol. 23, No 4, December 1991 482 - Stytz et al. backward difference, or a weighted aver- each face, the algorithm stores the ar- age differencing technique. Taking i, j to rangement around the scene element face be the current location in the z-buffer, in a neighbor code. Chen et al. [1984a, the d z /d u forward difference is: d z / d U,f 19851 describe how the neighbor codes = z: – Z:+l. Using the backward differ- can be used to approximate the surface ence, az/au, b = z; – Z;.l. Using the normals for the face of the central scene central difference, dz/dul C = Z;+l – z;_l. element. If a surface-tracking procedure The d z/d u is estimated similarly. The is performed before the shading opera- surface normal, N, is computed using Vz. tion, the context for each face can be N=O. determined at little additional computa- Gray-scale gradient shading described tional cost. When rendering a medical in Hohne [1986], is an object space sur- image, the algorithm can calculate the face normal estimation method. This shade at each face using the techniques technique, like gradient shading, pro- described in Chen et al. [1984al or any duces high-quality 3D medical images. other local illumination model. Gray-scale gradient shading uses the Gouraud [19711 shading attempts to gray-scale gradient between neighboring provide a high-quality rendered image by voxel values in the z-buffer to approxi- calculating the shading value at points mate the surface normal. This approach between scene element vertices. We clas- uses the ratio of materials comprising a sify this algorithm as an object space voxel (i. e., the gray-scale voxel value) in surface normal estimation technique. conjunction with the ratios in neighbor- Gouraud shading relies upon the obser- ing voxels to compute the depth and di- vation that shade and surface normal rection of the surface passing through vary in the same manner across a sur- the voxel. The gray-scale gradient ap- face. Gouraud’s technique first calculates proximates the rate of change in the ra- the surface normal at a scene element tio of materials between the voxel and its vertex, then determines the shade at neighbors and hence the rate of change the vertex. The algorithm computes scene in surface direction. After determining element vertex normals directly from the the gray-scale gradient, the local normal image data. The algorithm then bilin- can be approximated using a differencing early interpolates these vertex shade val- operation; and using this, the light inten- ues to estimate the shade at points lying sity at the surface can be calculated. along the scanline connecting the ver- Normal-based contextual shading tices. Since the interpolation only pro- [Chen et al. 1984a, 19851 is an object vides continuity of intensity across scene space surface normal estimation method. element boundaries and not continuity in This technique calculates the shade for change in intensity Mach band effects the visible face of a scene element using are evident in the images shaded with the estimated surface normal for the visi- this algorithm. ble face. The algorithm estimates the Phong shading [Burger and Gillies surface normal from two factors. First, it 1989; Foley et al. 1990; Phong 1975; uses the orientation of the visible face for Rogers 1985; Watt 19891 lessens the Mach the scene element with respect to the band effect present in Gouraud shading direction to the light. Secondj the algo- at the cost of additional computation. 
rithm considers the orientation of the Phong’s approach to shading interpolates faces of adjacent scene elements with re- normal-vectors instead of shading val- spect to the direction to the light. These ues. First, the algorithm computes the two factors provide the context for the scene element vertex normals directly face. For each edge of a face to be shaded, from the image data. Then, the algo- the algorithm classifies the face adjacent rithm computes the surface normal at to that edge into one of three possible each scanline/scene element edge inter- orientations, yielding a total of 81 possi- section point by bilinearly interpolating ble arrangements of adjacent faces. At the two scene element vertex normal val-

ACM Computmg Surveys, Vol 23, No. 4, December 1991 Three-Dimensional Medical Imaging ● 483 ues for that edge. Finally, the algorithm ARTZY, E., FRIEDER, G., AND HERMAN, G. T, 1981. estimates the surface normal at points The theory, design, implementation and evalu- ation of a three-dimensional surface detection along each scanline within the scene ele - algorithm. Comput. Graph. Image Process. 15 ment. The algorithm computes these (Jan.), 1-24. estimates by bilinearly interpolating AT&T PIXEL MACHINES, INC. 1988a. AT& T Ptxel the normals computed at the two scan- Mach ines PXM 900 Series Standard Software. line/scene element edge intersection AT&T PIXEL MACHINES, INC. 1988b. PIClib: The points for that scanline. Of the six Pixel Machine’s Graphics L,Lbrary. shading techniques discussed in this AT&T PIXEL MACHINES, INC. 1988c. RA YLib: The appendix, Phong shading renders the Pixel Machine 900 Series Ray Tracing Library. highest quality images. We classify this AT&T PIXEL MACHINES, INC. 1988d. The Ptxel algorithm as an object space surface nor- Machine System Architecture: A Technical mal estimation method. Report. AUSTIN, J. D., AND VAN HOOK, T. 1988. Medical image processing on an enhanced workstation. ACKNOWLEDGMENTS Medical Imaging II: Image Data Management and Display. SPIE 914, 1317-1324. The authors are indebted to the employees of Pixar AYALA, D., BRUNET, P., JUAN, R., AND NAVAZO, I and the researchers whose machines are mentioned 1985. Object representation by means of non- for reviewing their particular portions of this sur- minimal division quadtrees and octrees, ACM vey. We also thank Gabor Herman and the referees Trans. Graph. 4, 1 (Jan.), 41-59. for their suggestions and criticisms. AYACHE, N., BOISSONNAT, J. D., BRUNET, E., COHEN, L., CHIEZE, J. P., GEIGER, B., MONGA, O , REFERENCES ROCCHISANI, J. M., AND SANDER, P. 1989. Building highly structured volume representa- ABRAM, G., AND WESTOVER, L, 1985. Efficient tions in 3D medical images. In Proceedings of alias-free rendering using bit-masks and look- the International Symposium: CAR ’89, Corn up tables. Comput. Graph. 19, 3 (July), 53-59. puter Assisted Radiology (Berlin, Germany), ACHARYA, R. 1990. Model-based segmentation for Computer Assisted Radiology Press, pp. multidimensional biomedical image analysis. 766-772, Biomedical Image Processing, SPIE, 1245, BACK, S., NEUMANN, H., AND STICHL, H. S. 1989. (Santa Clara. Ca/if.), pp. 144-153. On segmenting computed tomograms. In Pro- AKELEY, K. 1989. The silicon graphics 4Di ceedings of the International Symposium: CAR 240GTX superworkstatlon. IEEE Comput. ’89, Computer Assisted Radiology (Berlin, Graph. Appl. 9, 4 (July), 71-83. Germany), Computer Assisted Radiology Press, AMANS, J.-L., AND DARRIER, P. 1985. Processing pp. 691-696. and display of medical three dimensional ar- BAKALASH, R., AND KAUFMAN, A. 1989. Medicube: rays of numerical data using octree encoding. A 3D medical imaging architecture. Comput. Medical Image Process. SPZE 593, (Cannes, Graph. 13, 2, 151-154. France, Dec.), 78-88. BARILLOT, C., GIBAUD, B., Lm, O., MIN, L. L., APGAR, B., BERSACK, B., AND MAMMF:N, A. 1988. A BOULIOU, A., LE CERTEN, G , COLLOREC, R., display system for the Stellar graphics super- AND COATRIEUX, J. L. 1988. Computer graph- computer model GS1OOO. Comput. Graph. 22, ics in medicine: A survey. CRC Crztzcal Rev. 4 (Aug.), 255-262. BiomecZ. Eng. 15, 4 (Oct.), 269-307. ARDENT COMPUTER. 1988a. DORE’ Technical BELL) C. G., MIRANK~R) G. S., ANrI RUBINSTEIN, J. Oueruiezo. (April). 1988. 
Supercomputing for one. Spectrum 25, ARDENT COMPUTER. 1988b, The Tztan Graphics 4 (Apr.), 46-50. Supercomputer: A System Oueruzew. BENTLEY, M. D , AND KARWOSIU, R. A. 1988 Esti- ARDENT COMPUTER. 1988c. Tztan Specifications. mation of tissue volume from serial tomo- ARTZY, E. 1979. Display of three-dimensional in- graphic sections: A statistical random marking formation in computed tomography. Comput. method. Inuestzgatzue Radzol. 23, 10 (Oct.), Graph. Image Process. 9, 196-198. 742-747. ARTZY, E,, FRIEDER, G., HERMAN, G. T., AND LIU, H. BERGER, S. B., LEGGIERO, R. D., MARCH, G. F., K. 1979. A system for three-dimensional dy- OREFICE, J. J., TUCKER, L. W , AND REIS, D J namic display of organs from computed tomo- 1990. Advances in microcomputer biomedical grams. In Proceedings of the 6th Conference on imaging. In Proceedings of the 11th Annual Computer Applications in Radzology & Com- Conference and Expos~t~on of the National Com- puter/ Azded Analysis of Radiological Images puter Graphics Association, NCGA ’90 (Newport Beach, Calif., June), pp. 285-290. (Anaheim, Calif,, Mar. 19-22), pp. 136-145.

ACM Computing Surveys, Vol. 23, No. 4, December 1991 484 ● Stytz et al.

BOMANS, M,, HOHNE, K.-H , TIEDE, U , AND REIMER, national Symposzum on Medical Images and M 1990. 3-D segmentation of MR Images of Icons (Arhngton, Virginia, July), IEEE Com- the head for 3-D display. IEEE Trans. Med puter Society Press, pp 309-316, Imaging 9, 2 (June), pp. 177-183. CHEN, L -S,, HERMAN, G, T., REYNOLDS, R A , AND BOOTH, K. S., BRYDEN, M P , COWAN, W. B , UDUPA, J K. 1985. Surface shading in the MORGAN, M. F., AND PLANTE, B. L. 1987. On cuberille envu-onment. IEEE Comput. Graph. the parameters of human visual performance: Appl 5, 12 (Dee ), 33-43. An investigation of the benefits of antialiasmg. CHEN, S.-Y., LIN, W.-C., AND CHEN, C.-T 1989a IEEE Comput Graph Appl 7, 9 (Sept ), pp. An expert system for medical Image segmen- 34-41 tation Medical Imagwzg III: Image Process- BORDEN, B. S 1989 Graphics processing on a ing, SPIE 1092, (Newport Beach, Calif. ), pp. graphics supercomputer IEEE Comput. 162-172. Graph. Appl. 9, 4 (July), pp. 56-62 CHEN, S -Y , LIN, W -C., .AND CHEN, C.-T. 1989b. BROIIENSHIRE, D. 1988. REAL 3-D stereoscopic Improvement on dynamic elastic interpolation Imaging. In Proceedings of the 9th Annual technique for reconstructing 3-D objects from Conference and E.tpositlon of the National serial cross-sections. In Proceedings of the In- Computer Graphm Association, NCGA ’88 ternational Symposium: CAR ’89, Computer (Anaheim, Calif., Mar 20-24), pp 503-514 Asmsted Radw/ogy (Berlin, Germany)j Com- BURGER, P,, AND GILLIES, D. 1989. Interactive puter Assisted Radiology Press, pp. 702-706. Computer Graphzcs. Addison-Wesley, Reading, CHEN, L.-S , AND SoNTA~, M. R. 1989 Represen Mass. tation. display. and manipulation of 3D dlgltal BYRNE, J, P , UNDEILL, P. E., AND PHILLIPS, R. P scenes and them medical applications. Comput 1990 Feature based image regustratlon using V1s~on, Graph , Image Process. 48, 2 (Nov ), parallel computing methods In Fzrst Confer- 190-216, ence on Visualization m BLomedzcal Computmg CHOI, H S., HAYNOR, D R , AND Km, Y 1989 (Atlanta, Georgia, May 22-25), IEEE Com- Multivariate tissue classification of MRI im- puter Society Press, pp. 304-310. ages for 3-D \,olume reconstruction: A statisti- CAHN, D U. 1986 PHIGS: The programmer’s cal approach. Medl cal Imaging III Jmage hierarchical interactive graphics system. In Processing, SPIE 1092, (Newport Beach, Proceedings of the 7th Annual Conference and Calif ), pp 183-193 Exposition of the Natmnal Computer Graphzcs CHUANG, K.-S. 1990 Advances m microcomputer Assoczatzon, NCGA ’86 (Anaheim, Calif , May biomedical imagmg, In Proceedings of the 11th 11-15), pp. 333-355. Annual Conference and Exposltzon of the Na- CARLBOM, I., CHAIIRAVARTY, I., AND VANDERSCHEL. tional Computer Graphws Assoclatzon, NCGA D 1985 A hierarchical data structure ’90 (Anaheim, Cahf Mar 19-22), 136-145 for representing the spatial decomposition of CHUANG, K.-S., AND UDUP.A, J K. 1989 Boundary 3-D objects. IEEE Comput. Graph Appl. 5, 4 detection m gray-level scenes In Proceedings (Apr ), 24-31, of the 10 Annual Conference and Exposition of CARPENTER, L. 1984. The A-buffer: An an- the Natzonal Computer Graphics Assoczatlon, tlaliased hidden surface method. Comput. NCGA ’89 (Philadelphia, Penn , Apr 17-20), Graph. 18, 3 (July), 103-108. pp. 112-117 CA.TMULL, E. 1984. An analytic visible surface al- CLINE, H, E., LORENSON, W E , LUDKE, S , gorlthm for independent pixel processing CR~WFORD, C. R , AND T~ETIZR, B. C. 1988. Comput. Graph 18, 3 (July), 109-115. Two algorithms for the three-dimensional re CHEN, C. 
H , AND AGGARWAL, J K 1986 Vol- construction of tomograms. Med Phys 15, 3 ume~surface octrees for the representation of (Mayl’June), 320-327 three-dimensional objects. Comput Vision, COGCHNS,J M. 1990 A multiscale description of Graph.. Image Process. 36, 1 (Oct.). 100-113 image structure for segmentation of biomedical CH~N, H. H , AND HUANG, T. S. 1988. A survey lmage~. Fu-st Conference on Vzsua[Lzatu>n in of construction and manipulation of octrees. Bzomedzcal Computzng (Atlanta, Georgia, May Comput. Vzsion, Graph., Image Process. 43, 22-25), IEEE Computer Society press. pp 409-431. 123.130

CHEN, L. S., HERMAN, G T., REYNOLDS, R A , AND COHEN, D., KAUFMAN, A , AND BAKALASH, R 1990 UDUPA, J. K 1984a. Surface rendering in the Real-time discrete shading. Vzsual Comput. 6, cuberille environment. Tech. Rep. MIPG 87, 3 (Feb.), 16-27. Dept. of Rad]ology, Umv. of Pennsylvania. CooK, R. L 1984. Shade trees Comput Graph CHEN, L. S , HERMAN, G T., ME-I-ER, C. R,, 18, 3 (Jul.), 223-231. REYNCMM, R A , AND UDUP~, J K. 1984b 3D83: An easy-to-use software package for CooK, R L., AND TORRANCE, K. E 1982 A re- three-dimensional display from computed to- flectance model for computer graphics ACM mograms IEEE Computer Society Joznt Inter- Trans. Graph 1, 1 (Jan ), 7-24.

ACM Computmg Surveys, Vol 23, No 4, December 1991 Three-Dimensional Medical Imaging ● 485

COOK, R. L., PORTER, T., AND CARPENTER, L. 1984. DIPPE, M. A. Z., AND WOLD, E. H. 1985 Antialias- Distributed ray tracing. Comput. Graph. 18, 3 ing through stochastic sampling Comput. (July), 137-145. Graph. 19, 3 (July), 69-78 COOK, R. L., CARPENTER,L., AND CATMULL, E. 1987. DOHERTY, M. F., BJORKLUND, C. M , AND NOGA, M. The Reyes image rendering architecture. C’om - T. 1986. Split-merge-merge: An enhanced put. Graph. 21, 4 (July), 95-102. segmentation capability. In Proceedings of the CORNELIUS, C., AND FELLIN~HAM, L. 1990. Visual- 1986 IEEE Computer Society Conference on izing the spine using anatomical knowledge. Computer Vzsion and Pattern Recogn ltzon Extracting Mean ing from Complex Data: Pro- (Miami Beach, Fla., June 22-26), IEEE Com- cessing, Display, Interaction, SPIE 1259, (Santa puter Society Press, pp 325-328. Clara, Calif., Feb. 14-16), pp. 221-231. DREBIN, R. A., CARPENTER, L., AND HANRAHAN, P. 1988. Volume rendering. Comput. Graph. 22, CROW, F. C. 1977. The aliasing problem in com- 4 (Aug.), 65-74. puter-generated shaded images. Commun. ACM 20, 11 (Nov.), 799-805. DUFF, T. 1985. Compositing 3-D rendered images. Comput. Graph, 19, 3 (July), 41-44. CROW, F. C. 1981. A comparison of antialiasing techniques. IEEE Comput. Graph. Appl., 1, 1 EDHOLM) P. R., LEWITT, R. M., LINDHOLM, B., HERMAN, G. T., UDUPA, J. K., CHEN, L. S., (Jan.), 40-48. MARGASAHAYAM, P. S , AND MEYER, C. R. CULLIP, T. J., FREDERICKSON, R. E., GAUCH, J. M., 1986. Contributions to reconstruction and AND PIZER, S. M. 1990. Algorithms for 2D and display techniques in computerized tomogra- 3D image description based on IAS. In Pro- phy. Tech. Rep. MIPG 110, Dept. of Radiology, ceedings of the 1st Conference on Visualization Univ. of Pennsylvania. in Biomedical Computing (Atlanta, Georgia, May 22-25), IEEE Computer Society Press, pp. EDMONDS, I. R. G. 1988. The graphics supercom- 102-107. puter: Bringing visualization to technical com- puting. In Proceedings of the 9th Annual Con- DEKEL, D. 1987 A fast, high quality, medical 3D ference and Exposition of the National Com- software system. Medical Imaging I, SPIE 767, puter Graphics Association, NCGA ’88 (Newport Beach, Calif., Feb. 1-6), pp. 505-508. (Anaheim, Calif. Mar. 20-24), pp. 521-528. DE~EL, D. 1990. CAMRA Oueruiew Presentatwn. EKOULE, A., PEYRIN, F., ANU GOUTTE, R 1989. ISG Technologies, Inc., NIississauga, Ontario, Reconstruction and display of biomedical struc- Canada. tures by triangulation of 3D surfaces. SCL. Eng DEV, P., WOOD, S., WHITE, D. N., YOUNG, S. W., Med. Imag. SPIE 1137, (Paris, France, Apr. AND DUNCAN, J. P. 1983. An interactive 24-26), pp. 106-113, graphics system for planning reconstructive ELLSWORTH, D , GOOD, H., AND TEBBS, B. 1990. surgery. In Proceedings of the 4th Annual Con- Distributing display lists on a multicomputer ference and Exposition of the Natzonal Com- In Proceedings of the 1990 Symposium on puter Graphics Association, NCGA ’83 Znteractiue 3D Graphics (Snowbird, Utah, (Chicago, 111.,June 26-30), pp. 130-132. Mar. 25-28), pp 147-154. DEV., P., FELLINGHAM, L. L., VASSILIADIS, A , ENGLAND, N. 1989. Evolution of high performance WOOLSON, S. T.j WHITE, D. N., AND YOUNG, S. graphics systems. In Proceedings of Graphics L. 1984. 3D graphics for interactive surgical Interface ’89 (June), pp. 144-151. simulation and implant design. Processing and FAN, R T., TRIVE~I, S. S., FELLINGHAM, L. L., GAM- Display of Three-Dimensional Data II, SPIE BOA-ALDECO, A , AND HE~GCOc~, M W. 1987. 507, San Diego, Calif., pp. 52-57. 
Soft tissue segmentation and 3D display from DEV, P., FELLINGHAM, L L., WOOD, S. L., AND computerized tomography and magnetic reso V.MSILIAIIIS, A. 1985. A medical graphics sys- nance imaging. Medical Imagmg: Part Tu,o, tem for diagnosis and surgical planning. In SPIE 767, (Newport Beach, Calif., Feb. 1-6), Proceedings of the International Symposium: pp. 494-504 CAR ’85, Computer Assisted Radiolog.v (Berlin, FARRELL, E J., ANII ZAPPULLA, R A. 1989. Three- West Germany), Computer Assisted Radiology dimensional and biomedical Press, pp. 602-607. applications. CRC Critical Ret,. Biomed. Eng. DIEDE, T., HAGENMAIER, C. F., MIRANKER, G. S., 16, 4, 323-363. RUBINSTEIN, J. J., AND WORLEY, W. S. JR. 1988. FARRELL, E. J., YANG, W, C., AND ZAPPULLA) R. The Titan graphics supercomputer architec- 1985. Animated 3D CT imaging. IEEE Corm ture Computer 21, 9 (Sept.), 13-29. put. Graph Appl. 5, 12 (Dec.), 26-30. DIMENSIONAL MEDICINE, INc. 1989 10901 Bren FARRELL, E. J , ZAPPULLA, R., AND YANG, W. C Road East, Minnetonka, Minn. Maxiuiew 1984. Color 3-D imaging of normal and patho- Radzology Workstations. logic intracranial structures. IEEE CorrzP. DIPPE, M., AND SWENSEN, J. 1984. An adaptive Graph. Appl. 4, 10 (Sept.), 5-17. subdivision algorithm and parallel architec- FARRELL, E. J., ZAPPULLA, R A., KANTROWTTZ, A. ture for realistic image synthesis. Comput. 1986a. Planning neurosurgery with interac- Graph. 18, 3 (July), 310-319 tive 3D computer imaging. In Proceedings of

ACM Computing Surveys, Vol. 23, No. 4, December 1991 486 “ Stytz et al.

the 5th Conference on Med~cal Informatm R FUCHS, H., ABRAM, G. D., AND GRANT, E. D 1983 Salamon, and B. Blum, Eds., North- Near real-time shaded display of rigid objects. Holland, Amsterdam, Denmark, pp. 726-730. Comput. Graph. 17, 3 (Jul.), 65-69 FARRELL, E. J., ZAPPULLA, R. A., KANTROWITZ, A., FUCHS, H , GOLDFEATHER, J., HULTQUIST, J. P., SPIGELMAN, M., YANG, W. C. 1986. 3D dis- SPACH, S., AUSTIN, J. D., BROOKS, F. P. JR , play of multiple intracranial structures for E~LES, J G., AND POULTON, J. 1985. Fast treatment planning. In Proceedings of the 8th spheres, shadows, textures, transparencies, Annual Conference of the lEEE/Engineering in and image enhancements in pixel-planes. Mediczne and Biology Society (Fort Worth, Computer Graph. 29, 3 (Jul.), 111-120. Texas, Nov. 7-10), IEEE Press, pp. 1088-1091. FUCHS, H., POULTON, J., EYLES, J., AUSTIN, J , AIND FARRELL, E. J., ZAPPULLA, R. A., AND SPIGELMAN, GREER, T. 1986. Pixel-planes A parallel ar- M. 1987. Imaging tools for interpreting chitecture for raster graphics Pixel-Planes two- and three-dimensional medical data. In Project Summary, Univ. of North Carolina, Proceedings of the 8th Annual Conference and Dept of Computer Science Exposition of the National Computer Graphics FUCHS, H., POULTON, J,, EYLES J., AND GRK.ER,T Assoczatzon, NCGA ’87 (Philadelphia, PA, Mar. 1988a, Coarse-gram and fine-grain paral- 22-26), pp. 60-69. lelism m the next generatmn Pixel-Planes FLYNN, M., MATTESON, R., DICKIE, D., KEYES, J. W. graphics system. TR88-014, The Univ. of North JR,, AND BOOKSTEIN, F. 1983 Requirements Carolina at Chapel Hall, Dept of Computer for the display and analysis of three-dimen- Science and Proceedings of the International .wonal medical image data. In Proceedings of Conference and Exh lb~tion on Parallel Process- 2nd International Conference and Workshop on ing for Computer Viszon and Dlspla.v (Univer- Picture Archwmg and Communication Systems sit y of Leeds, United Kingdom, Jan. 12-15) ( PA CSII) for Mcd~cal Apphcatzons, SPIE 418. FUCHS, H., POULTON, J., EYL~s, J., GR~ER, T., HILL, (Kansas City, Missouri, May 22-25), pp. E , ISRAEL, L , ELLSWORTH, D., GoorL H., 213-224. INTERRANTE, V., MOLNAR, S., N~UMANN, U., FOLEY, J. D., VAN DAM, A., FEINESZ, S. K., .4ND RHOADES, J., TEBBS, B , AND TURIL G. 1988b HUGHES, J. F. 1990. Fundamentals oflnterac- Pixel-Planes: A parallel architecture for raster tive Computer Graphzcs, 2nd Edition Addison- g-aphics Pwel-Planes Project Summary, Umv. Wesley Publishing Company, Reading, Mass of North Carolina, Dept of Computer Science. FRWDER, G., GORDON, D., AND REYNOLDS, R A FUCHS, H., PIZER, S M., CREASY, J. L., RENNER, J. 1985a Back-to-front display of voxel-based B., AND ROSENMAN. J G. 1988c. Interactive, objects. IEEE Comput. Graph Appl 5, 1 richly cued shaded display of multiple 3D ob- (Jan ), 52-60. jects in medical images. Medical Imagmg II: FRIE~ER, G., HERMAN, G T., MEYER, C., AND UDUPA, Image Data Management and Display, SPIE J. 1985b. Large software problems for small 914, (Newport Beach, Calif, Jan. 31-Feb 5), computers: An example from medical imaging. pp 842-849. IEEE Softw 2, 5 (Sept ), 37-47. FUCHS, H., LmToY, M., AND PIZER, S M. 1989a FUCHS, H , KEDEM, Z, M., AND USELTON, S P 1977. Interactive visualization of 3D medical data. Optimal surface reconstruction from planar IEEE Comput. 22, 8 (Aug.), 46-52. contours. Commun. ACM, 20. 10 (Nov.), FUCHS, H , LEVOY, M., PIZER, S. M., AND RosE~- 693-702. MAN, J. G. 1989b, Interactive visuahzatmn FUCHS. H . KEDEIVr, Z. M , AND NAYLOR, B. 1979. 
and manipulation of 3-D medical image data Predetermining visibility priority in 3-D scenes In Proceedings of the 10th Annual Conference (prehminary report) Computer Graph. 13, 2 and Exposition of the Natzonal Computer (Aug.), 175-181. Graph ics Association, NCGA ’89 (Phda - FUCHS, H , KEDE~, Z. M., AND NA~LOR, B F 1980 delphla, Penn., Apr. 17-20), pp. 118-131 On visible surface generation by a priori tree FUCHS, H., POULTON. J , EYLES, J., GREER, T . structures Comput. Graph 14. 2 (Aug ), GOLDFEATHE~, J., ELLSWO~TH, D , MOLNAR. S , 124-133 AND TURW G 1989c Pixel-Planes 5: A hetero- FUCHS, H . PJZER, S. M., HEINZ, E R., BLOOMBERG, geneous multiprocessor graphics system using S. H., TSAI, LI-C., ANn STmcrm%rm, D C 1982a. processor-enhanced memories. Comput. Graph. Design of and image editing with a space- 23, 4 (Jul.), 79-88. filling three-dimensional display based on a FUJIMOTO, A., AND IWATA, K 1983. Jag-free im- standard raster graphics system. Process/.ng ages on raster displays. IEEE Comput. Graph. and Display of Three-Dimensional Data, SPIE, Appl. 3, 9 (Sept.), 26-34. 367 SPIE, (San Diego, Calif., Aug. 26-27), pp 117-127. FUJIMOTO, A., TANAKA, T., AND IWATA, K 1986 FUCHS, H , PIZER, S. M., TSAI, L.-C., BLOOMBERG, S ARTS: Accelerated ray-tracing system. IEEE H , AND HEINZ, E. R 1982b. Adding a true Comput Graph Appl 6, 4 (Apr ), 16-26. 3-D display to a raster graphics system IEEE GAMBOA-ALDECO, A., FELLINGH~M, L L., AND CHEN, Comput. Graph. Appl. 2, 7 (Sept.), 73-78 G. T. Y. 1986. Correlation of 3D surfaces from

ACM Computmg Surveys, Vol. 23, No. 4, December 1991 Three-Dimensional Medical Imaging “ 487

multiple modalities in medical imaging. Appli- GOLDWASSER, S. M. 1984a. A generalized object cation of Optical Instrumehtation m Medicine display processor architecture. In Proceedings XIV and Picture Archiving and Communica- of the 11th Annual International S.vmposium tion Systems ( PACS IV) for Medical Applica- on Computer Arch ztecture (Ann Arbor, Mich. tions, SPIE 626, (Newport Beach, Calif., Feb. May), IEEE Computer Society Press, pp. 38-47. 2-7), Pp. 467-473. GOLDWASSER, S. M, 1984b. A generalized object G~MBOTTO, J. P. 1986. A hierarchical segmenta- display processor architecture. IEEE Comput. tion algorithm. IEEE Eighth International Graph. Appl. 4, 10 (Oct.), pp. 43-55. Conference on Pattern Recognition, (Oct.), pp. GOLDWASSER,S. M. 1985 The voxel processor ar- 951-953. chitecture for real-time display and manipula- GARGANTINL I. 1982. Linear octrees for fast pro- tion of 3D medical objects. In Proceedings of cessing of three-dimensional objects. Comput. the 6th Annual Conference and Exposition of Graph. Image Process. 20, 4 (Dec.), 365-374. the National Computer Graphics Association, NCGA ’85 (Dallas, Tx,, Apr. 14-18), pp. 71-80. GARGANTINI, I., WALSH, T. R. S., AND Wu, O. L 1986a. Displaying a voxel-based object via GOLDWASSER, S. M 1986. Rapid techniques for linear octrees. Application of Optwal Instru- the display and manipulation of 3-D biomedical mentation in Medicine XIV and Picture Archiu- data. In Proceedings of the 7th Annual Confer- ing and Commurucation Systems (PACS IV) ence and Exposition of the National Computer for Medical Applications, vol. 626. SPIE, Graphics Association, NCGA ’86 (Anaheim, Newport Beach, Calif., pp. 460-466. Calif., May 11-15), pp. 115-149. GOLDWASSER, S. M., AND REYNOLDS, R. A. 1983 GARGANTINI, I., WALSH, T. R., AND Wu, O. L. 1986b. An architecture for the real-time display and Viewing transformations of voxel-based objects manipulation of three-dimensional objects. In via linear octrees. IEEE Comput. Graph. Appl. Proceedings of the International Conference on 6, 10 (Oct.), 12-21. Parallel Processing (Bellaire, Mich., Aug ), GEIST, D., AND VANNIER, M. W. 1989. PC-based IEEE Computer Society Press, pp. 269-274. 3-D reconstruction of medical images. Comput. GomwAse!m, S. M., AND REYNOLDS, R. A. 1987. Graph. 13, 2 (Sept.), 135-143. Real-time display and manipulation of 3-D GELBERG, L. M , KAMINS, D., AND VROOM, J. 1989a. medical objects: The voxel processor architec- VEX: A volume exploratorium: An integated ture. Comput. Vision, Graph., Image Process. toolkit for interactive volume visualization. In 39, 1-27. Proceedings of the Chapel Hill Conference on GOLDWASSER, S. M,, REYNOLDS, R. A., BAPTY, T., Volume Visualization (Chapel Hill, N.C , May BARAFF, D., SUMMERS, J., TALTON, D. A., AND 18-19), Univ. of North Carolina Press. pp. WALSH, E. 1985. Physician’s workstation with 21-26. real-time performance. IEEE Comput. Graph GELBERG, L. M., MACMANN, J. F., AND MATHIAS, C. Appl. 5, 12 (Dec.), 44-57. J, 1989b. Graphics, image processing, and the GOLDWASSER,S. M., REYNOLDS, R. A., TALTON, D., Stellar graphics supercomputer. Imaging AND WALSH, E 1986. Real-time interactive Workstations and Document Input Systems, facilities associated with a 3-D medical work- SPIE 1074, (Los Angeles, Calif., Jan. 12-20) station. Application of Optical Instrumentation pp. 89-96. m Medicine XIV and l%cture Archiuing and GEMBALLA, R., AND LINDNER, R. 1982. The multi- Communication Systems (PACS IV) for Medt- ple-write bus technique. IEEE Comput. Graph. 
cal Applications, SPIE 626, (Newport Beach, Appl. 2, 7 (Sept.), 33-41. Calif., Feb. 2-7), pp. 491-503. GIBSON, C. J. 1989. Rapid three-dimensional dis- GOLDWASSER, S. M., REYNOLDS, R. A., TALTON, play of medical data using ordered surface list D, A,, AND WALSH, E S. 1988a. High- representations. Science and Engineering of performance graphics processors for medical Proceedings of the In- Medical Imaging, SPIE 1137, (Paris, France, imaging applications. In Apr. 24-26), pp. 114-119. ternational Conference on Parallel Processing for Computer Viszon and Display (University of GLASSNER, A. S. 1984. Space subdivision for fast Leeds, United Kingdom, Jan. 12-15), pp. 1-13. ray tracing. IEEE Comput. Graph. Appl. 4, 10 GOLDWASSER,S, M., REYNOLDS, R. A., TALTON, D. (Oct.), 15-22. A., AND WALSH, E, S, 1988b. Techniques for GLASSNER, A. S. 1988. Spacetime ray tracing for the rapid display and manipulation of 3-D IEEE Comput. Graph. Appl. 8, 2 . biomedical data. Comput. Med. Imagmg (Mar.), 60-70. Graph. 12, 1 (Jan. -Feb.), 1-24. GLASSNER, A. S., Ed. 1989. An Introduction to GOLDWASSER,S. M., REYNOLDS, R. A., TALTON, D Ray Tracing. Academic Press, London, Eng- A., AND WALSH, E. S. 1989, High perfor- land. mance graphics processors for medical imag- GOLDFEATHER,J., AND FUCHS, H. 1986. Quadratic ing applications, In Parallel Processing for surface rendering on a logic-enhanced frame- and D~splay, Dew, P. M., buffer memory. IEEE Comput. Graph. Appl. Earnshaw, R. A., and Heywood, T. R. (eds.). 6, 1 (Jan.), 48-59. Addison-Wesley, Workingham, England.

ACM Computing Surveys, Vol. 23, No 4, December 1991 488 “ Stytz et al.

GONZALEZ, R. C , ANI) ~lNTZ, P 1987 Dzgital and medical images IEEE Trans. Med. Imag Image Processing, 2nd edition, Addison- M14, 1 (Mar ), 26-38. Wesley, Readmg, Mass. HEFFERMAN, P B., AND ROBB, R. A. 1985b, Dis- GORDON, D , AND REYNOLDS, R. A 1983 Image play and analysls of 4-D medical Images. In space shading of three-dimensional objects Proceedings of the International Symposium: Tech Rep MIPG 85, Dept. of Radiology, Univ. CAR ’85. Computer Assisted Radiology (Berhn, of Pennsylvania West Germany), Computer Assisted Radiology GORDON. D., .AND REYNOLDS. R. A. 1985. Image Press, pp. 583-592 space shading of 3-dimensional objects Com- HEMMY. D. C., AND LINDWIST, T R. 1987. Opti- put. Vzslon, Graph , Image Process., 29, mizing 3-D Imagmg techmques to meet chnical 361-376. requirements. In Proceedings of the 8th An- GORDON,D, AND UDUPA, J. K. 1989. Fast surface nual Conference and Exposztzon of the National tracking in three-dimensional binary images Computer Graphics Assoclatlon, NCGA ’87 Comput. Vision, Graph , Image Process 45, 2 (Philadelphia, Penn., Mar, 22-26), pp 69-80. (Feb.), 196-214 HIZRMAN. G. T 1985 Computer graphics m GORDON,M., AND DELOID, G. 1990a Advanced 3D radiology In Proceedl ngs of the Intern at[on al Image reconstruction from serial slices. In Pro- Symposium: CAR ’85, Computer Assisted ceedings of the 11th Annual Conference and RadLology (Berhn, West Germany), Computer Exposztzon of the National Computer Graphzcs Assisted Radiology Press, pp 539-550. AssocLat~on, NCGA ’90 (Anaheim, Calif., Mar HERMAN, G T. 1986. Computerized reconstruc- 19-22), pp. 176-183 tion and 3-D imaging in medicine Techmcal GORDON, M , AND DELOID, G 1990b. Surface and Report MIPG 108, Dept. of Radiology, Univ of volumetric 3-D reconstructions on an engineer- Pennsylvania ing workstation. In SCAR ’90: Proceedz ngs of HERMAN, G T . AND ABBOTT, A. H. 1989 Repro- the 10th Conference on Computer ApplzcatLons ducibility of landmark locatlons on CT-based in Radiology and the 4th Conference on Com- three-dimensional images. In Proceedings of puter Assisted RadLology (Anaheim, Calif., the 10th Annual Conference and Exposltwn of June 13- 16), Symposia Foundation Press, pp. the Natzonal Computer Graphics Asso[:latmn. 533-537 NCGA ’89 (Philadelphia, Penn Apr 17-20). GOUR~UD, H 1971. Continuous shading of curved pp 144-148. surfaces. IEEE Trans. Comput C-20, 6 (June), 623-629. HERMAN, G T., AND COIN, C G. 1980. The use of three-dimensional computer display in the GUDMUNDSSON, B , AND R.ANDEN, M 1990 Incre- study of disk disease J. Comput. A.sszsted mental generation of projections of CT-volumes. Tomogruph. 4, 4 (Aug ), pp 564-567 In Proceedings of 1st Conference on Visualzza- twn m Biomedical Computzng (Atlanta, Ga , HERMAN, G. T , AND LIU, H. K. 1977. Display of May 22-25), IEEE Computer Society Press pp. three-dimensional information in computed 27-34. tomography J Comput. Assisted Tomograph, 1, 1, 155-160 HARRIS, L. D, 1988, A varifocal mirror display integrated into a high-speed image processor HERMAN, G, T., AND LIU, H K 1978 Dynamic Three-D zmenslonal Imaging and Remote Sens- boundary surface detection. Comput Graph. ing Imaging, SPIE 902, (Los Angeles, Calif., Image Process, 7, 130-138 Jan. 14-15), pp. 2-9. HERMAN, G, T., .4ND LIu, H K. 1979. Three- HARRIS, L D., Ro~B, R A., YUEN, T. S., .AND RIT- dlmenslonal display of human organs from MAN, E L 1978. Noninvasive numerical dis- computed tomograms Comput Graph. IWLage section and display of anatomic structure using Process. 
9, 1-21 computerized X-ray tomography. Recent and HERIVSAN,G T , AND WEBSTER, D. 1983 A topolog- Future Developments in Medical Imagmg, SPIE ical proof of a surface tracking algorlthm. 152, pp. 10-18. Comput, Vmmn, Graph , Image Process 23, HARRIS, L. D., ROBB, R. A., YUEN, T S., AND 162-177. RITMAN, E. L. 1979. Display and visualiza- tion of three-dimensional reconstructed HERMAN, G. T., UDUPA, J. K.. KRAMER, D., LMJTER- anatomic morphology: Experience with the tho- BUR, P. C., RUDIN, A M , AND SCHNEIDER, J. S rax, heart, and coronary vasculature of dogs, 1982 Three-dimensional display of nuclear J Comput Assisted Tomograph. 3, 4 (Aug.), magnetic resonance images Opttcal Eng. 21, 5 439-446 (Sept.-Ott.), 923-926. HARRIS, L. D., CAMP, J. J., RITMAN, E. L., AND HERMAN, G. T., ROBERTS, D . AND RABE. B 1987 ROBB, R. A. 1986, Three-dimensional display The reduction of pseudoforamma (false holes) and analysls of tomographic volume images in computer graphic presentations for cramofa- utilizing a varifocal mirror, IEEE Trans. Med. cial surgery In Proceedings of the 8th Annaal Imag. MI-5, 2 (June), 67-72. Conference and Exposztmn of the Natloaal HEFFERMAN, P. B., AND ROBB, R. A 1985a, A new Computer Graphics Association, NCGA ’87 method for shaded surface display of biological (Phdadelphia, Penn , Mar 22-26), pp 81-85

ACM Computmg Surveys, Vol 23, No. 4, December 1991 Three-Dimensional Medical Imaging “ 489

HEYERS, V., MEINZER, H. P., SAURBIER, F., limited-view computed tomograms. Comput. SCHEPPELMANN, D., AND SCHAFER, R. 1989. Vzszon, Graph., Image Process. 30, 3 (June), Minimizing the data preparation for 3D vis- 345-361. ualization of medical image sequences. In JENSE, G. J., AND HUUSMANS, D. P. 1989, Interac- Proceedings of the International S.vmposium: tive voxel-based graphics for 3D reconstruction CAR ’89, Computer Asswted Rad~ology of biological structures. Cornput. Graph. 13, 2, (Berlin, Germany), Computer Assisted Radiol- 145-150. ogy Pressj pp. 743-746 JOHNSON, E. R., AND MOSHER, C. E. JR. 1989. In- HILTEBRAND, E. G. 1989 Hardware architecture tegration of volume rendering and geometric with transputers for fast manipulation of vol - graphics. In Proceedings of the Chapel HL1l ume data. In Parallel Processing for Computer Workshop on Volume Visualization (Chapel Viszon and Display, Dew, P. M., Earnshaw, R. Hill, N. C., May 18-19), Univ. of North Caro- A., and Heywood, T. R. (eds. ). Addison-Wesley, lina Press, pp 1-7. Reading, Mass , pp. 497-503. JOHNSON, S. A., AND ANDERSON, R. A. 1982. Clini- HODGES, L, F., AND MCALLISTER, D. F. 1985. cal varifocal mirror display system at the Uni- Stereo and alternating-pair techniques for dis- versity of Utah Processing and Display of play of computer-generated images. IEEE Three-Dimensional Data, SPIE 367, (San Diego, Comput, Graph. Appl. 5, 9 (Sept.), 38-45. Calif., Aug. 26-27), pp. 145-148 HOFFMEISTER, J. W., RINEHART, G. C., AND VAN- NIER, M. W. 1989. Three-dimensional surface KALVIN, A., ANI) PELEG, S. 1989. A 3D multireso- reconstructions using a general purpose image lution segmentation algorithm for surface Medzcal Imag- processing system. In Proceedings of the Inter- reconstruction from CT Data. SPIE 1092, national Sympos~um: CAR ’89, Computer ing III: Image Processing, Asszsted Radzology (Berlin, Germany), Com- (Newport Beach, Calif,, Jan. 31-Feb. 3), pp. puter Assisted Radiology Press, pp. 752-757. 173-182. HOHNE, K. H., AND BERNSTEIN, R. 1986 Shading KAPLAN, M. R. 1987. The use of spatial coherence 3D images from CT using gray-level gradients. in ray tracing. In Tech ntques for Computer IEEE Trans. Med. Imag. MI-5, 1 (Mar.), Graphzcs, D. F. Rogers, and R, A, Earnshaw, 45-47. (Eds.) Springer-Verlag, New York: pp. 173-190. HOHNE, K.-H., BOMANS, M., TIEDE, U., REIMER, M. 1988. Display of multiple 3D objects using KAUFMAN, A. 1987 Efficient algorithms for 3D the generalized voxel-model. Medical Imaging scan-conversion of parametric curves, surfaces, IL Image Data Management and Display, SPIE and volumes. IEEE Comput. Gr[zph. 21, 4 914, (Newport Beach, Calif., Jan. 31-l?eb. 5), (July), 171-179. pp. 850-854, KAUFMAN, A. 1988a. Efficient algorithms for EIu, X , TAN, K. K., LEVIN, D. N., GALHOTRA, S. G., scan-converting 3D polygons. Comput. Graph. PELIZZARI, C A , CHEN, G. T. Y., BECK, R. N., 12, 2, 213-219. CHEN, C -T., AND COOPER, M. D. 1989. Volu- KAUFMAN, A. 1988b. The CUBE workstation: A metric rendering of multimodality, multivari- 3-D voxel-based graphics envu-onment. Vzsual able medical imaging data. In Proceedings of Comput. 4, 4 (Oct.), 210-221 the Chapel Hill Workshop on Volume Visual- KAUFMAN, A. 1988c. The CUBE three-dimen- ization (Chapel Hill, N. C., May 18– 19, Univ. of sional workstation. In Proceedings of the 9th North Carolina Press, pp. 45-49 Annual Conference and Exposition of the Na HUMMEL, R. A. 1975. Histogram modification tional Computer Graph tcs Assoclatlon. NCGA techniques. Comput. Graph. Image Process. 
4, ’88 (Anaheim, Calif., Mar. 20-24), pp. 10, 209-224. 344-354. ISG TECHNOLOGIES, INC. 1990a. CAMRA Allegro: KAUFMAN, A. 1989, The voxblt engine: A voxel A Technical Oueruiezo, ISG Technologies, Inc , frame buffer processor. In Aduances in Graph- Mississauga, Ontario, Canada. ics Hardware III, Kuijk, F , and Strasser, W ISG TECHNOLOGIES,INC. 1990b. CAMRA Allegro: (eds.), Springer-Verlag, Berlin, Germany. A Full-Solution 2 D/3D Workstation, lSG KAUFMAN, A., AND BAKALASH, R. 1985. A 3-D cel- Technologies, Inc., Mississauga, Ontario, lular frame buffer. In Proceedings of EURO- Canada. GRAPHICS ’85, Vandoni, C. E., (cd.), Elsevier JAC~ELj D. 1985. The graphics Parcum system: A Science Publishers B. V. (North-Holland). 3D memory based computer architecture for Amsterdam, The Netherlands, pp. 215-220. processing and display of solid models. Corn- KAUFMAN, A., AND BAKALASH, R 1988. Memory put. Graph. Forum 4, 21-32. and processing architecture for 3D voxel-based JACKINS, C. L., AND TANIMOTO, S. L. 1980. Oct- imagery. IEEE Comput. Graph. Appl 8, 6 trees and their use in representing three- (Nov.), 10-23 dimensional objects. Comput. Graph. Image KAUFMAN, A., AND BAKALASH, R. 1989a. Parallel Process. 14, 249-270. processing for 3D voxel-based graphics. In JAMAN, K. A., GORDON, R,, AND RANGAYYAN, R. M. Parallel Processing for Computer Vision 1985. Display of 3D anisotropic images from and Dw-play, Dew, P. M., Earnshaw, R. A.,

ACM Computing Surveys, Vol. 23, No. 4, December 1991 490 * Stytz et al.

and Heywood, T. R. (eds.), Addison-Wesley, Wokingham, England.
KAUFMAN, A., AND BAKALASH, R. 1989b. The CUBE system as a 3D medical workstation. Three-Dimensional Visualization and Display Technologies, SPIE 1083, (Los Angeles, Calif., Jan. 18-20), pp. 189-194.
KAUFMAN, A., AND BAKALASH, R. 1990. Viewing and rendering processor for a volume visualization system (extended abstract). In Advances in Graphics Hardware IV, Grimsdale, R. L., and Strasser, W. (eds.), Springer-Verlag, Berlin, Germany.
KAUFMAN, A., AND SHIMONY, E. 1986. 3D scan-conversion algorithms for voxel-based graphics. In Proceedings of the ACM Workshop on Interactive 3D Graphics (Chapel Hill, N.C., Oct. 23-24), pp. 45-75.
KLEIN, F., AND KUEBLER, O. 1985. A prebuffer algorithm for instant display of volume data. Architectures and Algorithms for Digital Image Processing, SPIE 596, (Cannes, France, Dec. 5-6), pp. 54-58.
KLIEGIS, U., NEUMANN, R., KORTMANN, TH., SCHWESIG, W., MITTELSTADT, R., WEIGEL, H., AND ZENKER, W. 1989. Fast three dimensional visualization using a parallel computing system. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 747-751.
KRAMER, D. M., KAUFMAN, L., GUZMAN, R. J., AND HAWRYSZKO, C. 1990. A general algorithm for oblique image reconstruction. IEEE Comput. Graph. Appl. 10, 2 (Mar.), 62-65.
KUNII, T. L., SATOH, T., AND YAMAGUCHI, K. 1985. Generation of topological boundary representations from octree encoding. IEEE Comput. Graph. Appl. 5, 3 (Mar.), 29-38.
LANE, B. 1982. Stereoscopic displays. Processing and Display of Three-Dimensional Data, SPIE 367, (San Diego, Calif., Aug. 26-27), pp. 20-32.
LANG, P., STEIGER, P., AND LINDQUIST, T. 1990. Three-dimensional reconstruction of magnetic resonance images: Comparison of surface and volume rendering techniques. SCAR '90: Proceedings of the 10th Conference on Computer Applications in Radiology and the 4th Conference on Computer Assisted Radiology (Anaheim, Calif., June 13-16), Symposia Foundation Pub., pp. 520-526.
LEE, M. E., AND REDNER, R. A. 1990. A note on the use of nonlinear filtering in computer graphics. IEEE Comput. Graph. Appl. 10, 3 (May), 23-29.
LEMKE, H. U., BOSING, K., ENGELHORN, M., JACKEL, D., KNOBLOCH, B., SCHARNWEBER, H., STIEHL, H. S., AND TONNIES, K. D. 1987. 3-D computer graphic work station for biomedical information modeling and display. Medical Imaging, SPIE 767, (Newport Beach, Calif., Feb. 1-6), pp. 586-592.
LENZ, R., GUDMUNDSSON, B., AND LINDSKOG, B. 1985. Manipulation and display of 3-D images for surgical planning. Medical Image Processing, SPIE 593, (Cannes, France, Dec. 2-3), pp. 96-102.
LENZ, R., GUDMUNDSSON, B., LINDSKOG, B., AND DANIELSSON, P. E. 1986. Display of density volumes. IEEE Comput. Graph. Appl. 6, 7 (July), 20-29.
LEVINTHAL, A., AND PORTER, T. 1984. Chap: A SIMD graphics processor. Comput. Graph. 18, 3 (July), 77-82.
LEVOY, M. 1988a. Display of surfaces from volume data. IEEE Comput. Graph. Appl. 8, 5 (May), 29-37.
LEVOY, M. 1988b. Direct visualization of surfaces from computed tomography data. Medical Imaging II: Image Data Management and Display, SPIE 914, (Newport Beach, Calif., Jan. 31-Feb. 5), pp. 828-841.
LEVOY, M. 1989a. Display of Surfaces from Volume Data. Ph.D. Dissertation, Dept. of Computer Science, University of North Carolina at Chapel Hill.
LEVOY, M. 1989b. Design for a real-time high-quality volume rendering workstation. In Proceedings of the Chapel Hill Workshop on Volume Visualization (Chapel Hill, N.C.), Univ. of North Carolina Press, pp. 85-92.
LEVOY, M. 1990a. A hybrid ray tracer for rendering polygon and volume data. IEEE Comput. Graph. Appl. 10, 2 (Mar.), 33-40.
LEVOY, M. 1990b. Efficient ray tracing of volume data. ACM Trans. Graph. 9, 3 (July), 245-261.
LEVOY, M., FUCHS, H., PIZER, S. M., ROSENMAN, J., CHANEY, E. L., SHEROUSE, G. W., INTERRANTE, V., AND KIEL, J. 1990. Volume rendering in radiation treatment planning. First Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 4-8.
LIN, W.-C., AND CHEN, S.-Y. 1989. A new surface interpolation technique for reconstructing 3D objects from serial cross-sections. Comput. Vision, Graph., Image Process. 48, 1 (Oct.), 124-143.
LIN, W. C., LIANG, C. C., AND CHEN, C. T. 1988. Dynamic elastic interpolation for 3-D object reconstruction from serial cross-sectional images. IEEE Trans. Med. Imag. 3, 3 (Sept.), 225-232.
LINDQUIST, T. R. 1990. Personal communication with authors. Dimensional Medicine, Inc., 10901 Bren Road East, Minnetonka, MN. December, 1990.
LIPSCOMB, J. S. 1989. Experience with stereoscopic display devices and output algorithms. Three-Dimensional Visualization and Display Technologies, SPIE 1083, (Los Angeles, Calif., Jan. 18-20), pp. 28-34.
LIU, H. K. 1977. Two- and three-dimensional boundary detection. Comput. Graph. Image Process. 6, 123-134.


LORENSEN, W. E., AND CLINE, H. E. 1987. Marching cubes: A high resolution 3D surface construction algorithm. Comput. Graph. 21, 4 (July), 163-169.
LOW, K.-C., AND COGGINS, J. M. 1990. Biomedical image segmentation using multiscale orientation fields. First Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 378-384.
MAMMEN, A. 1989. Transparency and antialiasing algorithms implemented with the virtual pixel maps technique. IEEE Comput. Graph. Appl. 9, 4 (July), 43-55.
MAO, X., KUNII, T. L., FUJISHIRO, I., AND NOMA, T. 1987. Hierarchical representations of 2D/3D gray-scale images and their 2D/3D two-way conversion. IEEE Comput. Graph. Appl. 7, 12 (Dec.), 37-44.
MAX, N. L. 1990. Antialiasing scan-line data. IEEE Comput. Graph. Appl. 10, 1 (Jan.), 18-30.
MEAGHER, D. 1982a. Geometric modeling using octree encoding. Comput. Graph. Image Process. 19, 129-147.
MEAGHER, D. 1982b. Efficient synthetic image generation of arbitrary 3D objects. In Proceedings of the 1982 IEEE Computer Society Conference on Pattern Recognition and Image Processing (Las Vegas, Nev., June 14-17), IEEE Computer Society Press, pp. 473-478.
MEAGHER, D. J. 1985. Applying solids processing methods to medical planning. In Proceedings of the 6th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '85 (Dallas, Tex., May 14-18), pp. 101-109.
MILLS, P. H., FUCHS, H., AND PIZER, S. M. 1984. High-speed interaction on a vibrating-mirror 3D display. Processing and Display of Three-Dimensional Data II, SPIE 507, (San Diego, Calif., Aug. 23-24), pp. 93-101.
MITCHELL, D. P. 1987. Generating antialiased images at low sampling densities. Comput. Graph. 21, 4 (July), 65-72.
MORRISSEY, T. F. 1990. Programmers hierarchical interactive graphics system PHIGS et al. In Proceedings of the 11th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '90 (Anaheim, Calif., Mar. 19-22), pp. 666-690.
MOSHER, C. E., AND VAN HOOK, T. 1990. A geometric approach for rendering volume data. In Proceedings of the 11th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '90 (Anaheim, Calif., Mar. 19-22), pp. 184-193.
NAZIF, A. M., AND LEVINE, M. D. 1984. Low level image segmentation: An expert system. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-6, 5 (Sept.), 555-577.
NELSON, A. C., KIM, Y., HARALICK, R. M., ANDERSON, P. A., JOHNSON, R. H., AND DESOTO, L. A. 1988. 3-D display of magnetic resonance imaging of the spine. Three-Dimensional Imaging and Remote Sensing Imaging, SPIE 902, (Los Angeles, Calif., Jan. 14-15), pp. 103-112.
NEY, D., FISHMAN, E. K., AND MAGID, D. 1990a. Three dimensional imaging of computed tomography: Principles and applications. First Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 498-506.
NEY, D. R., FISHMAN, E. K., MAGID, D., AND DREBIN, R. A. 1990b. Volumetric rendering of computed tomography data: Principles and techniques. IEEE Comput. Graph. Appl. 10, 2 (Mar.), 24-32.
OHASHI, T., UCHIKI, T., AND TOKORO, M. 1985. A three-dimensional shaded display method for voxel-based representation. In Proceedings of EUROGRAPHICS '85, Vandoni, C. E. (ed.), Elsevier Science Publishers B.V. (North-Holland), Amsterdam, The Netherlands, pp. 221-232.
PELIZZARI, C. A., CHEN, G. T. Y., SPELBRING, D. R., WEICHSELBAUM, R. R., AND CHEN, C.-T. 1989. Accurate three-dimensional registration of CT, PET, and/or MR images of the brain. J. Comput. Assist. Tomograph. 13, 1, 20-26.
PHONG, B. T. 1975. Illumination for computer generated pictures. Commun. ACM 18, 6 (June), 311-317.
PIXAR CORPORATION. 1988a. ChapLibraries Technical Summary for the UNIX Operating System.
PIXAR CORPORATION. 1988b. ChapTools Technical Summary for the UNIX Operating System.
PIXAR CORPORATION. 1988c. Pixar II Hardware Overview.
PIXAR CORPORATION. 1988d. Chap Image Processing System Technical Summary.
PIXAR CORPORATION. 1989. Chap Volumes Volume Rendering Package Technical Summary.
PIZER, S. M., AND FUCHS, H. 1987. Three dimensional image presentation techniques in medical imaging. In Proceedings of the International Symposium: CAR '87, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 589-598.
PIZER, S. M., FUCHS, H., HEINZ, E. R., STAAB, E. V., CHANEY, E. L., ROSENMAN, J. G., AUSTIN, J. D., BLOOMBERG, S. H., MACHARDY, E. T., MILLS, P. H., AND STRICKLAND, D. C. 1983. Interactive 3D display of medical images. In Proceedings of the 8th Conference on Information Processing in Medical Imaging (Brussels, Belgium, Aug. 29-Sept. 2), pp. 513-526.
PIZER, S. M., ZIMMERMAN, J. B., AND STAAB, E. V. 1984. Adaptive grey level assignment in CT scan display. J. Comput. Assist. Tomograph. 8, 2 (Apr.), 300-305.
PIZER, S. M., AUSTIN, J. D., PERRY, J. R., SAFRIT, H. D., AND ZIMMERMAN, J. B. 1986. Adaptive histogram equalization for automatic contrast enhancement of medical images. Application of Optical Instrumentation in Medicine XIV and Picture Archiving and Communication Systems (PACS IV) for Medical Applications, SPIE 626, (Newport Beach, Calif., Feb. 2-7), pp. 242-250.


PIZER, S. M., AMBURN, E. P., AUSTIN, J. D., CROMARTIE, R., GESELOWITZ, A., GREER, T., ROMENY, B. TER H., ZIMMERMAN, J. B., AND ZUIDERVELD, K. 1987. Adaptive histogram equalization and its variations. Comput. Vision, Graph., Image Process. 39, 3 (Sept.), 355-368.
PIZER, S. M., GAUCH, J. M., AND LIFSHITZ, L. M. 1988. Interactive 2D and 3D object definition in medical images based on multiresolution image descriptions. Medical Imaging II: Image Data Management and Display, SPIE 914, (Newport Beach, Calif., Jan. 31-Feb. 5), pp. 438-444.
PIZER, S. M., LEVOY, M., FUCHS, H., AND ROSENMAN, J. G. 1989a. Volume rendering for display of multiple organs, treatment objects, and image intensities. Science and Engineering of Medical Imaging, SPIE 1137, (Paris, France, Apr. 24-26), pp. 92-97.
PIZER, S. M., FUCHS, H., LEVOY, M., ROSENMAN, J. G., DAVIS, R. E., AND RENNER, J. B. 1989b. 3D display with minimal predefinition. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 724-736.
PIZER, S. M., GAUCH, J. M., CULLIP, T. J., AND FREDERICKSON, R. E. 1990a. Descriptions of image intensity structure via scale and symmetry. First Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 94-101.
PIZER, S. M., JOHNSTON, R. E., ERICKSEN, J. P., YANKASKAS, B. C., AND MULLER, K. E. 1990b. Contrast-limited adaptive histogram equalization: Speed and effectiveness. First Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 337-345.
POMMERT, A., TIEDE, U., WIEBECKE, G., AND HOHNE, K. H. 1989. Image quality in voxel-based surface shading. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 737-741.
POMMERT, A., TIEDE, U., WIEBECKE, G., AND HOHNE, K. H. 1990. Surface shading in tomographic volume visualization: A comparative study. First Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 19-26.
PORTER, T., AND DUFF, T. 1984. Compositing digital images. Comput. Graph. 18, 3 (July), 253-259.
POTMESIL, M., AND HOFFERT, E. M. 1989. The pixel machine: A parallel image computer. Comput. Graph. 23, 3 (July), 69-78.
POULTON, J., FUCHS, H., AUSTIN, J., EYLES, J., AND GREER, T. 1987. Building a 512 x 512 Pixel-Planes system. 1987 Conference on Advanced Research in VLSI (Stanford University, Palo Alto, Calif.), preprint.
RAMAN, S. V., SARKAR, S., AND BOYER, K. L. 1990. Tissue boundary refinement in magnetic resonance images using contour-based scale space matching. SCAR '90: Proceedings of the 10th Conference on Computer Applications in Radiology and the 4th Conference on Computer Assisted Radiology (Anaheim, Calif., June 13-16), Symposia Foundation Press, pp. 618-624.
RAYA, S. P. 1989a. Low-level segmentation of 3-D magnetic resonance brain images: An expert system. Visual Communications and Image Processing IV, SPIE 1199, (Philadelphia, Penn., Nov. 8-10), pp. 913-919.
RAYA, S. P. 1989b. Rule-based segmentation of multi-dimensional medical images. In Proceedings of the 10th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '89 (Philadelphia, Penn., Apr. 17-20), pp. 193-198.
RAYA, S. P. 1990a. Low-level segmentation of 3-D magnetic resonance brain images: A rule based system. IEEE Trans. Med. Imag. 9, 3 (Sept.), 327-337.
RAYA, S. P. 1990b. SOFTVU: A software package for multidimensional medical image analysis. Medical Imaging IV: Image Capture and Display, SPIE 1232, (Newport Beach, Calif., Feb. 4-5), pp. 162-166.
RAYA, S. P., AND UDUPA, J. K. 1990. Shape-based interpolation of multidimensional objects. IEEE Trans. Med. Imag. 9, 1 (Mar.), 32-42.
RAYA, S. P., UDUPA, J. K., AND BARRETT, W. A. 1990. A PC-based 3D imaging system: Algorithms, software, and hardware considerations. Comput. Med. Imag. Graph. 14, 5, 353-370.
REALITY IMAGING CORPORATION. 1990a. Voxel Flinger 3-D Imaging System OEM Integration Guide. Reality Imaging Corporation.
REALITY IMAGING CORPORATION. 1990b. Voxel Flinger System Overview. Reality Imaging Corporation.
REYNOLDS, R. A. 1983a. Simulation of 3D display systems. Dept. of Radiology, Univ. of Pennsylvania, unpublished report.
REYNOLDS, R. A. 1983b. Some architectures for real-time display of three-dimensional objects: A comparative survey. Tech. Rep. MIPG 84, Dept. of Radiology, Univ. of Pennsylvania.
REYNOLDS, R. A. 1985. Fast Methods for 3D Display of Medical Objects. Ph.D. Dissertation, Dept. of Computer and Information Science, Univ. of Pennsylvania.
REYNOLDS, R. A., GORDON, D., AND CHEN, L.-S. 1987. A dynamic screen technique for shaded graphics display of slice-represented objects. Comput. Vision, Graph., Image Process. 38, 3 (June), 275-298.


REYNOLDS, R. A., GOLDWASSER, S. M., DONHAM, M. E., WYATT, E. D., PETERSON, M. A., WIDERMAN, L. A., TALTON, D. A., AND WALSH, E. S. 1990. VOXELSCOPE: A system for the exploration of multidimensional medical data. Nordic Workshop on Visualization and Analysis of Scientific and Technical Computations on High-Performance Computers (Umea, Sweden, Sept. 25-28), preprint.
RHODES, M. L. 1979. An algorithmic approach to controlling search in three-dimensional image data. Comput. Graph. 13, 2 (Aug.), 134-141.
RHODES, M. L., GLENN, W. V., AND AZZAWI, Y. M. 1980. Extracting oblique planes from serial CT sections. J. Comput. Assist. Tomograph. 4, 5 (Oct.), 649-657.
RICHARDS, J. A. 1986. Remote Sensing and Digital Image Analysis: An Introduction. Springer-Verlag, Berlin, Germany.
ROBB, R. A. 1985. Three-Dimensional Biomedical Imaging, vol. 1. CRC Press, Boca Raton, Fla.
ROBB, R. A. 1987. A workstation for interactive display and analysis of multidimensional biomedical images. In Proceedings of the International Symposium: CAR '87, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 642-656.
ROBB, R. A., AND BARILLOT, C. 1988. Interactive 3-D image display and analysis. In Proceedings of the Society of Photo-Optical Instrumentation Engineers: Hybrid Image and Signal Processing, 939, Casasent, D. P., and Tescher, A. G. (eds.), (Orlando, Fla.), pp. 173-202.
ROBB, R. A., AND BARILLOT, C. 1989. Interactive display and analysis of 3-D medical images. IEEE Trans. Med. Imag. 8, 3 (Sept.), 217-226.
ROBB, R. A., AND HANSON, D. P. 1990. ANALYZE: A software system for biomedical image analysis. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 507-518.
ROBB, R. A., HEFFERMAN, P. B., CAMP, J. J., AND HANSON, D. P. 1986. A workstation for interactive display and quantitative analysis of 3D and 4D biomedical images. In Proceedings of the 10th Annual Symposium on Computer Applications in Medical Care (Washington, D.C., Oct. 25-26), pp. 240-256.
ROBERTSON, B. 1986. PIXAR goes commercial in a new market. Comput. Graph. World 9, 6 (June), 61-70.
ROESE, J. A. 1984. Stereoscopic electro-optic shutter CRT displays: A basic approach. Processing and Display of Three-Dimensional Data II, SPIE 507, (San Diego, Calif., Aug. 23-24), pp. 102-107.
ROGERS, D. F. 1985. Procedural Elements for Computer Graphics. McGraw-Hill, New York.
ROUNDS, E. M., AND SUTTY, G. 1979. Segmentation based on second-order statistics. Image Understanding Systems II, SPIE 205, pp. 126-132.
SAMET, H. 1989. Neighbor finding in images represented by octrees. Comput. Vision, Graph., Image Process. 46, 3 (June), 367-386.
SAMET, H. 1990. The Design and Analysis of Spatial Data Structures. Addison-Wesley, Reading, Mass.
SCHIERS, C., TIEDE, U., AND HOHNE, K. H. 1989. Interactive 3D registration of image volumes from different sources. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 666-670.
SEITZ, P., AND RUEGSEGGER, P. 1983. Fast contour detection algorithm for high precision quantitative CT. IEEE Trans. Med. Imag. MI-2, 3 (Sept.), 136-141.
SHANTZ, M. 1981. Surface definition for branching, contour defined objects. Comput. Graph. 15, 2 (July), 242-270.
SHILE, P. E., CHWIALKOWSKI, M. P., PFEIFER, D., PARKEY, R. W., AND PESHOCK, R. M. 1989. Automated identification of the spine in magnetic resonance images: A reference point for automated processing. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 678-690.
SPORER, M., MOSS, F. H., AND MATHIAS, C. J. 1988. An introduction to the architecture of the Stellar graphics supercomputer. In Proceedings of the 33rd IEEE Computer Society International Conference (San Francisco, Calif., Feb. 29-Mar. 4), pp. 464-467.
SPRINGER, D. 1986. PIXAR image computer revolutionizes industrial design. Intell. Instrum. Comput. (Nov./Dec.), 276-277.
SRIHARI, S. N. 1981. Representation of three-dimensional digital images. Comput. Surv. 13, 4 (Dec.), 399-424.
STANSFIELD, S. A. 1986. ANGY: A rule-based expert system for automatic segmentation of coronary vessels from digital subtracted angiograms. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 2 (Mar.), 188-199.
STEDING, T. L. 1989. A new-generation imaging platform with integrated pixel-polygon processing: The Titan graphics supercomputer. Imaging Workstations and Document Input Systems, SPIE 1074, (Los Angeles, Calif., Jan. 17-20), pp. 42-45.
STEIGER, P. 1988. Computer graphics for highly reproducible quantitative medical image processing. In Proceedings of the 9th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '88 (Anaheim, Calif., Mar. 20-24), pp. 178-187.
STELLAR COMPUTER, INC. 1989. GS2000 Series Backgrounder.
STOVER, H. 1982. True three-dimensional display of computer data. Processing and Display of Three-Dimensional Data, SPIE 367, (San Diego, Calif., Aug. 26-27), pp. 141-144.


STOVER, H., AND FLETCHER, J. 1983. True three-dimensional display of computer generated images. Three-Dimensional Imaging, SPIE 402, (Geneva, Switzerland, Apr. 21-22), pp. 166-169.
STYTZ, M. R. 1989. Three-dimensional medical image analysis using local dynamic algorithm selection on a multiple-instruction, multiple-data architecture. Ph.D. Dissertation, Dept. of Electrical Engineering and Computer Science, Univ. of Michigan.
STYTZ, M. R., AND FRIEDER, O. 1991. Computer systems for three-dimensional diagnostic imaging: An examination of the state of the art. CRC Crit. Rev. Biomed. Eng. 19, 1 (Aug.), 1-46.
STYTZ, M. R., AND FRIEDER, O. 1990. Three-dimensional medical imaging modalities: An overview. CRC Crit. Rev. Biomed. Eng. 18, 1 (July), 1-26.
SUETENS, P., OOSTERLINCK, A., GYBELS, J., VANDERMUELEN, D., AND MARCHAL, G. 1987. A pseudo-holographic display system for integrated 3-D medical images. Medical Imaging I, SPIE 767, (Newport Beach, Calif., Feb. 1-6), pp. 606-613.
SUN MICROSYSTEMS, INC. 1988a. TAAC-1 Application Accelerator Technical Notes.
SUN MICROSYSTEMS, INC. 1988b. Sun's Software Overview.
SUN MICROSYSTEMS, INC. 1988c. Sun's TAAC-1 Application Accelerator.
TAMMINEN, M., AND SAMET, H. 1984. Efficient octree conversion by connectivity labeling. Comput. Graph. 18, 3 (July), 43-51.
TALTON, D. A., GOLDWASSER, S. M., REYNOLDS, R. A., AND WALSH, E. S. 1987. Volume rendering algorithms for the presentation of 3D medical data. In Proceedings of the 8th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '87 (Philadelphia, Penn., Mar. 22-26), pp. 119-128.
TIEDE, U., HOHNE, K. H., AND RIEMER, M. 1987. Comparison of surface rendering techniques for 3-D tomographic objects. In Proceedings of the International Symposium: CAR '87, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 599-610.
TIEDE, U., RIEMER, M., BOMANS, M., AND HOHNE, K. H. 1988. Display techniques for 3-D tomographic volume data. In Proceedings of the 9th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '88 (Anaheim, Calif., Mar. 20-24), pp. 188-197.
TIEDE, U., HOEHNE, K. H., BOMANS, M., POMMERT, A., RIEMER, M., AND WIEBECKE, G. 1990. Investigation of medical 3D-rendering algorithms. IEEE Comput. Graph. Appl. 10, 2 (Mar.), 41-53.
TOENNIES, K. D. 1989. Surface triangulation by linear interpolation in intersecting planes. Science and Engineering of Medical Imaging, SPIE 1137, (Paris, France, Apr. 24-26), pp. 98-105.
TOENNIES, K. D. 1990. Automatic contour generation in binary scenes. SCAR '90: Proceedings of the 10th Conference on Computer Applications in Radiology and the 4th Conference on Computer Assisted Radiology (Anaheim, Calif., June 13-16), Symposia Foundation Press, pp. 633-639.
TOENNIES, K. D., HERMAN, G. T., AND UDUPA, J. K. 1989. Surface registration for the segmentation of implanted bone grafts. In Proceedings of the International Symposium: CAR '89, Computer Assisted Radiology (Berlin, Germany), Computer Assisted Radiology Press, pp. 381-385.
TOENNIES, K. D., UDUPA, J. K., HERMAN, G. T., WORNOM, I. L. III, AND BUCHMAN, S. R. 1990. Registration of 3D objects and surfaces. IEEE Comput. Graph. Appl. 10, 2 (Mar.), 52-62.
TRIVEDI, S. S., HERMAN, G. T., AND UDUPA, J. K. 1986. Segmentation into three classes using gradients. IEEE Trans. Med. Imag. MI-5, 2 (June), 116-119.
TRONNIER, U., HASENBRINK, F., AND WITTICH, A. 1990. An experimental environment for true 3D surgical planning by simulation: Methods and first experimental results. SCAR '90: Proceedings of the 10th Conference on Computer Applications in Radiology and the 4th Conference on Computer Assisted Radiology (Anaheim, Calif., June 13-16), Symposia Foundation Press, pp. 594-601.
TUOMENOKSA, D. L., ADAMS, G. B. III, SIEGEL, H. J., AND MITCHELL, O. R. 1983. A parallel algorithm for contour extraction: Advantages and architectural implications. In Proceedings of the 1983 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Washington, D.C., June 18-23), IEEE Computer Society Press, pp. 336-344.
TUY, H. K., AND TUY, L. T. 1984. Direct 2-D display of 3-D objects. IEEE Comput. Graph. Appl. 4, 10 (Oct.), 29-33.
UDUPA, J. K. 1982. Interactive segmentation and boundary surface formation for 3-D digital images. Comput. Graph. Image Process. 18, 213-235.
UDUPA, J. K. 1985. Display and analysis of 3D medical images using directed contours. In Proceedings of the 6th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '85 (Dallas, Tex., May 14-18), pp. 145-155.
UDUPA, J. K. 1988. Computer graphics in surgical planning. In Proceedings of the 9th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '88 (Anaheim, Calif., Mar. 20-24), pp. 67-77.


UDUPA, J. K. 1989. Computer aspects of 3D imaging in medicine: A tutorial. Tech. Rep. MIPG 153, Dept. of Radiology, Univ. of Pennsylvania.
UDUPA, J. K., AND AJJANAGADDE, V. G. 1990. Boundary and object labeling in three-dimensional images. Comput. Vision, Graph., Image Process. 51, 355-369.
UDUPA, J. K., AND HERMAN, G. T., eds. 1990. 3D Imaging in Medicine. CRC Press, Boca Raton, Fla.
UDUPA, J. K., AND HUNG, H.-M. 1990a. Surface versus volume rendering: A comparative assessment. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 83-91.
UDUPA, J. K., AND HUNG, H.-M. 1990b. A comparison of surface and volume rendering methods. SCAR '90: Proceedings of the 10th Conference on Computer Applications in Radiology and the 4th Conference on Computer Assisted Radiology (Anaheim, Calif., June 13-16), Symposia Foundation Press, pp. 464-470.
UDUPA, J. K., AND ODHNER, D. 1990. Interactive surgical planning: High-speed object rendition and manipulation without specialized hardware. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 330-336.
UDUPA, J. K., AND TUY, H. K. 1983. Display of 3D information in medical images using directed-contour representation. In Proceedings of the 4th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '83 (Chicago, Ill., June 26-30), pp. 98-105.
UDUPA, J. K., SRIHARI, S. N., AND HERMAN, G. T. 1982. Boundary detection in multidimensions. IEEE Trans. Patt. Anal. Mach. Intell. PAMI-4, 1 (Jan.), 41-50.
UDUPA, J. K., RAYA, S. P., AND BARRETT, W. A. 1990. A PC-based 3D imaging system for biomedical data. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 295-303.
UPSON, C., AND KEELER, M. 1988. V-BUFFER: Visible volume rendering. Comput. Graph. 22, 4 (Aug.), 59-64.
VINCKEN, K. L., DEGRAAF, C. N., KOSTER, A. S. E., VIERGEVER, M. A., APPELMAN, F. J. R., AND TIMMENS, G. R. 1990. Multiresolution segmentation of 3D images by the hyperstack. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 115-122.
VITAL IMAGES, INC. 1989. The VOXEL VIEW Interactive Volume Rendering System.
WALSER, R. L., AND ACKERMAN, L. V. 1977. Determination of volume from computerized tomograms: Finding the volume of fluid-filled brain cavities. J. Comput. Assist. Tomograph. 1, 1, 117-130.
WATT, A. 1989. Fundamentals of Three-Dimensional Computer Graphics. Addison-Wesley, Reading, Mass.
WEISBURN, B. A., PATNAIK, S., AND FELLINGHAM, L. L. 1986. An interactive graphics editor for 3D surgical simulation. Application of Optical Instrumentation in Medicine XIV and Picture Archiving and Communication Systems (PACS IV) for Medical Applications, SPIE 626, (Newport Beach, Calif., Feb. 2-7), pp. 483-490.
WENG, J., AND AHUJA, N. 1987. Octrees of objects in arbitrary motion: Representation and efficiency. Comput. Vision, Graph., Image Process. 39, 2 (Aug.), 167-185.
WHITTED, T. 1980. An improved illumination model for shaded display. Commun. ACM 23, 6 (June), 343-349.
WILLIAMS, R. D., AND GARCIA, F., JR. 1989. Volume visualization displays. Inf. Display 5, 4 (Apr.), 8-10.
WIXSON, S. E. 1989. True volume visualization of medical data. In Proceedings of the Chapel Hill Workshop on Volume Visualization (Chapel Hill, N.C., May 18-19), Univ. of North Carolina Press, pp. 77-83.
WOOD, S. L., AND FELLINGHAM, L. L. 1985. Solid modeling and display techniques for CT and MR images. In Proceedings of the International Symposium: CAR '85, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 612-617.
WOOD, S. L., DEV, P., DUNCAN, J. P., WHITE, D. N., YOUNG, S. W., AND KAPLAN, E. N. 1983. A computerized system for planning reconstructive surgery. In Proceedings of the 5th Annual IEEE Engineering in Medicine and Biology Society Conference on Frontiers of Engineering and Computing in Health Care - 1983 (Columbus, Ohio, Sept. 10-12), pp. 156-160.
XU, S. B., AND LU, W. X. 1988. Surface reconstruction of 3D objects in computerized tomography. Comput. Vision, Graph., Image Process. 44, 3 (Dec.), 270-278.
YASUDA, T., HASHIMOTO, Y., YOKOI, S., AND TORIWAKI, J.-I. 1990. Computer system for craniofacial surgical planning based on CT images. IEEE Trans. Med. Imag. 9, 3 (Sept.), 270-280.
YOKOI, S., YASUDA, T., HASHIMOTO, Y., TORIWAKI, J.-I., FUJIOKA, M., AND NAKAJIMA, H. 1987. A craniofacial surgical planning system. In Proceedings of the 8th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '87 (Philadelphia, Penn., Mar. 22-26), pp. 152-161.

BIBLIOGRAPHY

Farrell Colored-Range Method Bibliography

FARRELL, E. J., AND ZAPPULLA, R. A. 1989. Three-dimensional data visualization and biomedical applications. CRC Crit. Rev. Biomed. Eng. 16, 4, 323-363.


FARRELL, E. J., YANG, W. C., AND ZAPPULLA, R. 1985. Animated 3D CT imaging. IEEE Comput. Graph. Appl. 5, 12 (Dec.), 26-30.
FARRELL, E. J., ZAPPULLA, R., AND YANG, W. C. 1984. Color 3-D imaging of normal and pathologic intracranial structures. IEEE Comput. Graph. Appl. 4, 10 (Sept.), 5-17.
FARRELL, E. J., ZAPPULLA, R. A., AND KANTROWITZ, A. 1986a. Planning neurosurgery with interactive 3D computer imaging. In Proceedings of the 5th Conference on Medical Informatics, Salamon, R., and Blum, B. (eds.), North-Holland, Amsterdam, The Netherlands, pp. 726-730.
FARRELL, E. J., ZAPPULLA, R. A., KANTROWITZ, A., SPIGELMAN, M., AND YANG, W. C. 1986b. 3D display of multiple intracranial structures for treatment planning. In Proceedings of the 8th Annual Conference of the IEEE/Engineering in Medicine and Biology Society (Fort Worth, Tex., Nov. 7-10), IEEE Press, pp. 1088-1091.
FARRELL, E. J., ZAPPULLA, R. A., AND SPIGELMAN, M. 1987. Imaging tools for interpreting two- and three-dimensional medical data. In Proceedings of the 8th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '87 (Philadelphia, Penn., Mar. 22-26), pp. 60-69.

Fuchs / Poulton Pixel-Planes Machines Bibliography

ELLSWORTH, D., GOOD, H., AND TEBBS, B. 1990. Distributing display lists on a multicomputer. In Proceedings of the 1990 Symposium on Interactive 3D Graphics (Snowbird, Utah, Mar. 25-28), pp. 147-154.
FUCHS, H., KEDEM, Z. M., AND USELTON, S. P. 1977. Optimal surface reconstruction from planar contours. Commun. ACM 20, 10 (Nov.), 693-702.
FUCHS, H., KEDEM, Z. M., AND NAYLOR, B. 1979. Predetermining visibility priority in 3-D scenes (preliminary report). Comput. Graph. 13, 2 (Aug.), 175-181.
FUCHS, H., KEDEM, Z. M., AND NAYLOR, B. F. 1980. On visible surface generation by a priori tree structures. Comput. Graph. 14, 2 (Aug.), 124-133.
FUCHS, H., ABRAM, G. D., AND GRANT, E. D. 1983. Near real-time shaded display of rigid objects. Comput. Graph. 17, 3 (July), 65-69.
FUCHS, H., GOLDFEATHER, J., HULTQUIST, J. P., SPACH, S., AUSTIN, J. D., BROOKS, F. P. JR., EYLES, J. G., AND POULTON, J. 1985. Fast spheres, shadows, textures, transparencies, and image enhancements in Pixel-Planes. Comput. Graph. 19, 3 (July), 111-120.
FUCHS, H., POULTON, J., EYLES, J., AUSTIN, J., AND GREER, T. 1986. Pixel-Planes: A parallel architecture for raster graphics. Pixel-Planes Project Summary, Dept. of Computer Science, Univ. of North Carolina.
FUCHS, H., POULTON, J., EYLES, J., AND GREER, T. 1988a. Coarse-grain and fine-grain parallelism in the next generation Pixel-Planes graphics system. TR88-014, Dept. of Computer Science, Univ. of North Carolina at Chapel Hill, and Proceedings of the International Conference and Exhibition on Parallel Processing for Computer Vision and Display (University of Leeds, United Kingdom, Jan. 12-15).
FUCHS, H., POULTON, J., EYLES, J., GREER, T., HILL, E., ISRAEL, L., ELLSWORTH, D., GOOD, H., INTERRANTE, V., MOLNAR, S., NEUMANN, U., RHOADES, J., TEBBS, B., AND TURK, G. 1988b. Pixel-Planes: A parallel architecture for raster graphics. Pixel-Planes Project Summary, Dept. of Computer Science, Univ. of North Carolina.
FUCHS, H., PIZER, S. M., CREASY, J. L., RENNER, J. B., AND ROSENMAN, J. G. 1988c. Interactive, richly cued shaded display of multiple 3D objects in medical images. Medical Imaging II: Image Data Management and Display, SPIE 914, (Newport Beach, Calif., Jan. 31-Feb. 5), pp. 842-849.
FUCHS, H., POULTON, J., EYLES, J., GREER, T., GOLDFEATHER, J., ELLSWORTH, D., MOLNAR, S., AND TURK, G. 1989c. Pixel-Planes 5: A heterogeneous multiprocessor graphics system using processor-enhanced memories. Comput. Graph. 23, 4 (July), 79-88.
FUCHS, H., LEVOY, M., AND PIZER, S. M. 1989a. Interactive visualization of 3D medical data. IEEE Comput. 22, 8 (Aug.), 46-52.
GOLDFEATHER, J., AND FUCHS, H. 1986. Quadratic surface rendering on a logic-enhanced frame-buffer memory. IEEE Comput. Graph. Appl. 6, 1 (Jan.), 48-59.
LEVOY, M. 1989b. Design for a real-time high-quality volume rendering workstation. In Proceedings of the Chapel Hill Workshop on Volume Visualization (Chapel Hill, N.C., May 18-19), Univ. of North Carolina Press, pp. 85-92.
PIZER, S. M., AND FUCHS, H. 1987. Three dimensional image presentation techniques in medical imaging. In Proceedings of the International Symposium: CAR '87, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 589-598.
POULTON, J., FUCHS, H., AUSTIN, J., EYLES, J., AND GREER, T. 1987. Building a 512 x 512 Pixel-Planes system. 1987 Conference on Advanced Research in VLSI (Stanford University, Palo Alto, Calif.), preprint.

Kaufman's Cube Machine Bibliography

BAKALASH, R., AND KAUFMAN, A. 1989. Medicube: A 3D medical imaging architecture. Comput. Graph. 13, 2, 151-159.


COHEN, D., KAUFMAN, A., AND BAKALASH, R. 1990. Real-time discrete shading. Visual Comput. 6, 1, 16-27.
GEMBALLA, R., AND LINDNER, R. 1982. The multiple-write bus technique. IEEE Comput. Graph. Appl. 2, 7 (Sept.), 33-41.
KAUFMAN, A., AND BAKALASH, R. 1985. A 3-D cellular frame buffer. In Proceedings of EUROGRAPHICS '85, Vandoni, C. E. (ed.), Elsevier Science Publishers B.V. (North-Holland), Amsterdam, The Netherlands, pp. 215-220.
KAUFMAN, A., AND SHIMONY, E. 1986. 3D scan-conversion algorithms for voxel-based graphics. In Proceedings of the ACM Workshop on Interactive 3D Graphics (Chapel Hill, N.C., Oct. 23-24), pp. 45-75.
KAUFMAN, A. 1987. Efficient algorithms for 3D scan-conversion of parametric curves, surfaces, and volumes. Comput. Graph. 21, 4 (July), 171-179.
KAUFMAN, A. 1988a. Efficient algorithms for scan-converting 3D polygons. Comput. Graph. 12, 2, 213-219.
KAUFMAN, A. 1988b. The cube workstation: A 3-D voxel-based graphics environment. Visual Comput. 4, 4 (Oct.), 210-221.
KAUFMAN, A. 1988c. The CUBE three-dimensional workstation. In Proceedings of the 9th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '88 (Anaheim, Calif., Mar. 20-24), pp. 344-354.
KAUFMAN, A. 1989. The voxblt engine: A voxel frame buffer processor. In Advances in Graphics Hardware III, Kuijk, F., and Strasser, W. (eds.), Springer-Verlag, Berlin, Germany.
KAUFMAN, A., AND BAKALASH, R. 1988. Memory and processing architecture for 3D voxel-based imagery. IEEE Comput. Graph. Appl. 8, 6 (Nov.), 10-23.
KAUFMAN, A., AND BAKALASH, R. 1989a. Parallel processing for 3D voxel-based graphics. In Parallel Processing for Computer Vision and Display, Dew, P. M., Earnshaw, R. A., and Heywood, T. R. (eds.), Addison-Wesley, Wokingham, England.
KAUFMAN, A., AND BAKALASH, R. 1989b. The CUBE system as a 3D medical workstation. Three-Dimensional Visualization and Display Technologies, SPIE 1083, (Los Angeles, Calif., Jan. 18-20), pp. 189-194.
KAUFMAN, A., AND BAKALASH, R. 1990. Viewing and rendering processor for a volume visualization system (extended abstract). In Advances in Graphics Hardware IV, Grimsdale, R. L., and Strasser, W. (eds.), Springer-Verlag, Berlin, Germany.

Mayo Clinic True 3D Machine Bibliography

HARRIS, L. D., ROBB, R. A., YUEN, T. S., AND RITMAN, E. L. 1979. Display and visualization of three-dimensional reconstructed anatomic morphology: Experience with the thorax, heart, and coronary vasculature of dogs. J. Comput. Assist. Tomog. 3, 4 (Aug.), 439-446.
HARRIS, L. D., CAMP, J. J., RITMAN, E. L., AND ROBB, R. A. 1986. Three-dimensional display and analysis of tomographic volume images utilizing a varifocal mirror. IEEE Trans. Med. Imag. MI-5, 2 (June), 67-72.
HEFFERMAN, P. B., AND ROBB, R. A. 1985a. A new method for shaded surface display of biological and medical images. IEEE Trans. Med. Imag. MI-4, 1 (Mar.), 26-38.
HEFFERMAN, P. B., AND ROBB, R. A. 1985b. Display and analysis of 4-D medical images. In Proceedings of the International Symposium: CAR '85, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 583-592.
ROBB, R. A. 1985. Three-Dimensional Biomedical Imaging, vol. 1. CRC Press, Boca Raton, Fla.
ROBB, R. A., HEFFERMAN, P. B., CAMP, J. J., AND HANSON, D. P. 1986. A workstation for interactive display and quantitative analysis of 3D and 4D biomedical images. In Proceedings of the 10th Annual Symposium on Computer Applications in Medical Care (Washington, D.C., Oct. 25-26), pp. 240-256.
ROBB, R. A. 1987. A workstation for interactive display and analysis of multidimensional biomedical images. In Proceedings of the International Symposium: CAR '87, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 642-656.
ROBB, R. A., AND BARILLOT, C. 1988. Interactive 3-D image display and analysis. In Proceedings of the Society of Photo-Optical Instrumentation Engineers: Hybrid Image and Signal Processing, 939, Casasent, D. P., and Tescher, A. G. (eds.), (Orlando, Fla.), pp. 173-202.
ROBB, R. A., AND BARILLOT, C. 1989. Interactive display and analysis of 3-D medical images. IEEE Trans. Med. Imag. 8, 3 (Sept.), 217-226.
ROBB, R. A., AND HANSON, D. P. 1990. ANALYZE: A software system for biomedical image analysis. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 507-518.

MIPG Machines Bibliography

ARTZY, E. 1979. Display of three-dimensional information in computed tomography. Comput. Graph. Image Process. 9, 196-198.
ARTZY, E., FRIEDER, G., HERMAN, G. T., AND LIU, H. K. 1979. A system for three-dimensional dynamic display of organs from computed tomograms. In Proceedings of the 6th Conference on Computer Applications in Radiology and Computer-Aided Analysis of Radiological Images (Newport Beach, Calif., June), pp. 285-290.


ARTZY, E., FRIEDER, G., AND HERMAN, G. T. 1981. The theory, design, implementation and evaluation of a three-dimensional surface detection algorithm. Comput. Graph. Image Process. 15 (Jan.), 1-24.
CHEN, L. S., HERMAN, G. T., REYNOLDS, R. A., AND UDUPA, J. K. 1984. Surface rendering in the cuberille environment. Tech. Rep. MIPG 87, Dept. of Radiology, Univ. of Pennsylvania.
CHEN, L. S., HERMAN, G. T., MEYER, C. R., REYNOLDS, R. A., AND UDUPA, J. K. 1984b. 3D83: An easy-to-use software package for three-dimensional display from computed tomograms. In Proceedings of the IEEE Computer Society Joint International Symposium on Medical Images and Icons (Arlington, Va., July), IEEE Computer Society Press, pp. 309-316.
CHEN, L.-S., HERMAN, G. T., REYNOLDS, R. A., AND UDUPA, J. K. 1985. Surface shading in the cuberille environment. IEEE Comput. Graph. Appl. 5, 12 (Dec.), 33-43.
EDHOLM, P. R., LEWITT, R. M., LINDHOLM, B., HERMAN, G. T., UDUPA, J. K., CHEN, L. S., MARGASAHAYAM, P. S., AND MEYER, C. R. 1986. Contributions to reconstruction and display techniques in computerized tomography. Tech. Rep. MIPG 110, Dept. of Radiology, Univ. of Pennsylvania.
FRIEDER, G., GORDON, D., AND REYNOLDS, R. A. 1985a. Back-to-front display of voxel-based objects. IEEE Comput. Graph. Appl. 5, 1 (Jan.), 52-60.
FRIEDER, G., HERMAN, G. T., MEYER, C., AND UDUPA, J. 1985b. Large software problems for small computers: An example from medical imaging. IEEE Softw. 2, 5 (Sept.), 37-47.
HERMAN, G. T., AND LIU, H. K. 1978. Dynamic boundary surface detection. Comput. Graph. Image Process. 7, 130-138.
HERMAN, G. T., AND LIU, H. K. 1979. Three-dimensional display of human organs from computed tomograms. Comput. Graph. Image Process. 9, 1-21.
HERMAN, G. T., AND COIN, C. G. 1980. The use of three-dimensional computer display in the study of disk disease. J. Comput. Assist. Tomograph. 4, 4 (Aug.), 564-567.
HERMAN, G. T., UDUPA, J. K., KRAMER, D., LAUTERBUR, P. C., RUDIN, A. M., AND SCHNEIDER, J. S. 1982. Three-dimensional display of nuclear magnetic resonance images. Opt. Eng. 21, 5 (Sept./Oct.), 923-926.
HERMAN, G. T., AND WEBSTER, D. 1983. A topological proof of a surface tracking algorithm. Comput. Vision, Graph., Image Process. 23, 162-177.
HERMAN, G. T. 1985. Computer graphics in radiology. In Proceedings of the International Symposium: CAR '85, Computer Assisted Radiology (Berlin, West Germany), Computer Assisted Radiology Press, pp. 539-550.
HERMAN, G. T. 1986. Computerized reconstruction and 3-D imaging in medicine. Tech. Rep. MIPG 108, Dept. of Radiology, Univ. of Pennsylvania.
REYNOLDS, R. A. 1983b. Some architectures for real-time display of three-dimensional objects: A comparative survey. Tech. Rep. MIPG 84, Dept. of Radiology, Univ. of Pennsylvania.
UDUPA, J. K., RAYA, S. P., AND BARRETT, W. A. 1990. A PC-based 3D imaging system for biomedical data. In Proceedings of the 1st Conference on Visualization in Biomedical Computing (Atlanta, Ga., May 22-25), IEEE Computer Society Press, pp. 295-303.
UDUPA, J. K., AND HUNG, H.-M. 1990. A comparison of surface and volume rendering methods. SCAR '90: Proceedings of the 10th Conference on Computer Applications in Radiology and the 4th Conference on Computer Assisted Radiology (Anaheim, Calif., June 13-16), Symposia Foundation Press, pp. 464-470.

Pixar / Vicom Image Computer and Pixar / Vicom II Bibliography

CARPENTER, L. 1984. The A-buffer, an antialiased hidden surface method. Comput. Graph. 18, 3 (July), 103-108.
CATMULL, E. 1984. An analytic visible surface algorithm for independent pixel processing. Comput. Graph. 18, 3 (July), 109-115.
COOK, R. L. 1984. Shade trees. Comput. Graph. 18, 3 (July), 223-231.
COOK, R. L., PORTER, T., AND CARPENTER, L. 1984. Distributed ray tracing. Comput. Graph. 18, 3 (July), 137-145.
COOK, R. L., CARPENTER, L., AND CATMULL, E. 1987. The Reyes image rendering architecture. Comput. Graph. 21, 4 (July), 95-102.
DREBIN, R. A., CARPENTER, L., AND HANRAHAN, P. 1988. Volume rendering. Comput. Graph. 22, 4 (Aug.), 65-74.
LEVINTHAL, A., AND PORTER, T. 1984. Chap: A SIMD graphics processor. Comput. Graph. 18, 3 (July), 77-82.
NEY, D. R., FISHMAN, E. K., MAGID, D., AND DREBIN, R. A. 1990b. Volumetric rendering of computed tomography data: Principles and techniques. IEEE Comput. Graph. Appl. 10, 2 (Mar.), 24-32.
PIXAR CORPORATION. 1988a. ChapLibraries Technical Summary for the UNIX Operating System.
PIXAR CORPORATION. 1988b. ChapTools Technical Summary for the UNIX Operating System.
PIXAR CORPORATION. 1988c. Pixar II Hardware Overview.


PIXAR CORPORATION. 1988d. Chap Image Processing System Technical Summary.
PIXAR CORPORATION. 1989. Chap Volumes Volume Rendering Package Technical Summary.
PORTER, T., AND DUFF, T. 1984. Compositing digital images. Comput. Graph. 18, 3 (July), 253-259.
ROBERTSON, B. 1986. PIXAR goes commercial in a new market. Comput. Graph. World 9, 6 (June), 61-70.
SPRINGER, D. 1986. PIXAR image computer revolutionizes industrial design. Intell. Instrum. Comput. (Nov./Dec.), 276-277.

Reynolds & Goldwasser's Voxel Processor Bibliography

GOLDWASSER, S. M., AND REYNOLDS, R. A. 1983. An architecture for the real-time display and manipulation of three-dimensional objects. In Proceedings of the International Conference on Parallel Processing (Bellaire, Mich., Aug.), IEEE Computer Society Press, pp. 269-274.
GOLDWASSER, S. M. 1984a. A generalized object display processor architecture. In Proceedings of the 11th Annual International Symposium on Computer Architecture (Ann Arbor, Mich., May), IEEE Computer Society Press, pp. 38-47.
GOLDWASSER, S. M. 1984b. A generalized object display processor architecture. IEEE Comput. Graph. Appl. 4, 10 (Oct.), 43-55.
GOLDWASSER, S. M. 1985. The voxel processor architecture for real-time display and manipulation of 3D medical objects. In Proceedings of the 6th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '85 (Dallas, Tex., Apr. 14-18), pp. 71-80.
GOLDWASSER, S. M., REYNOLDS, R. A., BAPTY, T., BARAFF, D., SUMMERS, J., TALTON, D. A., AND WALSH, E. 1985. Physician's workstation with real-time performance. IEEE Comput. Graph. Appl. 5, 12 (Dec.), 44-57.
GOLDWASSER, S. M. 1986. Rapid techniques for the display and manipulation of 3-D biomedical data. In Proceedings of the 7th Annual Conference and Exposition of the National Computer Graphics Association, NCGA '86 (Anaheim, Calif., May 11-15), pp. 115-149.
GOLDWASSER, S. M., REYNOLDS, R. A., TALTON, D., AND WALSH, E. 1986. Real-time interactive facilities associated with a 3-D medical workstation. Application of Optical Instrumentation in Medicine XIV and Picture Archiving and Communication Systems (PACS IV) for Medical Applications, SPIE 626, (Newport Beach, Calif., Feb. 2-7), pp. 491-503.
GOLDWASSER, S. M., AND REYNOLDS, R. A. 1987. Real-time display and manipulation of 3-D medical objects: The voxel processor architecture. Comput. Vision, Graph., Image Process. 39, 1-27.
GOLDWASSER, S. M., REYNOLDS, R. A., TALTON, D. A., AND WALSH, E. S. 1988a. High-performance graphics processors for medical imaging applications. In Proceedings of the International Conference on Parallel Processing for Computer Vision and Display (University of Leeds, United Kingdom, Jan. 12-15), pp. 1-13.
GOLDWASSER, S. M., REYNOLDS, R. A., TALTON, D. A., AND WALSH, E. S. 1988b. Techniques for the rapid display and manipulation of 3-D biomedical data. Comput. Med. Imag. Graph. 12, 1 (Jan.-Feb.), 1-24.
GOLDWASSER, S. M., REYNOLDS, R. A., TALTON, D. A., AND WALSH, E. S. 1989. High performance graphics processors for medical imaging applications. In Parallel Processing for Computer Vision and Display, Dew, P. M., Earnshaw, R. A., and Heywood, T. R. (eds.), Addison-Wesley, Wokingham, England.
REYNOLDS, R. A. 1983a. Simulation of 3D display systems. Dept. of Radiology, Univ. of Pennsylvania, unpublished report.
REYNOLDS, R. A. 1985. Fast methods for 3D display of medical objects. Ph.D. Dissertation, Dept. of Computer and Information Science, Univ. of Pennsylvania.
Received August 1988; final revision accepted March 1991.
