
Interactive Volume Rendering

Lee Westover

Department of Computer Science, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-3175 USA

ABSTRACT

Volume rendering is the display of data sampled in three dimensions. Traditionally, display of such data has been through conventional line or surface drawing methods preceded by processes that coerce the sampled data into a form suitable for display. This approach is being replaced by new techniques which operate directly on the three dimensional samples to avoid the artifacts introduced by the use of conventional graphics primitives.

Volume rendering is a compute-intensive operation. This paper discusses an approach for volume rendering in which interactive speed is achieved through a parallelizable forward mapping algorithm, successive refinement, table driven mappings for shading and filtering, and the avoidance of complex machine classification of the data.

Since the renderer is interactive, users are able to specify application specific mapping functions on-the-fly. Current applications include molecular modeling, geology, computed tomography, and astronomy.

KEYWORDS: 3D Image, Volume Rendering, Algorithms.

INTRODUCTION

A common data type in scientific computing is a regular three-dimensional grid of sample points. A single data point may be a scalar, as in electron density data, or a vector, as in the results of a fluid flow simulation. Other examples include seismic studies for oil exploration, light wavelength data for galaxies, and stacks of computed tomography scans for medical imaging.

Direct display of volume data is called volume rendering. Current techniques are too slow to be interactive because renderers either tri-linearly interpolate many points along each sight ray or perform complicated polygon fitting for each point neighborhood. Since typical data range from 64 by 64 by 64 data points to 256 by 256 by 256 data points, the sheer number of samples amplifies any inefficiency of the display algorithm. In a non-interactive mode a user guesses at viewing and shading parameters and renders an image that can take anywhere from 5 minutes to many hours to compute. When the image is completed, the user may change the input parameters and try again, iterating until a satisfactory image is generated. While this batch method of volume rendering is satisfactory for final presentation image generation, it does not lend itself to data exploration. The length of time between iterations makes experimentation painful and breaks idea continuity.

The goal of this work is to design a system for interactive exploration of volume data with enough flexibility to encourage the user to try numerous and possibly unusual mappings. The renderer uses only table driven interpretation and classification so the user can easily understand how the data is being mapped to the final image. The user must be able to control each and every step in the generation process. When the user changes an input viewing parameter, he should immediately (within 10 seconds) see the change. The altered image may not be the final image, but it should be adequate for the user to quickly steer through his data [Brooks 86] [Greenberg 86].

PREVIOUS WORK

Early attempts to visualize volume data often failed because of the large data size and the massive amounts of calculation needed. Because of the availability of polygon renderers and line drawing displays, the volume rendering problem was often coerced into one of these problems by preprocessing the data. Contour maps would display a line drawing connection of equal valued data points [Wright 72] [Williams 82]. Alternatively, these contours were triangulated to form polygons that were then fed into polygon engines [Fuchs 77] [Ganapathy 82]. Other algorithms were developed for special classes of point data such as the rings of Saturn [Blinn 82], clouds [Kajiya 84], meteorological data [Hibbard 86], and points as primitives [Reeves 83] [Levoy 85].

Recently, advances in machine speed and memory capacity have allowed researchers to eliminate the intermediate surface model and directly display volume data. These approaches typically fall into one of two categories, backward mapping and forward mapping.

Backward mapping algorithms are those algorithms that map the image plane into the data, commonly called ray tracing. For each pixel in the final image, the renderer shoots rays from the pixel into the data and intersects that ray with each data point until either the ray exits the volume or the opacity accumulates enough density to become opaque [VanHook 86] [Levoy 88] [Sabella 88] [Upson 88].

Forward mapping algorithms are those algorithms that directly map the data onto the image plane. Examples in surface graphics include the Z-buffer and the painter's algorithm. For each data point, the renderer maps the point onto the image plane and then adds its contribution to the accumulating image. This accumulation can be either back-to-front or front-to-back. The image is complete when each data point has been added to the screen or the image becomes opaque and new samples can have no further effect on the final result [Drebin 88] [Upson 88].

[Figure 1. Reconstruction: volume reconstruction of the input samples and image reconstruction of the intermediate samples.]

In backward mapping algorithms input values rarely fall exactly along a ray. Consequently, a continuous approximation of the volume data is generated near a sample point using tri-linear interpolation. Image reconstruction is required when the intermediate samples do not fall exactly on output pixel centers. This occurs when anti-aliasing is done by uniform or stochastic super-sampling. (If all rays originate at exact pixel centers, the second reconstruction step becomes the identity mapping.) Anti-aliasing is required in a backward mapping algorithm because tri-linear interpolation is a poor reconstruction kernel [Levoy 88].

A forward mapping algorithm can perform the volume space reconstruction in two-space because the data points themselves are fed through the rendering pipeline and only an image space footprint of the data point need be considered. Furthermore, the image space reconstruction is not required because the footprint function is a continuous function and needs only to be sampled. Details of this process appear later in the paper.

A problem for both forward and backward mapping algorithms is perspective. For a perspective view, the sampling rate of the input data with respect to the screen changes with depth. However, orthographic views of volume data are useful in their own right and often desired. Since perspective views are not critical for a wide range of applications, the renderer described in this paper only generates orthographic views at this time.

DESIGN TRADEOFFS

The main goal of this work is to find an algorithm suitable for interactive volume rendering. A parallel algorithm is desired because of the large amount of computation required in the volume rendering process. Similarly, as much of the rendering process as possible should be table driven.

For an arbitrary view, a naive parallel backward mapping algorithm requires that the entire data volume be replicated at each parallel computation node. More sophisticated methods may relieve some of this replication but will not eliminate it. Since a forward mapping algorithm can treat each data point in isolation, there is no need to replicate any of the input volume in a parallel system.

There are two places where discrete samples may be reconstructed in the volume rendering process. First, the renderer does volume space reconstruction, where the individual three-dimensional input samples are reconstructed into a three-dimensional continuous function. Second, the renderer does image space reconstruction, where individual image space intermediate samples are reconstructed into viewable samples that make the final image.

For maximum speed the renderer uses table driven operations as often as possible. For example, the reconstruction process is driven by filter tables and the shading process is driven by four shading tables. The renderer should not make any binary classification or shading decisions. All decisions should be probabilistic or weighted [Levoy 88] [Drebin 88]. The use of shading tables does not enforce this rule but certainly supports it.

Since a forward mapping algorithm allows parallel implementation without input data replication, allows volume reconstruction to occur in two-dimensional image space, and provides speed and flexibility through table driven shading and reconstruction, the renderer presented here uses a table driven forward mapping algorithm.

ALGORITHM

The algorithm consists of three main parts: the viewing transformation, signal reconstruction, and converting an input sample into a shaded intermediate sample.

[Figure 2. Block diagram of the rendering pipeline: raw data and gradient operator, transform, table lookup shading, reconstruction, final image.]

Pipeline Structure

The path through the rendering pipeline is to take an input sample packet,

    density value
    gradient strength
    gradient direction
    grid location

perform the grid space to screen space mapping which converts the grid location into a screen location, and form a packet that consists of:

    density value
    gradient strength
    gradient direction
    screen location

The packet is then shaded, converting the density value and gradient information into red, green, blue, and alpha values, forming a packet that consists of:

    red
    green
    blue
    alpha
    screen location

This packet is then passed through a reconstruction step and combined into the image buffer. When all input samples have been processed the image is complete. The following sections describe these processing steps in detail.

TRANSFORMATIONS AND RECONSTRUCTION

Screen Mapping

Since the input volume is a regular three dimensional grid and the renderer only generates orthographic views, a digital differential analyzer (DDA) can incrementally map each data point from grid space to screen space:

    [dx/di  dx/dj  dx/dk]
    [dy/di  dy/dj  dy/dk]
    [dz/di  dz/dj  dz/dk]

where dA/dB denotes the change in the A direction in image space for each step in the B direction in grid space. Thus, the step sizes in x, y, and z for each input step in i, j, and k can be read directly from the transformation matrix since it is made up of only rotations and scaling. By inspecting the sign and magnitude of these values, the renderer can determine a traversal that guarantees a back-to-front ordering or a front-to-back ordering. Each ordering has its advantages; the back-to-front ordering allows the user to watch the image as it is formed and see features which later may be obscured. Once these deltas are known and the ordering is known, the first input point is transformed with a full matrix multiplication. This is the origin for the DDA. The renderer then just adds the appropriate delta as it walks through the input volume. The image locations of the intermediate samples are used to build the image at the end of the rendering pipeline.
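As an illustration, the incremental mapping might be coded as below. This is only a sketch under assumptions of my own, not the system's actual code: the names (map_volume, emit_packet, view) are invented, the view matrix is assumed to hold the per-axis deltas described above, and ascending loop order is shown although the real traversal direction would be chosen from the signs of the deltas.

    /* Sketch of the DDA grid-to-screen mapping: transform the first
     * grid point with one full matrix multiply, then reach every
     * other point by adding precomputed deltas. */
    typedef struct { float x, y, z; } Vec3;

    extern void emit_packet(int i, int j, int k, Vec3 screen);

    void map_volume(const float view[3][3], Vec3 origin,
                    int ni, int nj, int nk)
    {
        /* column b of the matrix is the screen-space step for one
         * step along grid axis b */
        Vec3 di = { view[0][0], view[1][0], view[2][0] };
        Vec3 dj = { view[0][1], view[1][1], view[2][1] };
        Vec3 dk = { view[0][2], view[1][2], view[2][2] };

        Vec3 pk = origin;        /* full transform done only once */
        for (int k = 0; k < nk; k++) {
            Vec3 pj = pk;
            for (int j = 0; j < nj; j++) {
                Vec3 pi = pj;
                for (int i = 0; i < ni; i++) {
                    /* density, gradient, and screen location
                     * continue down the pipeline */
                    emit_packet(i, j, k, pi);
                    pi.x += di.x; pi.y += di.y; pi.z += di.z;
                }
                pj.x += dj.x; pj.y += dj.y; pj.z += dj.z;
            }
            pk.x += dk.x; pk.y += dk.y; pk.z += dk.z;
        }
    }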

Reconstruction

There is one stage in the volume rendering process where the renderer must reconstruct a continuous signal from discrete samples. Image plane samples are generated by sampling the discrete input volume. Since resampling a sampled signal is undefined, a continuous signal needs to be reconstructed from the original data samples prior to resampling. The sampled signal can be thought of as the product of a comb function and some continuous signal. The ideal way to reconstruct such a signal is to convolve a sinc function with the sampled data. For a band limited original signal, the convolution can exactly reconstruct the original signal. Similar to the way surface graphics image reconstruction can be either pixel driven or polygon fragment driven [Carpenter 84] [Abram 85], volume graphics reconstruction can either be pixel driven or data sample driven.

A backward mapping algorithm is pixel driven and can have problems with the volume signal reconstruction. The volume convolution between a three-dimensional sinc function and the input volume is an expensive operation. For speed considerations, most backward mapping algorithms use tri-linear interpolation to generate points that lie exactly along a ray. However, tri-linear interpolation is an ineffective reconstruction kernel, which necessitates over sampling and a second low pass filtering step of the generated samples.

Instead of trying to determine which part of the input volume affects a given pixel, the forward mapping algorithm turns the problem "inside out" and determines what output pixels a given sample can affect. Since the data points themselves are the input samples, there is no need to generate interpolated values. This does not alleviate the need for volume reconstruction; it just moves the problem to a later stage of the rendering pipeline, when a sample is added to the final image.

Reconstruction is the process of convolving the reconstruction kernel with the sampled signal. The volume reconstruction equation is

    signal_3D(x,y,z) = ∫∫∫ h_v(u-x, v-y, w-z) Σ_{i ∈ Vol} δ(u-i_x, v-i_y, w-i_z) p(u,v,w) du dv dw

where h_v() denotes the volume reconstruction kernel, p denotes the density function, and δ denotes the comb function. Moving the summation outside the integral and evaluating the integral at the point (x,y,z) results in

    signal_3D(x,y,z) = Σ_{i ∈ Vol} h_v(i_x - x, i_y - y, i_z - z) p(i)

where i ranges over the input samples that lie within the kernel centered at (x,y,z).

Instead of considering how multiple samples affect a point, consider how a sample can affect many points in space. The effect at a point (x,y,z) of a data sample i is

    effect_i(x,y,z) = p(i) h_v(i_x - x, i_y - y, i_z - z)

Therefore, the renderer can treat each data sample individually and spread its effect to the output pixels.

Combining

Visibility in and intensity from density functions is often modeled by a scattering equation which integrates brightness along the view direction [Blinn 82] [Kajiya 84] [Sabella 88]. An effective approximation to a scattering equation is to use image composition functions [Porter 84]. Since the composition functions require discrete layers, the renderer needs to integrate each sample along the view direction. Therefore the sample is projected onto the image plane. Projecting the sample onto the image plane at pixel (x,y) gives

    effect_i(x,y) = ∫ h_v(i_x - x, i_y - y, w) p(i) dw

Since p is independent of the integration variable w, p can be moved outside the integral:

    effect_i(x,y) = p(i) ∫ h_v(i_x - x, i_y - y, w) dw

Notice that the integral is independent of the sample's density. Since it depends only on the sample's projected location, the footprint function can be defined as follows:

    footprint(x,y) = ∫ h_v(x,y,w) dw

where (x,y) denotes the displacement of an image sample from the center of the volume reconstruction kernel's projection.

Now the renderer adds the point to the image buffer. Since the footprint function is continuous in (x,y), there is no need to reconstruct this function; it only needs to be sampled at pixel centers to determine the footprint's contribution to a pixel:

    weight(u,v) = footprint(P_x - u, P_y - v)

where (u,v) denotes the location of a sample's image plane projection and P denotes the pixel in question.

If the three-dimensional volume kernel is rotationally symmetric, the footprint function can be precomputed for all viewpoints and stored in a table in a preprocessing step. If not, the footprint function can be computed once per view and stored in a table. In either case the table is indexed by the fractional offset of a sample's projection from the pixel center and returns the weight for each pixel in the neighborhood of a sample's projection. The sample's color and opacity (determined by the shader) are then weighted by the table value and added to the image buffer. The process of table lookup, weighting, and combining is called splatting.

Once the point's image plane footprint is determined, it is added to the image buffer. The combining rules are different for a front-to-back and a back-to-front traversal, but these rules are equivalent (in the absence of roundoff errors). For a front-to-back traversal the formulas are:

    I_o = I_c + ((1 - A_c) * (I_n * A_n))
    A_o = A_c + ((1 - A_c) * A_n)

For a back-to-front traversal the formulas are:

    I_o = ((1 - A_n) * I_c) + (I_n * A_n)
    A_o = ((1 - A_n) * A_c) + A_n

where I denotes the intensity, A denotes the opacity, o denotes the output, c denotes what is already in the image buffer, and n denotes the new point.
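The footprint integral and the splatting loop might look as follows in code. This is a minimal sketch, not the renderer's implementation: the truncated Gaussian kernel, the table resolution, and all of the names (FOOT, build_footprint, splat, EXTENT, SUB) are assumptions for the example, the weights are left unnormalized, and the back-to-front combining rule from above is applied per pixel.

    #include <math.h>

    #define EXTENT 2                  /* pixel radius of the footprint  */
    #define SUB    16                 /* table samples per pixel offset */
    #define TABSZ  (2*EXTENT*SUB + 1)

    static float FOOT[TABSZ][TABSZ];  /* footprint table for this view  */

    typedef struct { float r, g, b, a; } RGBA;

    /* Build the table once (per view, or once ever for a rotationally
     * symmetric kernel) by integrating the kernel along the view
     * direction: footprint(x,y) = integral of h_v(x,y,w) dw. */
    void build_footprint(void)
    {
        for (int v = 0; v < TABSZ; v++)
            for (int u = 0; u < TABSZ; u++) {
                float x = (float)u / SUB - EXTENT;
                float y = (float)v / SUB - EXTENT;
                float sum = 0;
                for (float w = -EXTENT; w <= EXTENT; w += 0.05f)
                    sum += 0.05f * expf(-(x*x + y*y + w*w));
                FOOT[v][u] = sum;
            }
    }

    /* Splat one shaded sample whose projection is (sx,sy): the
     * fractional offset from each nearby pixel center selects a
     * weight, and I_o = (1-A_n)*I_c + I_n*A_n combines back-to-front. */
    void splat(RGBA *image, int width, int height,
               RGBA sample, float sx, float sy)
    {
        int px0 = (int)sx - EXTENT + 1;   /* assumes sx, sy >= 0 */
        int py0 = (int)sy - EXTENT + 1;
        for (int py = py0; py < py0 + 2*EXTENT; py++) {
            for (int px = px0; px < px0 + 2*EXTENT; px++) {
                if (px < 0 || py < 0 || px >= width || py >= height)
                    continue;
                int u = (int)((px - sx + EXTENT) * SUB);
                int v = (int)((py - sy + EXTENT) * SUB);
                float an = sample.a * FOOT[v][u];  /* weighted opacity */
                RGBA *dst = &image[py*width + px];
                dst->r = (1 - an)*dst->r + sample.r*an;
                dst->g = (1 - an)*dst->g + sample.g*an;
                dst->b = (1 - an)*dst->b + sample.b*an;
                dst->a = (1 - an)*dst->a + an;
            }
        }
    }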

SHADING

Provided the renderer's shader uses only information that is either part of the sample item or can be generated once in a preprocessing step, any shader can fit in the forward mapping algorithm. For speed considerations, the renderer in the current system uses a table driven shader that is made up of four parts: emittance, diffuse reflection, specular reflection, and opacity calculations. Conceptually a shaded object is a reflective, light emitting, semi-transparent blob. This shader requires that the density, gradient strength, and gradient direction be known for each input sample.

Since the gradient operator requires knowledge of neighboring samples, gradients are generated in a preprocessing step. The result of the preprocessing step is a 32 bit packet made up of 8 bits for density, 8 bits for gradient strength, and 5 bits each for the gradient x, y, and z direction (there is 1 bit left over). While many gradient operators are used in similar applications [Horn 81] [VanHook 86] [Drebin 88], the renderer uses the following:

    gradient_i(i,j,k) = data(i+1,j,k) - data(i-1,j,k)
    gradient_j(i,j,k) = data(i,j+1,k) - data(i,j-1,k)
    gradient_k(i,j,k) = data(i,j,k+1) - data(i,j,k-1)
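A sketch of this preprocessing pass appears below. The central differences follow the operator above; the paper does not specify how the direction components are quantized into their 5-bit fields, so the encoding shown (normalized components mapped to 0..31) is an assumption, as are all of the names.

    #include <math.h>
    #include <stdint.h>

    /* Build one 32-bit sample packet: 8 bits density, 8 bits gradient
     * strength, 5 bits per gradient direction component, 1 bit unused.
     * The caller is assumed to keep i, j, k away from the borders. */
    uint32_t make_packet(const uint8_t *data, int ni, int nj,
                         int i, int j, int k)
    {
        #define D(a,b,c) data[((c)*nj + (b))*ni + (a)]
        uint32_t dens = D(i,j,k);
        float gi = (float)D(i+1,j,k) - (float)D(i-1,j,k);
        float gj = (float)D(i,j+1,k) - (float)D(i,j-1,k);
        float gk = (float)D(i,j,k+1) - (float)D(i,j,k-1);
        #undef D

        float len = sqrtf(gi*gi + gj*gj + gk*gk);
        uint32_t strength = len > 255.0f ? 255u : (uint32_t)len;

        /* direction components in [-1,1] quantized to 0..31 */
        float s = 1.0f / (len + 1e-6f);
        uint32_t qi = (uint32_t)((gi*s + 1.0f) * 15.5f);
        uint32_t qj = (uint32_t)((gj*s + 1.0f) * 15.5f);
        uint32_t qk = (uint32_t)((gk*s + 1.0f) * 15.5f);

        return (dens << 24) | (strength << 16) |
               (qi << 10) | (qj << 5) | qk;    /* bit 15 left over */
    }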
The shading model uses four tables: a table to determine emitted color, a table to determine reflected color, a table to determine opacity, and a table to modulate the opacity. Each table has 256 entries and can be indexed by any value available in a sample packet. The indices select the corresponding shading values in the following shading rules. In addition, the emitted, diffuse, or specular component can be set to zero and the modulation component can be set to one, effectively turning off that component.

The emittance rule for shading is:

    I_emit = Table_emit[index_emit]

The diffuse rule for shading is:

    I_diff = Table_refl[index_refl] * DOT(L,G)

The specular rule for shading is:

    I_spec = Table_refl[index_refl] * DOT(H,G)^n

The opacity rule for shading is:

    A = Table_opac[index_opac] * Table_mod[index_mod]

The final intensity is:

    I = I_emit + I_diff + I_spec

where I denotes the intensity, A denotes the opacity, L denotes the light vector, G denotes the gradient direction, H denotes the vector half way between the eye vector and the light vector, and n denotes the specular power. Intensity has three components: red, green, and blue. Each intensity component is clamped to fall between 0 and 255.

Since the color specified in the tables is the color for a fully opaque sample, the color is attenuated by the opacity value; this attenuation occurs during the combining stage.

An example of a use for the opacity modulation table is surface enhancement. If the table is loaded with a ramp and the gradient strength is used to select a value, the effect is to increase the opacity of samples that lie between two samples that are drastically different, thus bringing out pseudo surfaces. Another example is to index the opacity modulation table with the packet's z value for pseudo depth cueing. Other interesting effects are achieved by selecting emitted and reflected color with different packet elements. For example, choosing emitted light based on gradient strength and reflected light based on density value is a useful way to view molecular electron density volumes.
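In code, the four-table shader might look like the sketch below. Which packet fields produce the four indices is chosen by the user's cross-bar selector at run time, so they arrive here as plain integers; the helper names and the clamping of negative dot products to zero are my additions, not the system's API.

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;
    typedef struct { float r, g, b; } RGB;
    typedef struct { float r, g, b, a; } RGBA;

    /* the four 256-entry user-loaded tables */
    RGB   table_emit[256], table_refl[256];
    float table_opac[256], table_mod[256];

    static float dotv(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static float clamp255(float v)    { return v < 0 ? 0 : (v > 255 ? 255 : v); }

    /* L = light vector, G = gradient direction, H = half-way vector,
     * n = specular power; indices come from the cross-bar selector. */
    RGBA shade(int i_emit, int i_refl, int i_opac, int i_mod,
               Vec3 L, Vec3 G, Vec3 H, float n)
    {
        float kd = fmaxf(dotv(L, G), 0.0f);           /* DOT(L,G)   */
        float ks = powf(fmaxf(dotv(H, G), 0.0f), n);  /* DOT(H,G)^n */
        RGB e = table_emit[i_emit], c = table_refl[i_refl];
        RGBA out;
        out.r = clamp255(e.r + c.r*kd + c.r*ks);
        out.g = clamp255(e.g + c.g*kd + c.g*ks);
        out.b = clamp255(e.b + c.b*kd + c.b*ks);
        out.a = table_opac[i_opac] * table_mod[i_mod];
        return out;
    }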

INTERACTION

User Interface

The user has many interactive controls for image generation. For shading, the user selects the emittance and reflectance tables, which are full color tables, and the opacity and modulation tables, which are scalar tables. The contents of the four tables are displayed on the user's screen. The user also has control over which parts of the sample packet are used to index each table by a shading cross-bar selector. For lighting, the user has the option of having the light direction fixed to the world coordinate system or fixed to the grid coordinate system.

For viewing parameters, the user uses a virtual track-ball to select the view direction. In addition, the user selects the high and low grid bounds for his data, which allows clipping in grid space. The user specifies which grid point appears in the center of the final image. The user can also zoom into or out of the data. The user selects whether to generate the images in a back-to-front or a front-to-back ordering.

Successive Refinement

A way to improve the update rate of image generation is to display partial images during image generation. This allows the user to view the data with the current parameters as quickly as possible. If the user does not change the viewing parameters, the image continues to improve while the user watches the display [Bergman 86]. The successive refinement takes two forms.

First, the splat buffer is visible to the user at all times, so he can see any partial image. If the view displayed is not to his liking, he can change a view parameter without waiting for the image to complete, and image generation starts over.

Second, different features of the rendering pipeline take different amounts of time to compute. The most important item affecting rendering time is the number of sample packets that are passed down the rendering pipeline. Another place to gain speed is to skip formal reconstruction and simply map each output sample packet to its nearest pixel. In addition, the different components of the shading equation take different amounts of time to compute, and can be turned on in succession. Therefore, there are three axes along which to refine an image: input resolution, reconstruction, and shading. Currently, while input parameters remain unchanged the renderer starts with a low resolution copy of the input, moves to a middle resolution copy, then moves to the full resolution copy. It then turns on reconstruction. Once this image completes, the diffuse and specular parts of the shading model are turned on.

The low resolution version of the data is achieved by visiting only every 4th input sample in each grid direction. With a three-dimensional input volume this reduces the number of samples for the pipeline by a factor of 64. The middle resolution version is achieved by visiting every other input sample. This reduces the number of input points by a factor of 8.
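The resolution axis of the refinement is just the sample traversal run with a coarser stride, roughly as below. This is a sketch; render_sample stands in for the transform, shade, and splat pipeline and is not a name from the system.

    extern void render_sample(int i, int j, int k);

    /* One refinement pass: visit every stride-th sample along each
     * grid axis.  A stride of 4 feeds 1/64 of the samples to the
     * pipeline, 2 feeds 1/8, and 1 is full resolution. */
    void refine_pass(int ni, int nj, int nk, int stride)
    {
        for (int k = 0; k < nk; k += stride)
            for (int j = 0; j < nj; j += stride)
                for (int i = 0; i < ni; i += stride)
                    render_sample(i, j, k);
    }

    /* schedule while the viewing parameters stay unchanged:
     * stride 4, then 2, then 1, then reconstruction on, then the
     * diffuse and specular shading terms on */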
PARALLEL IMPLEMENTATION

A parallel version of the forward mapping algorithm has been implemented. The system consists of a client, a splat server, and a set of map/shade servers. The splat server is currently a TAAC-1. It receives shaded packets from the client and reconstructs and combines the packets into the image buffer. The client is currently a Sun-3/180C with 16 MB of main memory. It runs the user interface, controls each map/shade server's actions, collects shaded packets from each map/shade server, and downloads these packets into the splat server. The map/shade servers are a run-time configurable collection of Sun-3s and Sun-4s with anywhere from 4 to 32 MB of main memory each. At the command of the client, each map/shade server reads a sub-cube of the input volume off disk. When the client tells each map/shade server to render an image, the map/shade server renders its sub-cube, generating shaded packets that it sends to the client in groups.

The only complicated part of the parallel version of the forward mapping algorithm is that the splat server must guarantee a back-to-front or a front-to-back traversal of the shaded packets from the multiple map/shade servers. This is done with a sorted linked-list of the map/shade server packet buffers.
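One way to realize such a sorted list is sketched below: each buffer is kept in the list ordered by the depth of its next unconsumed packet, so popping the head yields a globally depth-ordered stream (a k-way merge). The Packet and Buffer layouts and the function names are invented for the example; the paper does not give this level of detail.

    typedef struct { float z; /* plus color, opacity, screen x, y */ } Packet;

    typedef struct Buffer {
        Packet *pkts;        /* one server's packets, already in  */
        int next, count;     /* depth order within its sub-cube   */
        struct Buffer *link;
    } Buffer;

    /* keep the list ordered by the depth of each buffer's next packet;
     * for a front-to-back traversal the comparison would be reversed */
    static void insert_sorted(Buffer **head, Buffer *b)
    {
        while (*head && (*head)->pkts[(*head)->next].z >= b->pkts[b->next].z)
            head = &(*head)->link;
        b->link = *head;
        *head = b;
    }

    /* pop the globally next packet, deepest first (back-to-front) */
    static Packet *next_packet(Buffer **head)
    {
        Buffer *b = *head;
        Packet *p = &b->pkts[b->next++];
        *head = b->link;
        if (b->next < b->count)
            insert_sorted(head, b);
        return p;
    }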

RESULTS

Timing Tests

Both the single processor version and the parallel version of the algorithm were used to generate six images, one for each step in successive refinement.

    IMAGE   STEP IN REFINEMENT
    A       low resolution version of the data
    B       middle resolution version of the data
    C       full resolution version of the data
    D       full reconstruction
    E       diffuse portion of the shading model
    F       specular portion of the shading model

The timing volume data was a 96 by 128 by 113 computed tomography study of a human head. Color plates 1 and 2 contain the six result images. The single Sun-3 was a Sun-3/60M with 8 MB of main memory. The single Sun-4 was a Sun-4/280S with 64 MB of main memory. The single TAAC-1 was in a Sun-3/180 with 16 MB of main memory. The four Sun-3s were all Sun-3/60Ms with 8 MB of main memory. The four Sun-4s were all Sun-4/280Ss with 32 MB of main memory. All times are in seconds.

    MACHINES   A    B    C     D     E     F
    TAAC-1     3    9    65    165   179   182
    1 Sun-3    20   60   363   733   843   898
    4 Sun-3    3    21   141   172   205   223
    1 Sun-4    8    28   181   481   484   522
    4 Sun-4    3    11   89    108   112   116

Sample Images

Included are 16 sample images that display the results at each step in the refinement process as well as the output of the algorithm on various applications. On all images, the boundaries of the input volume are displayed as white edges.

    DATA           SIZE          DIMENSIONS
    CT of head     96 128 113    Three Spatial
    P-orbital      64 64 64      Three Spatial
    Transfer RNA   64 64 64      Three Spatial
    Hemoglobin     64 64 64      Three Spatial
    Super-oxide    64 64 64      Three Frequency
    Galaxy         100 100 80    Two Spatial and Light Wavelength
    Seismic        64 64 64      Three Spatial

Color plate 1 and color plate 2 each contain four images of the computed tomography data set used in the timing tests. The four images of color plate 1 are the first four steps in the refinement process. The upper left image is from the low resolution version of the data. The upper right image is from the middle resolution version of the data. The lower left image is from the full resolution version of the data. The lower right image is from the full resolution version of the data with full reconstruction. The upper two images of color plate 2 are also from the timing tests. The left image is with the diffuse portion of the shading model. The right image is with both the diffuse and specular portions of the shading model. All these images were generated with the same shading tables, chosen to bring out the skin surface. The bottom two images are other views of the same data set. The left image was made with tables that were chosen to display bone. In addition, the reflection table was loaded with green to emphasize the reflected light. The right image has the right part of the head clipped away and was generated with tables that emphasize surfaces by using the opacity modulation table. Note the detail in both the bone surfaces and the skin-to-air interface, including the nasal sinuses and throat regions.

Color plate 3 contains four images of electron density data. The upper left image is an image of the p-orbitals of copper chloride. Note the soft bonding between the lobes where the electrons are shared. In the upper right image of transfer RNA, the volume is colored based on gradient strength to bring out the atom boundaries. The lower left image is an image of a synthetic hemoglobin density map. The map was generated from known atom centers, with bonds added to the volume data in a separate process. Since electron density is directly related to atom type, and in the synthetic case these densities can be determined for each atom type, the shading tables were chosen to color the density by atom type: green for carbon, blue for nitrogen, red for oxygen, and yellow for sulfur. The lower right image is an image of the structure factors for super-oxide dismutase. The points were colored by phase angle and the opacity depends on the intensity.

Color plate 4 contains four images: two of a Fabry-Perot map of a galaxy and two of a synthetic seismic study. Both the top images and both the bottom images were generated with the same shading parameters and tables. The only difference between the right and left images in each case is a rotation.

ACKNOWLEDGEMENTS

I would like to thank Apple Computer Inc., Schlumberger-Doll Research, and Sun Microsystems for supporting this research. In addition, I would like to thank my advisor, Turner Whitted, for our numerous discussions and his helpful insight. I would also like to thank Mark Harris for his willingness to be a guinea pig on many versions of the system. The seismic data is courtesy of Schlumberger-Doll Research. The super-oxide dismutase data and the copper chloride p-orbital data are courtesy of Michael Pique of the Scripps Institute. The transfer RNA data and the hemoglobin data are courtesy of Frank Hage of the Biochemistry Department of the University of North Carolina at Chapel Hill. The computed tomography data is courtesy of Radiation Oncology of the University of North Carolina at Chapel Hill. The Fabry-Perot data is courtesy of Gerald Cecil of the Institute for Advanced Study at Princeton.

REFERENCES

Abram, G.D., L.A. Westover, J.T. Whitted, [1985] "Efficient Alias-Free Rendering Using Bit-Masks and Look-Up Tables", Computer Graphics, vol. 19, no. 3, July 1985.

Bergman, L., H. Fuchs, E. Grant, S. Spach, [1986] "Image Rendering by Adaptive Refinement", Computer Graphics, vol. 20, no. 4, August 1986.

Blinn, J.F., [1982] "Light Reflection Functions for Simulation of Clouds and Dusty Surfaces", Computer Graphics, vol. 16, no. 3, July 1982.

Brooks Jr., F.P., [1986] "Interactive Graphics and Supercomputer Debate", 1986 Workshop on Interactive 3D Graphics, October 1986.

Carpenter, L., [1984] "The A-buffer, an Antialiased Hidden Surface Method", Computer Graphics, vol. 18, no. 3, July 1984.

Drebin, R.A., L. Carpenter, P. Hanrahan, [1988] "Volume Rendering", Computer Graphics, vol. 22, no. 4, August 1988.

Fuchs, H., Z.M. Kedem, S.P. Uselton, [1977] "Optimal Surface Reconstruction from Planar Contours", Communications of the ACM, vol. 20, no. 10, October 1977.

Ganapathy, S., T.G. Dennehy, [1982] "A New General Triangulation Method for Planar Contours", Computer Graphics, vol. 16, no. 3, July 1982.

Greenberg, D.P., [1986] "Scientist Wants a Window into the Database", Panel on Graphics, Image Processing, and Workstations, October 1986.

Hibbard, W.L., [1986] "4-D Display of Meteorological Data", 1986 Workshop on Interactive 3D Graphics, October 1986.

Horn, B.K.P., [1981] "Hill Shading and the Reflectance Map", Proceedings of the IEEE, vol. 69, no. 1, January 1981.

Kajiya, J.T., B. Von Herzen, [1984] "Ray Tracing Volume Densities", Computer Graphics, vol. 18, no. 3, July 1984.

Lenz, R., B. Gudmundsson, B. Lindskog, P.E. Danielsson, [1986] "Display of Density Volumes", IEEE Computer Graphics and Applications, vol. 6, no. 7, July 1986.

Levoy, M.S., J.T. Whitted, [1985] "The Use of Points as a Display Primitive", Technical Report #85-022, University of North Carolina, Chapel Hill, NC, 1985.

Levoy, M.S., [1988] "Volume Rendering: Display of Surfaces from Volume Data", IEEE Computer Graphics and Applications, vol. 8, no. 3, May 1988.

Porter, T., T. Duff, [1984] "Compositing Digital Images", Computer Graphics, vol. 18, no. 3, July 1984.

Reeves, W.T., [1983] "Particle Systems -- A Technique for Modeling a Class of Fuzzy Objects", Computer Graphics, vol. 17, no. 3, July 1983.

Sabella, P., [1988] "A Rendering Algorithm for Visualizing 3D Scalar Data", Computer Graphics, vol. 22, no. 4, August 1988.

Upson, C., K. Keller, [1988] "V-BUFFER: Visible Volume Rendering", Computer Graphics, vol. 22, no. 4, August 1988.

VanHook, T., [1986] Personal communication, September 1986.

Williams, T.V., [1982] "A Man-Machine Interface for Interpreting Electron Density Maps", Ph.D. dissertation, University of North Carolina, Chapel Hill, NC, 1982.

Wright, W.V., [1972] "An Interactive Computer Graphics System for Molecular Studies", Ph.D. dissertation, University of North Carolina, Chapel Hill, NC, 1972.


Westover. Color Plate 1.

Westover. Color Plate 2.

Westover. Color Plate 3.

Westover. Color Plate 4.