
High-Quality Volume Rendering Using Texture Mapping Hardware†

Frank Dachille, Kevin Kreeger, Baoquan Chen, Ingmar Bitter and Arie Kaufman‡

Center for Visual Computing (CVC) and Department of Computer Science
State University of New York at Stony Brook
Stony Brook, NY 11794-4400, USA

† {dachille|kkreeger|bao...}[email protected]
‡ http://www.cvc.sunysb.edu

Abstract

We present a method for volume rendering of regular grids which takes advantage of 3D texture mapping hardware currently available on graphics workstations. Our method produces accurate shading for arbitrary and dynamically changing directional lights, viewing parameters, and transfer functions. This is achieved by hardware interpolating the data values and gradients before software classification and shading. The method works equally well for parallel and perspective projections. We present two approaches for our method: one which takes advantage of software ray casting optimizations and another which takes advantage of hardware blending acceleration.

CR Categories: I.3.1 [Computer Graphics]: Hardware Architecture; I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism: Color, shading, shadowing, and texture

Keywords: volume rendering, shading, ray casting, texture mapping, solid texture, hardware acceleration, parallel rendering

1 Introduction

Volumetric data is pervasive in many areas such as medical diagnosis, geophysical analysis, and computational fluid dynamics. Visualization by interactive, high-quality volume rendering enhances the usefulness of this data. To date, many volume rendering methods have been proposed on general and special purpose hardware, but most fail to achieve reasonable cost-performance ratios. We propose a high-quality volume rendering method suitable for implementation on machines with 3D texture mapping hardware.

Akeley [1] first mentioned the possibility of accelerating volume rendering using 3D texture mapping hardware, specifically on the SGI RealityEngine. The method is to store the volume as a solid texture on the graphics hardware, then to sample the texture using planes parallel to the image plane and composite them into the frame buffer using the blending hardware. This approach considers only ambient light and quickly produces unshaded images. The images could be improved by volumetric shading, which implements a full lighting equation for each volume sample.

Cabral et al. [3] rendered 512x512x64 volumes into a 512x512 window (presumably with 64 sampling planes) in 0.1 seconds on a four Raster Manager SGI RealityEngine Onyx with one 150MHz CPU. Cullip and Neumann [4] also produced 512x512 images on the SGI RealityEngine (again presumably 64 sampling planes, since the volume is 128x128x64) in 0.1 seconds. All of these approaches keep time-critical computations inside the graphics pipeline at the expense of volumetric shading and image quality.

Van Gelder and Kim [6] proposed a method by which volumetric shading could be incorporated at the expense of interactivity. Their shaded renderings of 256x256x113 volumes into 600^2 images with 1000 samples along each ray took 13.4 seconds. Their method is slower than Cullip and Neumann's and Cabral et al.'s because they must re-shade the volume and reload the texture map for every frame, since the colors in the texture memory are view dependent.

Cullip and Neumann also described a method utilizing the PixelFlow machine which pre-computes the x, y and z gradient components and uses the texture mapping to interpolate the density data and the three gradient components. The latter is implemented partially in hardware and partially in software on the 128^2 SIMD pixel processors [5]. All four of these values are used to compute Phong shaded samples which are composited in the frame buffer. They predicted that a 256^3 volume could be rendered at over 10Hz into a 640x512 image with 400 sample planes. Although this is the first proposed solution to implement full Phong lighting functionality, it has never been realized as far as we know, because it would require 43 processor cards, a number which cannot easily fit into a standard workstation chassis [4].

Sommer et al. [13] described a method to render 128^3 volumes at 400^2 resolution with 128 samples per ray in 2.71 seconds. They employ a full lighting equation by computing a smooth gradient from a second copy of the volume stored in main memory. Therefore, they do not have to reload the texture maps when viewing parameters change. However, this rendering rate is for isosurface extraction; if translucent projections are required, it takes 33.2 seconds for the same rendering. They were the first to propose to resample the texture volume in planes parallel to a row of image pixels so that a whole ray was in main memory at one time. They mention the potential to also interpolate gradients with the hardware.

All of these texture map based methods either non-interactively recompute direction-dependent shading each time any of the viewing parameters change, compute only direction-independent shading, or compute no shading at all. Our method shades every visible sample with view-dependent lighting at interactive rates.

We do not adapt the ray casting algorithm to fit within the existing graphics pipeline, which would compromise the image quality. Instead, we only utilize the hardware where it provides run time advantages, but maintain the integrity of the ray casting algorithm. For the portions of the volume rendering pipeline which cannot be performed in graphics hardware (specifically shading) we use the CPU.

In volume rendering by ray casting, data values and gradients are estimated at evenly spaced intervals along rays emanating from pixels of the final image plane. Resampling these data values and gradients is often the most time consuming task in software implementations. The texture mapping hardware on high-end graphics workstations is designed to perform resampling of solid textures with very high throughput. We leverage this capability to implement high throughput density and gradient resampling.

Shading is the missing key in conventional texture map based volume rendering. This is one of the reasons that pure graphics hardware methods suffer from lower image quality than software implementations of ray casting. For high-quality images, our method implements full Phong shading using the estimated surface normal (gradient) of the density. We pre-compute the estimated gradient of the density and store it in texture memory. We also pre-compute a lookup table (LUT) to store the effect of an arbitrary number of light sources using full Phong shading.

The final step in volume rendering is the compositing, or blending, of the color samples along each ray into a final image color. Most graphics systems have a frame buffer with an opacity channel and efficient blending hardware which can be used for back-to-front compositing.

[Figure 1: Three architectures for texture map based volume rendering: (a) our architecture, (b) the traditional architecture of Van Gelder and Kim, and (c) the ideal architecture of Van Gelder and Kim. In each, data and gradients undergo classification (mapping to colors): in (a) by a software LUT in main memory on the CPU, in (b) in texture memory, and in (c) by a hardware post-texturing LUT. The thick lines are the operations which must be performed for every frame.]

In our architecture (Fig. 1a), the density and gradient volumes are resampled by the texture hardware along rays cast through the volume. The sample data for each ray (or slice) is then transferred to a buffer in main memory and shaded by the CPU. The shaded samples along a ray are composited and the final pixels are moved to the frame buffer for display. Alternatively, within the same architecture, the shaded voxels can be composited by the frame buffer.

Fig. 1b shows the architecture that is traditionally used in texture map based shaded volume rendering. One of the disadvantages of this architecture is that the volume must be re-shaded and re-loaded every time any of the viewing parameters changes. Another problem with this method is that RGB values are interpolated by the texture hardware. Therefore, when non-linear mappings from density to RGB are used, the interpolated samples are incorrect. We present a more detailed comparison of the various methods in Sec. 4.

In Fig. 1c, Van Gelder and Kim's [6] ideal architecture is presented. In this architecture, the raw density and volume gradients are loaded into the texture memory one time only. The density and gradients are then interpolated by the texture hardware and passed to a post-texturing LUT. The density values and gradients are used as an index into the LUT to get the RGB values for each sample. The LUT is based on the current view direction and can be created using any lighting model desired (e.g., Phong) for any level of desired image quality. This method solves the problems of the current architecture, including pre-shading the volume and interpolating RGB values. However, a post-texturing LUT would need to be indexed by the local gradient, which would require an infeasibly large LUT (see Sec. 2.2).

2.1 Sampling

Ray casting is an image-order algorithm, which has the drawback of multiple access of voxel data, since sampling
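The CPU-side shading path described above (classification via a transfer function, Phong shading via a pre-computed LUT, back-to-front compositing) can be sketched in NumPy. This is an illustrative sketch, not the authors' implementation: the function names, the spherical-coordinate quantization of the gradient direction, and the single-channel color are all simplifying assumptions made here for brevity.

```python
import numpy as np

def phong_lut(light_dir, view_dir, n_theta=32, n_phi=64,
              ka=0.1, kd=0.6, ks=0.3, shininess=20.0):
    """Scalar Blinn-Phong intensity for each quantized normal direction.

    Rebuilt only when the light or view direction changes, never per
    volume sample (hypothetical layout; the paper's LUT indexing differs)."""
    l = np.asarray(light_dir, float); l = l / np.linalg.norm(l)
    v = np.asarray(view_dir, float); v = v / np.linalg.norm(v)
    h = l + v; h = h / np.linalg.norm(h)                  # half vector
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta  # polar angle
    phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi    # azimuth
    st, ct = np.sin(theta)[:, None], np.cos(theta)[:, None]
    n = np.stack([st * np.cos(phi), st * np.sin(phi),
                  np.broadcast_to(ct, (n_theta, n_phi))], axis=-1)
    return (ka + kd * np.clip(n @ l, 0, None)
               + ks * np.clip(n @ h, 0, None) ** shininess)

def composite_ray(densities, gradients, lut, transfer):
    """Classify, shade, and composite samples back-to-front along one ray."""
    color, alpha_acc = 0.0, 0.0
    n_theta, n_phi = lut.shape
    for d, g in zip(reversed(densities), reversed(gradients)):
        r, a = transfer(d)                       # classification
        g = np.asarray(g, float)
        norm = np.linalg.norm(g)
        if norm > 0:                             # shade via LUT lookup
            g = g / norm
            theta = np.arccos(np.clip(g[2], -1.0, 1.0))
            phi = np.arctan2(g[1], g[0]) % (2 * np.pi)
            ti = min(int(theta / np.pi * n_theta), n_theta - 1)
            pi_idx = min(int(phi / (2 * np.pi) * n_phi), n_phi - 1)
            r *= lut[ti, pi_idx]
        color = r * a + color * (1.0 - a)        # back-to-front blend
        alpha_acc = a + alpha_acc * (1.0 - a)
    return color, alpha_acc
```

Because the LUT depends only on the light and view directions, a view change costs one small table rebuild rather than a re-shade and reload of the whole volume, which is the advantage claimed over the architecture of Fig. 1b.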