High-Quality Volume Rendering Using Texture Mapping Hardware


Frank Dachille, Kevin Kreeger, Baoquan Chen, Ingmar Bitter and Arie Kaufman
Center for Visual Computing (CVC) and Department of Computer Science
State University of New York at Stony Brook, Stony Brook, NY 11794-4400, USA
[email protected] · http://www.cvc.sunysb.edu

Abstract

We present a method for volume rendering of regular grids which takes advantage of 3D texture mapping hardware currently available on graphics workstations. Our method produces accurate shading for arbitrary and dynamically changing directional lights, viewing parameters, and transfer functions. This is achieved by hardware interpolating the data values and gradients before software classification and shading. The method works equally well for parallel and perspective projections. We present two approaches for our method: one which takes advantage of software ray casting optimizations and another which takes advantage of hardware blending acceleration.

CR Categories: I.3.1 [Computer Graphics]: Hardware Architecture; I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Color, shading, shadowing, and texture

Keywords: volume rendering, shading, ray casting, texture mapping, solid texture, hardware acceleration, parallel rendering

1 Introduction

Volumetric data is pervasive in many areas such as medical diagnosis, geophysical analysis, and computational fluid dynamics. Visualization by interactive, high-quality volume rendering enhances the usefulness of this data. To date, many volume rendering methods have been proposed on general and special purpose hardware, but most fail to achieve reasonable cost-performance ratios. We propose a high-quality volume rendering method suitable for implementation on machines with 3D texture mapping hardware.

Akeley [1] first mentioned the possibility of accelerating volume rendering using 3D texture mapping hardware, specifically on the SGI Reality Engine. The method is to store the volume as a solid texture on the graphics hardware, then to sample the texture using planes parallel to the image plane and composite them into the frame buffer using the blending hardware. This approach considers only ambient light and quickly produces unshaded images. The images could be improved by volumetric shading, which implements a full lighting equation for each volume sample.

Cabral et al. [3] rendered 512×512×64 volumes into a 512×512 window (presumably with 64 sampling planes) in 0.1 seconds on a four Raster Manager SGI RealityEngine Onyx with one 150MHz CPU. Cullip and Neumann [4] also produced 512×512 images on the SGI RealityEngine (again presumably 64 sampling planes, since the volume is 128×128×64) in 0.1 seconds. All of these approaches keep time-critical computations inside the graphics pipeline at the expense of volumetric shading and image quality.

Van Gelder and Kim [6] proposed a method by which volumetric shading could be incorporated at the expense of interactivity. Their shaded renderings of 256×256×113 volumes into 600² images with 1000 samples along each ray took 13.4 seconds. Their method is slower than Cullip and Neumann's and Cabral et al.'s because they must re-shade the volume and reload the texture map for every frame, because the colors in the texture memory are view dependent.

Cullip and Neumann also described a method utilizing the PixelFlow machine which pre-computes the x, y and z gradient components and uses the texture mapping to interpolate the density data and the three gradient components. The latter is implemented partially in hardware and partially in software on the 128² SIMD pixel processors [5]. All four of these values are used to compute Phong shaded samples which are composited in the frame buffer. They predicted that a 256³ volume could be rendered at over 10Hz into a 640×512 image with 400 sample planes. Although this is the first proposed solution to implement full Phong lighting functionality, it has never been realized as far as we know, because it would require 43 processor cards, a number which can not easily fit into a standard workstation chassis [4].

Sommer et al. [13] described a method to render 128³ volumes at 400² resolution with 128 samples per ray in 2.71 seconds. They employ a full lighting equation by computing a smooth gradient from a second copy of the volume stored in main memory. Therefore, they do not have to reload the texture maps when viewing parameters change. However, this rendering rate is for isosurface extraction; if translucent projections are required, it takes 33.2 seconds for the same rendering. They were the first to propose to resample the texture volume in planes parallel to a row of image pixels so that a whole ray was in main memory at one time. They mention the potential to also interpolate gradients with the hardware.

All of these texture map based methods either non-interactively recompute direction-dependent shading each time any of the viewing parameters change, compute only direction-independent shading, or compute no shading at all. Our method shades every visible sample with view-dependent lighting at interactive rates.

We do not adapt the ray casting algorithm to fit within the existing graphics pipeline, which would compromise the image quality. Instead, we only utilize the hardware where it provides run time advantages, but maintain the integrity of the ray casting algorithm. For the portions of the volume rendering pipeline which can not be performed in graphics hardware (specifically shading) we use the CPU.

In volume rendering by ray casting, data values and gradients are estimated at evenly spaced intervals along rays emanating from pixels of the final image plane. Resampling these data values and gradients is often the most time consuming task in software implementations. The texture mapping hardware on high-end graphics workstations is designed to perform resampling of solid textures with very high throughput. We leverage this capability to implement high throughput density and gradient resampling.

Shading is the missing key in conventional texture map based volume rendering. This is one of the reasons that pure graphics hardware methods suffer from lower image quality than software implementations of ray casting. For high-quality images, our method implements full Phong shading using the estimated surface normal (gradient of the density). We pre-compute the estimated gradient of the density and store it in texture memory. We also pre-compute a lookup table (LUT) to store the effect of an arbitrary number of light sources using full Phong shading.

The final step in volume rendering is the compositing, or blending, of the color samples along each ray into a final image color. Most graphics systems have a frame buffer with an opacity channel and efficient blending hardware which can be used for back-to-front compositing.

Figure 1: Three architectures for texture map based volume rendering: (a) Our architecture, (b) Traditional architecture of Van Gelder and Kim, and (c) Ideal architecture of Van Gelder and Kim. The thick lines are the operations which must be performed for every frame.

In our architecture (Fig. 1(a)), the density and gradients are interpolated by the texture hardware along rays cast through the volume. The sample data for each ray (or slice) is then transferred to a buffer in main memory and shaded by the CPU. The shaded samples along a ray are composited and the final pixels are moved to the frame buffer for display. Alternatively, within the same architecture, the shaded voxels can be composited by the frame buffer.

Fig. 1(b) shows the architecture that is traditionally used in texture map based shaded volume rendering. One of the disadvantages of this architecture is that the volume must be re-shaded and re-loaded every time any of the viewing parameters changes. Another problem with this method is that RGB values are interpolated by the texture hardware. Therefore, when non-linear mappings from density to RGB are used, the interpolated samples are incorrect. We present a more detailed comparison of the various methods in Sec. 4.

In Fig. 1(c), Van Gelder and Kim's [6] Ideal architecture is presented. In this architecture, the raw density and volume gradients are loaded into the texture memory one time only. The density and gradients are then interpolated by the texture hardware and passed to a post-texturing LUT. The density values and gradients are used as an index into the LUT to get the RGB values for each sample. The LUT is based on the current view direction and can be created using any lighting model desired (e.g., Phong) for any level of desired image quality. This method solves the problems of the current architecture, including pre-shading the volume and interpolating RGB values. However, a post-texturing LUT would need to be indexed by the local gradient, which would require an infeasibly large LUT (see Sec. 2.2).

2.1 Sampling

Ray casting is an image-order algorithm, which has the drawback of multiple access of voxel data, since sampling
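The resampling step discussed above, which the texture hardware accelerates, is trilinear interpolation of the eight voxels surrounding each sample point. A minimal software sketch of that step (illustrative code under an assumed nested-list volume layout, not the paper's implementation):

```python
def trilinear_sample(volume, x, y, z):
    """Trilinearly interpolate volume[z][y][x] at a fractional position.

    Assumes 0 <= x, y, z < (size - 1) along each axis.
    """
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0

    def v(i, j, k):
        return volume[z0 + k][y0 + j][x0 + i]

    # interpolate along x on the four x-aligned edges of the cell ...
    c00 = v(0, 0, 0) * (1 - fx) + v(1, 0, 0) * fx
    c10 = v(0, 1, 0) * (1 - fx) + v(1, 1, 0) * fx
    c01 = v(0, 0, 1) * (1 - fx) + v(1, 0, 1) * fx
    c11 = v(0, 1, 1) * (1 - fx) + v(1, 1, 1) * fx
    # ... then along y ...
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    # ... then along z
    return c0 * (1 - fz) + c1 * fz
```

The texture hardware performs this same interpolation for entire planes of samples at a time, which is what makes hardware resampling so much faster than a software loop like this one.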
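The back-to-front compositing of color samples along a ray, whether done in software or by the frame buffer's blending hardware, is the standard "over" operator. A minimal sketch (a generic illustration, not code from the paper):

```python
def composite_back_to_front(samples):
    """Blend (r, g, b, alpha) samples ordered from farthest to nearest."""
    acc = [0.0, 0.0, 0.0]
    for r, g, b, a in samples:
        # "over" blend: the nearer sample partially covers what is behind it
        acc[0] = a * r + (1.0 - a) * acc[0]
        acc[1] = a * g + (1.0 - a) * acc[1]
        acc[2] = a * b + (1.0 - a) * acc[2]
    return tuple(acc)

# far red sample, then a nearer half-transparent green sample
composite_back_to_front([(1.0, 0.0, 0.0, 0.5), (0.0, 1.0, 0.0, 0.5)])
# → (0.25, 0.5, 0.0)
```

The equivalent front-to-back formulation allows early ray termination, the usual software ray casting optimization; the hardware path relies on the frame buffer's opacity channel and blending units instead.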
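Per-sample shading in such a pipeline uses the interpolated density gradient as a surface normal. A minimal sketch of one lighting evaluation, assuming a single directional light and a Blinn-Phong specular term (the coefficients and the function itself are illustrative, not the paper's LUT construction):

```python
import math

def phong_shade(gradient, light_dir, view_dir,
                ka=0.1, kd=0.6, ks=0.3, shininess=16.0):
    """Shade one sample, using the density gradient as the surface normal."""
    def normalize(v):
        n = math.sqrt(sum(c * c for c in v)) or 1.0  # guard zero-length vectors
        return tuple(c / n for c in v)

    N = normalize(gradient)
    L = normalize(light_dir)
    V = normalize(view_dir)
    ndotl = max(0.0, sum(a * b for a, b in zip(N, L)))
    # halfway vector for the specular term (Blinn-Phong form)
    H = normalize(tuple(l + v for l, v in zip(L, V)))
    ndoth = max(0.0, sum(a * b for a, b in zip(N, H)))
    return ka + kd * ndotl + ks * ndoth ** shininess
```

The paper instead pre-computes such evaluations into a lookup table, so that an arbitrary number of Phong-shaded lights costs one table lookup per sample rather than a full lighting evaluation.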