High-Quality Volume Rendering Using Texture Mapping Hardware
Recommended publications
Stardust: Accessible and Transparent GPU Support for Information Visualization Rendering
Eurographics Conference on Visualization (EuroVis) 2017, Volume 36 (2017), Number 3. J. Heer, T. Ropinski and J. van Wijk (Guest Editors). Donghao Ren (University of California, Santa Barbara, United States), Bongshin Lee (Microsoft Research, Redmond, United States), and Tobias Höllerer (University of California, Santa Barbara, United States).

Abstract: Web-based visualization libraries are in wide use, but performance bottlenecks occur when rendering, and especially animating, a large number of graphical marks. While GPU-based rendering can drastically improve performance, that paradigm has a steep learning curve, usually requiring expertise in the computer graphics pipeline and shader programming. In addition, the recent growth of virtual and augmented reality poses a challenge for supporting multiple display environments beyond regular canvases, such as a Head-Mounted Display (HMD) and a Cave Automatic Virtual Environment (CAVE). In this paper, we introduce a new web-based visualization library called Stardust, which provides a familiar API while leveraging the GPU's processing power. Stardust also enables developers to create both 2D and 3D visualizations for diverse display environments using a uniform API. To demonstrate Stardust's expressiveness and portability, we present five example visualizations and a coding playground for four display environments. We also evaluate its performance by comparing it against the standard HTML5 Canvas, D3, and Vega.

Categories and Subject Descriptors (according to ACM CCS): …
Volume Rendering
1.1. Introduction

Rapid advances in hardware have been transforming revolutionary approaches in computer graphics into reality. One typical example is raster graphics, which took hold in the seventies, when hardware innovations enabled the transition from vector graphics to raster graphics. Another example with similar potential is currently shaping up in the field of volume graphics. This trend is rooted in the extensive research and development effort in scientific visualization in general and in volume visualization in particular.

Visualization is the use of computer-supported, interactive, visual representations of data to amplify cognition. Scientific visualization is the visualization of physically based data. Volume visualization is a method of extracting meaningful information from volumetric datasets through the use of interactive graphics and imaging; it is concerned with the representation, manipulation, and rendering of volumetric datasets. Its objective is to provide mechanisms for peering inside volumetric datasets and to enhance visual understanding.

Traditional 3D graphics is based on surface representation. The most common form is polygon-based surfaces, for which affordable special-purpose rendering hardware has been developed in recent years. Volume graphics has the potential to greatly advance the field of 3D graphics by offering a comprehensive alternative to conventional surface representation methods. The object of this thesis is to examine the existing methods for volume visualization and to find a way of efficiently rendering scientific data with commercially available hardware, such as PCs, without requiring dedicated systems.

1.2. Volume Rendering

Our display screens are composed of a two-dimensional array of pixels, each representing a unit area.
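Not part of the thesis excerpt above, but as a concrete illustration of what direct volume rendering computes per pixel: a minimal Python/NumPy sketch of front-to-back emission-absorption compositing along one viewing ray. The sample values, transfer function, and constants are hypothetical.

```python
import numpy as np

def composite_ray(samples, transfer_function, step_alpha_scale=1.0):
    """Front-to-back emission-absorption compositing along one viewing ray.

    samples           -- 1D array of scalar values sampled along the ray
    transfer_function -- maps a scalar value to (rgb, alpha)
    """
    color = np.zeros(3)   # accumulated RGB
    alpha = 0.0           # accumulated opacity
    for s in samples:
        rgb, a = transfer_function(s)
        a *= step_alpha_scale
        # "over" operator, front-to-back form
        color += (1.0 - alpha) * a * np.asarray(rgb)
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:   # early ray termination
            break
    return color, alpha

# Hypothetical transfer function: denser samples become redder and more opaque
tf = lambda s: ((s, 0.5 * s, 1.0 - s), 0.1 * s)
print(composite_ray(np.linspace(0.0, 1.0, 64), tf))
```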
Efficiently Using Graphics Hardware in Volume Rendering Applications
Rüdiger Westermann, Thomas Ertl. Computer Graphics Group, Universität Erlangen-Nürnberg, Germany.

Abstract: OpenGL and its extensions provide access to advanced per-pixel operations available in the rasterization stage and in the frame buffer hardware of modern graphics workstations. With these mechanisms, completely new rendering algorithms can be designed and implemented in a very particular way. In this paper we extend the idea of extensively using graphics hardware for the rendering of volumetric data sets in various ways. First, we introduce the concept of clipping geometries by means of stencil buffer operations, and we exploit pixel textures for the mapping of volume data to spherical domains. We show ways to use 3D textures for the rendering of lighted and shaded iso-surfaces in real-time without extracting any polygonal representation.

In this paper we are dealing with the efficient generation of a visual representation of the information present in volumetric data sets. For scalar-valued volume data two standard techniques, the rendering of iso-surfaces, and the direct volume rendering, have been developed to a high degree of sophistication. However, due to the huge number of volume cells which have to be processed and to the variety of different cell types only a few approaches allow parameter modifications and navigation at interactive rates for realistically sized data sets. To overcome these limitations we provide a basis for hardware accelerated interactive visualization of both iso-surfaces and direct volume rendering on arbitrary topologies. Direct volume rendering tries to convey a visual impression of the complete 3D data set by taking into account the emission and …
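The abstract above describes rendering volumes through the graphics hardware's texturing and frame-buffer stages. As a hedged, CPU-side sketch of the underlying object-order idea (not the paper's OpenGL implementation), the following composites axis-aligned slices of a scalar volume back-to-front with the "over" operator, which is the operation 3D-texture slicing performs per fragment. The volume and transfer function are made up for illustration.

```python
import numpy as np

def render_slices_back_to_front(volume, transfer_function):
    """Object-order volume rendering: composite axis-aligned slices.

    volume            -- 3D array of scalars in [0, 1], indexed [z, y, x]
    transfer_function -- vectorized map from a scalar slice to (rgb, alpha) slices
    Mimics on the CPU what 3D-texture slicing does on graphics hardware.
    """
    h, w = volume.shape[1], volume.shape[2]
    image = np.zeros((h, w, 3))
    for z in reversed(range(volume.shape[0])):      # far-to-near along the view axis
        rgb, a = transfer_function(volume[z])
        a = a[..., None]
        image = rgb * a + image * (1.0 - a)         # "over" operator, back-to-front
    return image

# Hypothetical data: a soft spherical blob in a 32^3 grid
g = np.linspace(-1, 1, 32)
z, y, x = np.meshgrid(g, g, g, indexing="ij")
vol = np.clip(1.0 - np.sqrt(x**2 + y**2 + z**2), 0.0, 1.0)
tf = lambda s: (np.stack([s, s * 0.6, 1.0 - s], axis=-1), 0.05 * s)
print(render_slices_back_to_front(vol, tf).shape)   # (32, 32, 3)
```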
Rendering of Feature-Rich Dynamically Changing Volumetric Datasets on GPU
Procedia Computer Science, Volume 29, 2014, Pages 648–658. ICCS 2014, 14th International Conference on Computational Science. Martin Schreiber, Atanas Atanasov, Philipp Neumann, and Hans-Joachim Bungartz. Technische Universität München, Munich, Germany.

Abstract: Interactive photo-realistic representation of dynamic liquid volumes is a challenging task for today's GPUs and state-of-the-art visualization algorithms. Methods of the last two decades consider either static volumetric datasets, applying several optimizations for volume casting, or dynamic volumetric datasets with rough approximations to realistic rendering. Nevertheless, accurate real-time visualization of dynamic datasets is crucial in areas of scientific visualization as well as areas demanding accurate rendering of feature-rich datasets. An accurate and thus realistic visualization of such datasets leads to new challenges: due to restrictions given by computational performance, the datasets may be relatively small compared to the screen resolution, and thus each voxel has to be rendered highly oversampled. With our volumetric datasets based on a real-time lattice Boltzmann fluid simulation creating dynamic cavities and small droplets, existing real-time implementations are not applicable for a realistic surface extraction. This work presents a volume tracing algorithm capable of producing multiple refractions which is also robust to small droplets and cavities. Furthermore we show advantages of our volume tracing algorithm compared to other implementations.

Keywords:

1 Introduction

The photo-realistic real-time rendering of dynamic liquids, represented by free surface flows and volumetric datasets, has been studied in numerous works [13, 9, 10, 14, 18].
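The abstract mentions a volume tracing algorithm that produces multiple refractions. The paper's algorithm is not reproduced here; the sketch below only illustrates the refraction step any such tracer needs, computing the transmitted ray direction from Snell's law and detecting total internal reflection. The function name and example values are assumptions.

```python
import numpy as np

def refract(direction, normal, eta):
    """Refracted direction for a unit incident ray crossing an interface.

    direction -- unit incident direction (pointing into the surface)
    normal    -- unit surface normal (pointing against the incident ray)
    eta       -- ratio of refractive indices n_incident / n_transmitted
    Returns None on total internal reflection.
    """
    cos_i = -np.dot(normal, direction)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                       # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * direction + (eta * cos_i - cos_t) * normal

# Example: ray entering water (n ~ 1.33) from air at 45 degrees
d = np.array([np.sqrt(0.5), -np.sqrt(0.5), 0.0])
n = np.array([0.0, 1.0, 0.0])
print(refract(d, n, 1.0 / 1.33))
```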
A Survey of Algorithms for Volume Visualization
T. Todd Elvins, Advanced Scientific Visualization Laboratory, San Diego Supercomputer Center. "... in 10 years, all rendering will be volume rendering." (Jim Kajiya at SIGGRAPH '91)

Many computer graphics programmers are working in the area of scientific visualization. One of the most interesting and fast-growing areas in scientific visualization is volume visualization. Volume visualization systems are used to create high-quality images from scalar and vector datasets defined on multi-dimensional grids, usually for the purpose of gaining insight into a scientific problem. Most volume visualization techniques are based on one of about five foundation algorithms. These algorithms, and the background necessary to understand them, are described here. Pointers to more detailed descriptions, further reading, and advanced techniques are also given.

Introduction

The following is an introduction to the fast-growing field of volume visualization for the computer graphics programmer. Many computer graphics techniques are used in volume visualization. … is given in [Fren89]. Advanced topics in medical volume visualization are covered in [Hohn90][Levo90c].

Furthering scientific insight

This section introduces the reader to the field of volume visualization as a subfield of scientific visualization and discusses many of the current research areas in both.

Challenges in scientific visualization

Scientific visualization uses computer graphics techniques to help give scientists insight into their data [McCo87] [Brod91]. Insight is usually achieved by extracting scientifically meaningful information from numerical descriptions of complex phenomena through the use of interactive imaging systems. Scientists need these systems not only for their own insight, but also to share their results with their colleagues, the institutions that support the …
OpenGL 4.0 Shading Language Cookbook
Over 60 highly focused, practical recipes to maximize your use of the OpenGL Shading Language. David Wolff. BIRMINGHAM - MUMBAI.

Copyright © 2011 Packt Publishing. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews. Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book. Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: July 2011. Production Reference: 1180711. Published by Packt Publishing Ltd., 32 Lincoln Road, Olton, Birmingham, B27 6PA, UK. ISBN 978-1-849514-76-7. www.packtpub.com. Cover Image by Fillipo ([email protected]).

Credits. Author: David Wolff. Reviewers: Martin Christen, Nicolas Delalondre, Markus Pabst, Brandon Whitley. Acquisition Editor: Usha Iyer. Development Editor: Chris Rodrigues. Technical Editors: Kavita Iyer, Azharuddin Sheikh. Copy Editor: Neha Shetty. Project Coordinator: Srimoyee Ghoshal. Proofreader: Bernadette Watkins. Indexer: Hemangini Bari. Graphics: Nilesh Mohite, Valentina J. D'silva. Production Coordinators: Kruthika Bangera, Adline Swetha Jesuthas. Cover Work: Kruthika Bangera.

About the Author: David Wolff is an associate professor in the Computer Science and Computer Engineering Department at Pacific Lutheran University (PLU).
Ray Casting Architectures for Volume Visualization
Citation: Ray, Harvey, Hanspeter Pfister, Deborah Silver, and Todd A. Cook. 1999. Ray casting architectures for volume visualization. IEEE Transactions on Visualization and Computer Graphics 5(3): 210-223. Published Version: doi:10.1109/2945.795213. Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:4138553

Harvey Ray, Hanspeter Pfister, Deborah Silver, Todd A. Cook.

Abstract: Real-time visualization of large volume datasets demands high performance computation, pushing the storage, processing, and data communication requirements to the limits of current technology. General purpose parallel processors have been used to visualize moderate size datasets at interactive frame rates; however, the cost and size of these supercomputers inhibits the widespread use for real-time visualization. This paper surveys several special purpose architectures that seek to render volumes at interactive rates. These specialized visualization accelerators have cost, performance, and size advantages over parallel processors. All architectures implement ray casting using parallel and pipelined hardware. We introduce a new metric that normalizes performance to compare these architectures.
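All of the surveyed accelerators implement ray casting in hardware. Purely as an illustration of the traversal they parallelize, and not a description of any surveyed architecture, here is a hedged sketch of a software 3D DDA in the style of Amanatides and Woo that enumerates the voxels a ray visits in a regular grid. Names and parameters are hypothetical.

```python
import numpy as np

def traverse_voxels(origin, direction, grid_shape, max_steps=256):
    """Enumerate the voxels a ray visits inside a unit-spaced grid (3D DDA).

    origin     -- ray start, assumed to lie inside the grid
    direction  -- ray direction (need not be normalized; no zero components here)
    grid_shape -- (nx, ny, nz) number of voxels per axis
    Yields integer voxel coordinates in visiting order.
    """
    voxel = np.floor(origin).astype(int)
    step = np.where(direction > 0, 1, -1)
    # parametric distance to the first boundary crossing on each axis
    next_boundary = voxel + (step > 0)
    t_max = (next_boundary - origin) / direction
    t_delta = np.abs(1.0 / direction)          # distance between crossings per axis

    for _ in range(max_steps):
        if np.any(voxel < 0) or np.any(voxel >= grid_shape):
            return
        yield tuple(int(c) for c in voxel)
        axis = int(np.argmin(t_max))            # axis of the nearest boundary
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]

# Example: march a ray diagonally through an 8^3 grid
for v in traverse_voxels(np.array([0.5, 0.5, 0.5]),
                         np.array([1.0, 0.7, 0.3]), (8, 8, 8)):
    print(v)
```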
Statistically Quantitative Volume Visualization
Joe M. Kniss (University of Utah), Robert Van Uitert (National Institutes of Health), Abraham Stephens (University of Utah), Guo-Shi Li (University of Utah), Tolga Tasdizen (University of Utah), Charles Hansen (University of Utah).

Abstract: Visualization users are increasingly in need of techniques for assessing quantitative uncertainty and error in the images produced. Statistical segmentation algorithms compute these quantitative results, yet volume rendering tools typically produce only qualitative imagery via transfer function-based classification. This paper presents a visualization technique that allows users to interactively explore the uncertainty, risk, and probabilistic decision of surface boundaries. Our approach makes it possible to directly visualize the combined "fuzzy" classification results from multiple segmentations by combining these data into a unified probabilistic data space. We represent this unified space, the combination of scalar volumes from numerous segmentations, using a novel graph-based dimensionality reduction scheme. The scheme both dramatically reduces the dataset size and is suitable for efficient, high quality, quantitative visualization. Lastly, we show that the statistical risk arising from overlapping segmentations is a robust measure for visualizing features and assigning optical properties.

Figure 1 (A: Transfer Function-based Classification; B: Unsupervised Probabilistic Classification): A comparison of transfer function-based classification versus data-specific probabilistic classification. Both images are based on T1 MRI scans of a human head and show fuzzy classified white-matter, gray-matter, and cerebro-spinal fluid. Subfigure A shows the results of classification using a carefully designed 2D transfer function based on data value and gradient magnitude. Subfigure B shows a visualization of the data classified using a fully automatic, atlas-based method that infers class statistics using minimum entropy …
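The caption above contrasts a hand-designed 2D transfer function over data value and gradient magnitude with automatic probabilistic classification. The sketch below is not the paper's method; it only shows, under assumed names and table contents, how such a 2D transfer-function lookup assigns RGBA optical properties to a voxel.

```python
import numpy as np

def classify_2d_tf(value, grad_mag, tf_table, value_range, grad_range):
    """Look up (r, g, b, a) for a voxel in a 2D transfer-function table.

    value, grad_mag -- scalar value and gradient magnitude at the voxel
    tf_table        -- array of shape (nv, ng, 4) holding RGBA entries
    value_range     -- (min, max) scalar values covered by the table
    grad_range      -- (min, max) gradient magnitudes covered by the table
    """
    nv, ng, _ = tf_table.shape
    i = int(np.clip((value - value_range[0]) / (value_range[1] - value_range[0]) * (nv - 1), 0, nv - 1))
    j = int(np.clip((grad_mag - grad_range[0]) / (grad_range[1] - grad_range[0]) * (ng - 1), 0, ng - 1))
    return tf_table[i, j]

# Hypothetical table: opacity grows with gradient magnitude (emphasizes boundaries)
nv, ng = 64, 64
table = np.zeros((nv, ng, 4))
table[..., 0] = np.linspace(0, 1, nv)[:, None]        # red encodes data value
table[..., 3] = np.linspace(0, 1, ng)[None, :] ** 2   # alpha encodes boundary strength
print(classify_2d_tf(0.7, 0.9, table, (0.0, 1.0), (0.0, 1.0)))
```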
Phong Shading
Computer Graphics: Shading. Based on slides by Dianna Xu, Bryn Mawr College.

Image Synthesis and Shading

Perception of 3D Objects
• Displays are almost always 2-dimensional.
• Depth cues are needed to restore the third dimension.
• Need to portray planar, curved, textured, translucent, etc. surfaces.
• Model light and shadow.

Depth Cues
• Eliminate hidden parts (lines or surfaces).
• "Wire-frame": front or back? "Opaque object": convex or concave?

Why we need shading
• Suppose we build a model of a sphere using many polygons and color it with glColor. We get something like …
• But we want …
• Shading implies curvature.

Shading Motivation
• Originated in trying to give polygonal models the appearance of smooth curvature.
• Numerous shading models:
– Quick and dirty
– Physics-based
– Specific techniques for particular effects
– Non-photorealistic techniques (pen and ink, brushes, etching)

Shading
• Why does the image of a real sphere look like …?
• Light-material interactions cause each point to have a different color or shade.
• Need to consider:
– Light sources
– Material properties
– Location of viewer
– Surface orientation

Wireframe: Color, no Substance

Substance but no Surfaces

Why the Surface Normal is Important

Scattering
• Light strikes A: some is scattered, some is absorbed.
• Some of the scattered light strikes B: some is scattered, some is absorbed.
• Some of this scattered light strikes A, and so on.

Rendering Equation
• The infinite scattering and absorption of light can be described by the rendering equation.
– Cannot be solved in general.
– Ray tracing is a special case for …
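The slides list the ingredients a shading model must combine: light source, material, viewer position, and surface orientation. As a concrete example (not taken from the slides; all constants are assumptions), a short sketch of the classic Phong lighting model with ambient, diffuse, and specular terms for a single point light:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(point, normal, eye, light_pos, light_color,
          k_ambient=0.1, k_diffuse=0.7, k_specular=0.5, shininess=32.0):
    """Phong lighting at a surface point for one point light.

    Combines ambient, diffuse (Lambert), and specular (mirror-reflection) terms.
    """
    n = normalize(normal)
    l = normalize(light_pos - point)            # direction to the light
    v = normalize(eye - point)                  # direction to the viewer
    r = normalize(2.0 * np.dot(n, l) * n - l)   # mirror reflection of l about n
    diffuse = max(np.dot(n, l), 0.0)
    specular = max(np.dot(r, v), 0.0) ** shininess if diffuse > 0.0 else 0.0
    return light_color * (k_ambient + k_diffuse * diffuse + k_specular * specular)

# Example: white light above and to the right of a point on the z = 0 plane
print(phong(point=np.array([0.0, 0.0, 0.0]),
            normal=np.array([0.0, 0.0, 1.0]),
            eye=np.array([0.0, 0.0, 5.0]),
            light_pos=np.array([2.0, 2.0, 4.0]),
            light_color=np.array([1.0, 1.0, 1.0])))
```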
CS488/688 Glossary
University of Waterloo, Department of Computer Science, Computer Graphics Lab. August 31, 2017. This glossary defines terms in the context which they will be used throughout CS488/688.

1 A

1.1 affine combination: Let P1 and P2 be two points in an affine space. The point Q = t P1 + (1 − t) P2 with t real is an affine combination of P1 and P2. In general, given n points {Pi} and n real values {λi} such that ∑i λi = 1, then R = ∑i λi Pi is an affine combination of the Pi.

1.2 affine space: A geometric space composed of points and vectors along with all transformations that preserve affine combinations.

1.3 aliasing: If a signal is sampled at a rate less than twice its highest frequency (in the Fourier transform sense) then aliasing, the mapping of high frequencies to lower frequencies, can occur. This can cause objectionable visual artifacts to appear in the image.

1.4 alpha blending: See compositing.

1.5 ambient reflection: A constant term in the Phong lighting model to account for light which has been scattered so many times that its directionality can not be determined. This is a rather poor approximation to the complex issue of global lighting.

1.6 antialiasing: Aliasing artifacts can be alleviated if the signal is filtered before sampling. Antialiasing involves evaluating a possibly weighted integral of the (geometric) image over the area surrounding each pixel. This can be done either numerically (based on multiple point samples) or analytically.
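Not part of the glossary: a small numeric illustration of entry 1.1. It computes an affine combination and checks the property the weight constraint buys, namely that translating every input point translates the result by the same amount, so the combination is independent of the chosen origin. All names are hypothetical.

```python
import numpy as np

def affine_combination(points, weights):
    """R = sum_i weights[i] * points[i]; the weights must sum to 1."""
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "not an affine combination"
    return np.asarray(points, dtype=float).T @ weights

P = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])   # three points in the plane
lam = [0.5, 0.25, 0.25]

R = affine_combination(P, lam)
print(R)                                  # [1. 1.]

# Because the weights sum to 1, translating every point translates R the same way,
# so the combination is well defined regardless of where the origin is placed.
t = np.array([10.0, -3.0])
print(affine_combination(P + t, lam))     # equals R + t
```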
Fake Phong Shading
by Daniel Vlasic. Submitted to the Department of Electrical Engineering and Computer Science in Partial Fulfillment of the Requirements for the Degrees of Bachelor of Science in Computer Science and Engineering and Master of Engineering in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, May 17, 2002. Copyright 2002 M.I.T. All Rights Reserved.

Author: Department of Electrical Engineering and Computer Science, May 17, 2002. Approved by: Leonard McMillan, Thesis Supervisor. Accepted by: Arthur C. Smith, Chairman, Department Committee on Graduate Theses.

Abstract: In the real-time 3D graphics pipeline framework, rendering quality greatly depends on illumination and shading models. The highest-quality shading method in this framework is Phong shading. However, due to the computational complexity of Phong shading, current graphics hardware implementations use a simpler Gouraud shading. Today, programmable hardware shaders are becoming available, and, although real-time Phong shading is still not possible, there is no reason not to improve on Gouraud shading. This thesis analyzes four different methods for approximating Phong shading: quadratic shading, environment map, Blinn map, and quadratic Blinn map. Quadratic shading uses quadratic interpolation of color. Environment and Blinn maps use texture mapping. Finally, quadratic Blinn map combines both approaches, and quadratically interpolates texture coordinates. All four methods adequately render higher-resolution methods.
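The abstract states that quadratic shading replaces Gouraud's linear color interpolation with quadratic interpolation across each triangle. The thesis' implementation is not reproduced here; the sketch below merely illustrates one way to do quadratic interpolation, evaluating lighting at the three vertices and three edge midpoints and blending them with quadratic barycentric (P2 Lagrange) basis functions. Sample values are invented.

```python
import numpy as np

def quadratic_interpolate(bary, corner_vals, midpoint_vals):
    """Quadratic interpolation over a triangle (P2 Lagrange basis).

    bary          -- barycentric coordinates (b0, b1, b2), summing to 1
    corner_vals   -- values (e.g. shaded colors) at the three vertices
    midpoint_vals -- values at the midpoints of edges (0,1), (1,2), (2,0)
    """
    b0, b1, b2 = bary
    corner_basis = np.array([b0 * (2 * b0 - 1), b1 * (2 * b1 - 1), b2 * (2 * b2 - 1)])
    edge_basis = np.array([4 * b0 * b1, 4 * b1 * b2, 4 * b2 * b0])
    vals = np.vstack([corner_vals, midpoint_vals])       # shape (6, channels)
    basis = np.concatenate([corner_basis, edge_basis])   # shape (6,)
    return basis @ vals

# Hypothetical per-sample intensities, e.g. produced by a Phong evaluation
corners = np.array([[0.2], [0.3], [0.25]])
midpoints = np.array([[0.9], [0.4], [0.5]])   # a highlight near the first edge
print(quadratic_interpolate((1/3, 1/3, 1/3), corners, midpoints))
```

Unlike linear (Gouraud) interpolation, the midpoint samples let a specular highlight that peaks inside an edge survive interpolation instead of being averaged away at the vertices.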
Ray Tracing Height Fields
Huamin Qu, Feng Qiu, Nan Zhang, Arie Kaufman, Ming Wan†. Center for Visual Computing (CVC) and Department of Computer Science, State University of New York at Stony Brook, Stony Brook, NY 11794-4400. †The Boeing Company, P.O. Box 3707, M/C 7L-40, Seattle, WA 98124-2207.

Abstract: We present a novel surface reconstruction algorithm which can directly reconstruct surfaces with different levels of smoothness in one framework from height fields using 3D discrete grid ray tracing. Our algorithm exploits the 2.5D nature of the elevation data and the regularity of the rectangular grid from which the height field surface is sampled. Based on this reconstruction method, we also develop a hybrid rendering method which has the features of both rasterization and ray tracing. This hybrid method is designed to take advantage of GPUs' newly available flexibility and processing power.

…bility, make the ray tracing approach a promising alternative for rasterization approach when the size of the terrain is very large [13]. More importantly, ray tracing provides more flexibility than hardware rendering. For example, ray tracing allows us to operate directly on the image/Z-buffer to render special effects such as terrain with underground bunkers, terrain with shadows, and flythrough with a fish-eye view. In addition, it is easy to incorporate clouds, haze, flames, and other amorphous phenomena into the scene by ray tracing. Ray tracing can also be used to fill in holes in image-based rendering. Most ray tracing height field papers [2, 4, 8, 10] focused on fast ray traversal algorithms and antialiasing methods.
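The abstract builds on ray tracing height fields. As a generic illustration rather than the paper's reconstruction or traversal algorithm, the sketch below marches a ray above a 2D elevation grid with uniform steps and reports the first point where the ray falls below the terrain; real height-field ray casters use an incremental 2D grid walk instead. The terrain, step size, and names are assumptions.

```python
import numpy as np

def ray_heightfield_hit(origin, direction, heights, step=0.25, max_t=1000.0):
    """March a ray above a height field and return the first hit point, or None.

    origin, direction -- 3D ray (x, y, elevation); direction need not be unit length
    heights           -- 2D array where heights[iy, ix] is the elevation at (ix, iy)
    Uses uniform stepping for clarity; production systems use a 2D grid traversal.
    """
    ny, nx = heights.shape
    t = 0.0
    while t < max_t:
        p = origin + t * np.asarray(direction)
        ix, iy = int(p[0]), int(p[1])
        if not (0 <= ix < nx and 0 <= iy < ny):
            return None                    # ray left the terrain extent
        if p[2] <= heights[iy, ix]:
            return p                       # ray dropped below the surface
        t += step
    return None

# Hypothetical terrain: a gentle ridge on a 64x64 grid
yy, xx = np.mgrid[0:64, 0:64]
terrain = 5.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 200.0)
print(ray_heightfield_hit(np.array([0.0, 0.0, 8.0]),
                          np.array([1.0, 1.0, -0.1]), terrain))
```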