Out-of-Core Multi-Resolution Volume Rendering of Large Data Sets
Fredrik Lundell

Total Pages: 16

File Type: PDF, Size: 1020 kB

Out-of-Core Multi-Resolution Volume Rendering of Large Data Sets
LiU-ITN-TEK-A--11/038--SE
Fredrik Lundell
Master's thesis in media technology, carried out at the Institute of Technology, Linköping University
Examiner: Karljohan Lundin Palmerius
Department of Science and Technology, Linköping University, SE-601 74 Norrköping, Sweden
Norrköping, 2011-06-10

Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances. The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for their own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its home page: http://www.ep.liu.se/

© Fredrik Lundell

Abstract

A modality device can today capture high-resolution volumetric data sets, and as data resolutions increase, so do the challenges of processing volumetric data through a visualization pipeline. Standard volume rendering pipelines often use a graphics processing unit (GPU) to accelerate rendering by taking advantage of the parallel architecture of such devices. Unfortunately, graphics cards have limited amounts of video memory (VRAM), causing a bottleneck in a standard pipeline. Multi-resolution techniques can be used to modify the rendering pipeline efficiently, allowing each sub-domain within the volume to be represented at a different resolution.
The active resolution distribution is temporarily stored in VRAM for rendering, while the inactive parts are stored in secondary memory layers such as system RAM or disk. The active resolution set can be optimized to produce high-quality renderings while minimizing the amount of storage required. This is done using a dynamic compression scheme that optimizes visual quality by evaluating user-input data. The optimized resolution of each sub-domain is then streamed on demand to the VRAM from the secondary memory layers. Rendering a multi-resolution data set requires some extra care at the boundaries between sub-domains. To avoid artifacts, an intrablock interpolation (II) sampling scheme capable of creating smooth transitions between sub-domains at arbitrary resolutions can be used. The result is a highly optimized rendering pipeline, complemented by a preprocessing pipeline, together capable of rendering large volumetric data in real time.

Acknowledgements

I especially want to thank my supervisors Daniel Jönsson and Erik Sundén for their support during this project. Special thanks to my examiner Karljohan E. Lundin Palmerius for valuable feedback and to the Voreen community for answering my questions. I would also like to thank my family, friends and girlfriend for supporting me through all these years.

Contents

1 Introduction
  1.1 Motivation
  1.2 Aim
2 Concepts of Direct Volume Rendering
  2.1 Volumetric Data
    2.1.1 Volumetric Data Acquisition
    2.1.2 Volumetric Data Representation
  2.2 Transfer Functions
  2.3 Direct Volume Rendering
    2.3.1 Volume Rendering Integral
    2.3.2 Volume Ray Casting
  2.4 Large Data Sets
    2.4.1 Static Data Reduction
    2.4.2 Dynamic Data Reduction
    2.4.3 Distortion Metrics
  2.5 Graphics Processing Unit
  2.6 GPU-Based Ray Casting
  2.7 OpenCL
3 Out-of-Core Streaming and Rendering of Multi-Resolution Volumetric Data
  3.1 Data Preprocessing and Analysis
    3.1.1 Multi-Resolution Blocking
    3.1.2 Approximating Density Distribution Histograms
    3.1.3 Error Estimation
  3.2 Level-of-Detail Management
    3.2.1 View-Dependent Scheme
    3.2.2 Transfer Function Based Scheme
  3.3 Out-of-Core Data Management
    3.3.1 Multi-Threaded Data Stream
    3.3.2 Data Stream Optimization
  3.4 Mixed-Resolution Texture Packing
    3.4.1 Dynamic Updates
  3.5 Pipeline Overview
  3.6 Multi-Resolution Volume Rendering
    3.6.1 Multi-Resolution Raycasting
4 Implementation Details
  4.1 Voreen Framework Integration
    4.1.1 Preprocessing
    4.1.2 Rendering
  4.2 Qt Threading
  4.3 OpenCL Ray-Casting
5 Results
  5.1 Test Data
  5.2 Test System
  5.3 Preprocessing
  5.4 Out-of-Core Block Reading Performance
  5.5 Intrablock Volume Sampling
  5.6 Adaptive Sampling
  5.7 TF-Based Data Reduction
  5.8 View-Based Data Compression
  5.9 Rendering Performance
6 Discussion
  6.1 Conclusion
  6.2 Future Work
References
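To make the level-of-detail idea summarized in the abstract above concrete, here is a minimal sketch of one plausible selection strategy: greedily refine the blocks whose finer data buys the largest error reduction per byte, until a VRAM budget is hit. The function names, the error model, and the greedy strategy are illustrative assumptions, not the scheme the thesis actually implements (Chapter 3 describes that).

```python
# Hypothetical sketch (not the thesis's actual scheme): choose a resolution
# level per block under a VRAM budget by greedily refining the block whose
# next level gives the largest error reduction per extra byte.
# 'blocks' maps a block id to its LOD list, ordered coarsest to finest,
# where each entry is (size_in_bytes, estimated_error).

def select_lods(blocks, vram_budget):
    chosen = {b: 0 for b in blocks}                 # start at coarsest level
    used = sum(lods[0][0] for lods in blocks.values())
    while True:
        best, best_gain, best_extra = None, 0.0, 0
        for b, lods in blocks.items():
            lvl = chosen[b]
            if lvl + 1 < len(lods):
                extra = lods[lvl + 1][0] - lods[lvl][0]
                gain = (lods[lvl][1] - lods[lvl + 1][1]) / max(extra, 1)
                if gain > best_gain and used + extra <= vram_budget:
                    best, best_gain, best_extra = b, gain, extra
        if best is None:                            # no affordable refinement left
            return chosen
        chosen[best] += 1
        used += best_extra

blocks = {
    "A": [(1024, 0.9), (8192, 0.2)],   # large payoff from refinement
    "B": [(1024, 0.1), (8192, 0.05)],  # already accurate at low resolution
}
print(select_lods(blocks, vram_budget=10240))  # {'A': 1, 'B': 0}
```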
Abbreviations

LOD: Level of Detail
FPS: Frames per Second
GPU: Graphics Processing Unit
GPGPU: General-Purpose Computation on Graphics Processing Units
Streaming: Transferring data between different layers of memory
TF: Transfer Function
DVR: Direct Volume Rendering
VRAM: Video Random-Access Memory
NN Sampling: Nearest Block Sampling
II Sampling: Intrablock Interpolation Sampling

Chapter 1: Introduction

The following chapter gives a brief introduction to some of the challenges currently facing the field of volumetric visualization, and presents the main motivation and aim of this thesis.

1.1 Motivation

Scientific visualization is a field of science that aims to reveal and find correlations within spatial and temporal structures of data. Volume rendering is a specific branch of visualization used to obtain images from three-dimensional data sets. Volumetric data holds information about the internal structure of an object, and special visualization techniques are needed to extract different abstractions within the data. Volume visualization has a wide range of applications, in particular within the field of medical visualization. The data sets obtained from acquisition devices have in recent years rapidly increased in size, and hardware limitations in terms of memory capacity and transfer rates force the use of data reduction schemes and out-of-core storage.

Direct volume rendering (DVR) is a volumetric visualization technique used to extract images directly from a volumetric data set. The original DVR pipeline gives the user the ability to interact with the data in order to reveal interior structures within the volume. To provide a user with a full understanding of the underlying data, it is important that the system responds properly to user interaction. Real-time performance is hard to achieve when rendering high-quality images. The DVR pipeline often uses hardware acceleration units such as a graphics processing unit (GPU) to accelerate rendering. Data must then be stored in the graphics card's VRAM, which is a very limited resource.

1.2 Aim

The aim of this thesis is to extend the basic DVR pipeline to render large volumetric data sets in real time. The proposed pipeline shall benefit from out-of-core storage and a multi-resolution volume representation in which subparts of arbitrary resolution can be loaded in-core on request at runtime. The system will exploit parallel data processing on the GPU as well as multi-threading on the CPU to maximize performance. The system shall respond properly to user interaction and use such information to minimize the amount of data processed through the pipeline while maximizing rendering quality. The system shall implement multi-resolution rendering techniques to increase render quality for data represented at arbitrary resolutions, and use acceleration techniques such as adaptive sampling to increase rendering performance.

Chapter 2: Concepts of Direct Volume Rendering

This chapter covers the basic concepts of DVR. DVR is the process of extracting two-dimensional images directly from a three-dimensional scalar field of data. In contrast to DVR, an indirect approach is to extract a polygonal mesh of an iso-surface obtained from the volume. The DVR pipeline consists of several important steps, all of which are covered in this chapter. The first part explains the essence of volumetric data and how it can be represented in computer memory.
The next part discusses the vital aspects of volumetric classification and how it can be used to reveal structures embedded within the volume.
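As a reference for the later sections (the thesis derives this in Section 2.3.1), the standard emission-absorption volume rendering integral along a viewing ray can be written as follows; the notation here is the conventional one and is assumed, not quoted from the thesis:

```latex
% Emission-absorption volume rendering integral along a view ray:
% I_0 is the background intensity entering the volume at s_0, the eye
% sits at s = D, q(s) is emission and \kappa(t) absorption.
I(D) = I_0 \, e^{-\int_{s_0}^{D} \kappa(t)\,\mathrm{d}t}
     + \int_{s_0}^{D} q(s)\, e^{-\int_{s}^{D} \kappa(t)\,\mathrm{d}t}\,\mathrm{d}s
```

GPU ray casters approximate this integral by compositing discrete samples front to back along each ray, which is the form used in the ray-casting sections later on.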
Recommended publications
  • Ways of Seeing Data: a Survey of Fields of Visualization
    Ways of seeing data: a survey of fields of visualization. Gordon Kindlmann, [email protected]. Part of the talk series "Show and Tell: Visualizing the Life of the Mind", Nov 19, 2012. http://rcc.uchicago.edu/news/show_and_tell_abstracts.html
    The slides contrast 2012 presidential election maps (Republican/Democrat), including a campaign map morphed by money (http://www.npr.org/blogs/itsallpolitics/2012/11/01/163632378/a-campaign-map-morphed-by-money) and a population-weighted political map of America (http://gizmodo.com/5960290/this-is-the-real-political-map-of-america-hint-we-are-not-that-divided), followed by examples of clarifying distortions: the London tube map from 1908 and Harry Beck's 1933 schematic (http://www.20thcenturylondon.org.uk/beck-henry-harry, http://briankerr.wordpress.com/2009/06/08/connections/, http://en.wikipedia.org/wiki/Harry_Beck), and Joachim Böttger, Ulrik Brandes, Oliver Deussen, Hendrik Ziezold, "Map Warping for the Annotation of Metro Maps", IEEE Computer Graphics and Applications, 28(5):56-65, 2008. "A single map is but one of an indefinitely large..."
  • Scientific Visualization
    Report from Dagstuhl Seminar 11231: Scientific Visualization. Edited by Min Chen (University of Oxford, GB, [email protected]), Hans Hagen (TU Kaiserslautern, DE, [email protected]), Charles D. Hansen (University of Utah, US, [email protected]), and Arie Kaufman (SUNY Stony Brook, US, [email protected]). Abstract: This report documents the program and the outcomes of Dagstuhl Seminar 11231 "Scientific Visualization". Seminar 05.-10. June, 2011, www.dagstuhl.de/11231. 1998 ACM Subject Classification: I.3 Computer Graphics, I.4 Image Processing and Computer Vision, J.2 Physical Sciences and Engineering, J.3 Life and Medical Sciences. Keywords and phrases: Scientific Visualization, Biomedical Visualization, Integrated Multifield Visualization, Uncertainty Visualization, Scalable Visualization. Digital Object Identifier: 10.4230/DagRep.1.6.1
    1 Executive Summary (Min Chen, Hans Hagen, Charles D. Hansen, Arie Kaufman; License: Creative Commons BY-NC-ND 3.0 Unported, © Min Chen, Hans Hagen, Charles D. Hansen, and Arie Kaufman). Scientific Visualization (SV) is the transformation of abstract data, derived from observation or simulation, into readily comprehensible images, and has proven to play an indispensable part in the scientific discovery process in many fields of contemporary science. This seminar focused on the general field where applications influence basic research questions on one hand while basic research drives applications on the other. Reflecting the heterogeneous structure of Scientific Visualization and the currently unsolved problems in the field, this seminar dealt with key research problems and their solutions in the following subfields of scientific visualization: Biomedical Visualization: Biomedical visualization and imaging refers to the mechanisms and techniques utilized to create and display images of the human body, organs or their components for clinical or research purposes.
  • Multi-Feature Based Transfer Function Design for 3D Medical Image Visualization
    2010 3rd International Conference on Biomedical Engineering and Informatics (BMEI 2010). Multi-feature Based Transfer Function Design for 3D Medical Image Visualization. Yi Peng, Li Chen, Institute of CG & CAD, School of Software, Tsinghua University, Beijing, China.
    Abstract: Visualization of three-dimensional volumetric medical data has been widely used. Even so, it still faces a lot of challenges, such as helping users find the structures of their interest and illustrating features which are significant for the diagnosis of disease. In our paper, a multi-feature based transfer function is provided to improve the quality of visualization. We compute a multi-feature descriptor for both two-phase clustering and transfer function design. Moreover, we test the transfer function on several medical datasets to show the efficiency and practicability of our method.
    Index terms: multi-feature, statistical analysis, pre-segmentation, transfer function, volume rendering, visualization.
    I. INTRODUCTION (excerpt): "... in multi-dimensional transfer function design. Each of them is related to a particular local feature. Chen, et al. introduce a volume illustration technique which is shape-aware, as it depends not only on the rendering styles, but also the shape styles [6]. Chen's work improves the expressiveness of volume illustration by focusing more on shape, but it needs the help of other given illustrations. In addition, [7] presents two visualization techniques for curve-centric volume reformation to create compelling comparative visualizations, and [8] introduces a texture-based volume rendering approach which achieves the image quality of the best post-shading approaches with far fewer slices. Their approaches use the latest techniques to accelerate volume rendering while keeping the general visualization methods."
  • Surfacing Visualization Mirages
    Surfacing Visualization Mirages. Andrew McNutt (University of Chicago, Chicago, IL, [email protected]), Gordon Kindlmann (University of Chicago, Chicago, IL, [email protected]), Michael Correll (Tableau Research, Seattle, WA, [email protected]).
    ABSTRACT: Dirty data and deceptive design practices can undermine, invert, or invalidate the purported messages of charts and graphs. These failures can arise silently: a conclusion derived from a particular visualization may look plausible unless the analyst looks closer and discovers an issue with the backing data, visual specification, or their own assumptions. We term such silent but significant failures visualization mirages. We describe a conceptual model of mirages and show how they can be generated at every stage of the visual analytics process. We adapt a methodology from software testing, metamorphic testing, as a way of automatically surfacing potential mirages at the visual encoding stage of analysis through modifications to the underlying data and chart specification.
    In this paper, we present a conceptual model of these visualization mirages and show how users' choices can cause errors in all stages of the visual analytics (VA) process that can lead to untrue or unwarranted conclusions from data. Using our model we observe a gap in automatic techniques for validating visualizations, specifically in the relationship between data and chart specification. We address this gap by developing a theory of metamorphic testing for visualization which synthesizes prior work on metamorphic testing [92] and algebraic visualization errors [54]. Through this combination we seek to alert viewers to situations where minor changes to the visualization design or backing data have large (but illusory) effects on the resulting visualization, or where potentially important...
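The metamorphic-testing idea in this excerpt can be illustrated with a toy example: a metamorphic relation states a data change that should not change the rendered chart, and a violation flags a potential mirage. The encoding function and relation below are hypothetical stand-ins, not the authors' system:

```python
import random

# Toy "visual encoding": aggregate a value per category, as a bar chart would.
def encode_bar_chart(rows):
    bars = {}
    for category, value in rows:
        bars[category] = bars.get(category, 0) + value
    return bars

# Metamorphic relation: shuffling the input rows must not change the bars.
# A violation would surface a potential mirage at the encoding stage.
def shuffle_invariant(rows, trials=100):
    expected = encode_bar_chart(rows)
    for _ in range(trials):
        shuffled = rows[:]
        random.shuffle(shuffled)
        if encode_bar_chart(shuffled) != expected:
            return False
    return True

data = [("a", 3), ("b", 5), ("a", 2), ("c", 7)]
print(shuffle_invariant(data))  # True: summing per category is order-invariant
```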
  • New Faculty Members and Postdoctoral Fellows Spill the Beans
    New faculty members and postdoctoral fellows spill the beans. Alark Joshi (Yale University), Jeffrey Heer (Stanford University), Gordon L. Kindlmann (University of Chicago), Miriah Meyer (Harvard University).
    ABSTRACT: Applying for an academic position can be a daunting task. In this panel, we talk with a few new faculty members in the field of visualization and find out more about the process. We share some insights into how one goes about finding an academic position, what kind of material is required for the application packet, how the material is prepared, how one applies for a faculty position, what happens on the day of the job interview, what new faculty members wish they had known before they applied, and much more. Topics include:
    • Strategically scheduling your interviews
    • Cultivating internal champions
    • Planning and practicing one's job talk
    • Remembering to have fun with the interview process
    • Framing negotiation as enabling research success
    • Deciding what's right for you - an exercise in multi-objective optimization (?)
    • And a good problem to have: the pain of rejecting offers
    With many universities facing budget cuts in this economy, the likelihood of new faculty positions opening up may be slim. We discuss the wonderful alternative of taking up a postdoctoral position. Postdoctoral fellows on the panel will share their experiences and discuss what the position entails.
    Biosketch: Jeffrey Heer is a newly minted Assistant Professor of Computer Science at Stanford University, where he works on human-computer interaction, visualization, and social computing. Heer's research has produced novel visualization techniques for explor...
    1 INTRODUCTION: For graduate students and postdoctoral fellows, academic careers always seem to be a challenging proposition.
  • Visualization
    Tamara Munzner. 27 Visualization. A major application area of computer graphics is visualization, where computer-generated images are used to help people understand both spatial and non-spatial data. Visualization is used when the goal is to augment human capabilities in situations where the problem is not sufficiently well defined for a computer to handle algorithmically. If a totally automatic solution can completely replace human judgement, then visualization is not typically required. Visualization can be used to generate new hypotheses when exploring a completely unfamiliar dataset, to confirm existing hypotheses in a partially understood dataset, or to present information about a known dataset to another audience. Visualization allows people to offload cognition to the perceptual system, using carefully designed images as a form of external memory. The human visual system is a very high-bandwidth channel to the brain, with a significant amount of processing occurring in parallel and at the pre-conscious level. We can thus use external images as a substitute for keeping track of things inside our own heads. For an example, let us consider the task of understanding the relationships between a subset of the topics in the splendid book Gödel, Escher, Bach: The Eternal Golden Braid (Hofstadter, 1979); see Figure 27.1. When we see the dataset as a text list, at the low level we must read words and compare them to memories of previously read words. It is hard to keep track of just these dozen topics using cognition and memory alone, let alone the hundreds of topics in the full book.
  • A Preliminary Model for the Design of Music Visualizations
    A Preliminary Model for the Design of Music Visualizations (this paper appears as a poster at PacificVis 2021). Swaroop Panda, Shatarupa Thakurta Roy, Indian Institute of Technology Kanpur.
    Music visualization is basically the transformation of data from the aural to the visual space. There is a variety of music visualizations, across applications, present on the web. Models of visualization include conceptual frameworks helpful for designing, understanding and making sense of visualizations. In this paper, we propose a preliminary model for music visualization. We build the model by using two conceptual pivots - Visualization Stimulus and Data Property. To demonstrate the utility of the model we deconstruct and design visualizations with toy examples using the model, and finally conclude by proposing further applications of, and future work on, our proposed model.
    Introduction: Music is primarily aural in nature. Music is recorded, processed and distributed in a variety of data formats, such as digital audio, MIDI (Musical Instrument Digital Interface), and other symbolic structures like sheet notations. These data formats (or representations) contain rich information about the music file and are thus available for the purpose of analysis. Visualization of music refers to the transformation of the data from the aural to the visual space: music is made visible. There are plenty of existing music visualizations across the web, found in a variety of formats. Their wide range of applications makes them complex and hard to decipher. These visualizations range from ordinary spectrograms (used for sophisticated machine learning tasks) to fancy graphics in media players and streaming services accompanying the music (for an augmented and pleasing user experience) to fanciful tree maps representing a playlist (used in exploratory music databases).
  • Perspectives on Teaching Data Visualization
    Perspectives on Teaching Data Visualization. Jason Dykes (City University London), Daniel F. Keefe (University of Minnesota), Gordon Kindlmann (University of Chicago), Tamara Munzner (University of British Columbia), Alark Joshi (Yale University).
    ABSTRACT: We propose to present our perspectives on teaching data visualization to a variety of audiences. The panelists will address issues related to increasing student engagement with class material, ways of dealing with a heavy reading load, tailoring course material based on the audience, and incorporating an interdisciplinary approach in the course. Developing and teaching truly interdisciplinary data visualization courses can be challenging. Panelists will present their experiences regarding courses that were successful and address finer issues related to designing assignments for an interdisciplinary class, textbooks, and collaboration-based final projects.
    "... in my teaching, even in courses that draw entirely computer science students. My visualization course at the University of Minnesota places an emphasis on interacting with data and draws heavily upon research in virtual reality and human-computer interaction. One strategy that I have used in this course to teach about visual design and critique is to lecture and develop assignments around sketching (prototyping) user experiences, as inspired by Bill Buxton's recent book, which includes a number of smart case studies and rapid design activities, from paper to video prototyping. My perspective is that visualization courses should reflect the interdisciplinary (often team) approach to discovery that is part of visualization. I try to do this by highlighting the connections between visualization and art/design, but to be successful in visualization ..."
    1 INTRODUCTION: Teaching a course in data visualization can be daunting.
  • Charles D. Hansen January 2, 2021
    Vita: Charles D. Hansen, September 15, 2021. Current address: School of Computing, 50 S. Central Campus Drive, 3190 MEB, University of Utah, Salt Lake City, Utah 84112. Phone: (801) 581-3154 (work). Email: [email protected]. Webpage: www.cs.utah.edu/~hansen
    Professional Employment:
    • 3/1/2019 to present: University of Utah, Distinguished Professor of Computing
    • 7/1/2005 to 3/1/2019: University of Utah, Professor (School of Computing)
    • 9/1/2003 to 6/20/2018: University of Utah, Associate Director, Scientific Computing and Imaging Institute
    • 5/1/2012 to 6/30/2012: University of Stuttgart, SimTech Senior Fellow
    • 11/1/2011 to 4/30/2012: University Joseph Fourier, Visiting Professor
    • 7/1/2008 to 6/30/2010: University of Utah, Associate Director, School of Computing
    • 8/15/2004 to 7/30/2005: INRIA Rhône-Alpes, Visiting Scientist
    • 9/1/1998 to 6/30/2005: University of Utah, Associate Professor (School of Computing)
    • 7/1/2001 to 6/03/2003: University of Utah, Associate Director, School of Computing
    • 2/1/1997 to 8/31/1998: University of Utah, Research Associate Professor (Dept. of Computer Science)
    • 9/18/1989 to 1/31/1997: Los Alamos National Lab, Technical Staff Member, Advanced Computing Laboratory
    • 9/1/1994 to 1/31/1997: Univ. of New Mexico, Visiting Research Assistant Professor
    • 9/1/1990 to 1/31/1997: New Mexico Tech, Adjunct Professor
    • 8/1/1988 to 7/31/1989: University of Utah, Visiting Assistant Professor
    • 7/5/1987 to 7/31/1988: INRIA, Postdoctoral Research Scientist (Vision and Robotics)
    • 1/1/1987 to 7/5/1987: University of Utah, Research Assistant (Vision Group)
    • 9/15/1986 to 12/31/1986: University of Utah, Teaching Assistant (Computer Science Dept)
    • 9/1/1986 to 9/14/1986: University of Utah, Research Assistant (Vision Group)
    • 9/1/1983 to 8/31/1986: ARO Research Fellow
    • 6/16/1983 to 8/31/1983: University of Utah, Research Assistant (Very Large Text Retrieval Project)
    • 9/15/1982 to 6/15/1983: University of Utah, Teaching Assistant (Computer Science Dept)
    • 8/1/1979 to 8/31/1982: Fred P.
  • Semi-Automatic Generation of Transfer Functions for Direct Volume Rendering
    Semi-Automatic Generation of Transfer Functions for Direct Volume Rendering. Gordon Kindlmann and James W. Durkin, Program of Computer Graphics, Cornell University.
    Abstract: Although direct volume rendering is a powerful tool for visualizing complex structures within volume data, the size and complexity of the parameter space controlling the rendering process makes generating an informative rendering challenging. In particular, the specification of the transfer function - the mapping from data values to renderable optical properties - is frequently a time-consuming and unintuitive task. Ideally, the data being visualized should itself suggest an appropriate transfer function that brings out the features of interest without obscuring them with elements of little importance. We demonstrate that this is possible for a large class of scalar volume data, namely that where the regions of interest are the boundaries between different materials.
    Finding a good transfer function is critical to producing an informative rendering, but even if the only variable which needs to be set is opacity, it is a difficult task. Looking through slices of the volume dataset allows one to spatially locate features of interest, and a means of reading off data values from a user-specified point on the slice can help in setting an opacity function to highlight those features, but there is no way to know how representative of the whole feature, in three dimensions, these individually sampled values are. User interfaces for opacity function specification typically allow the user to alter the opacity function by directly editing its graph, usually as a series of linear ramps joining adjustable control points. This interface doesn't itself guide the user towards a useful setting, as the movement of the control points is unconstrained.
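The opacity-function interface described above (linear ramps joining adjustable control points) amounts to piecewise-linear interpolation; a minimal sketch, with made-up control points:

```python
import numpy as np

# Opacity transfer function as linear ramps joining control points, as in
# the interfaces described above. Control points are (data value, opacity)
# pairs; queries outside the range clamp to the end points.
control_points = [(0, 0.0), (60, 0.0), (90, 0.8), (128, 0.1), (255, 0.0)]

def opacity(values):
    xs, ys = zip(*control_points)
    return np.interp(values, xs, ys)

print(opacity([30, 75, 90, 200]))  # [0.0, 0.4, 0.8, ~0.043]
```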
  • 9 Multi-Dimensional Transfer Functions for Volume Rendering
    9 Multi-Dimensional Transfer Functions for Volume Rendering. Joe Kniss, Gordon Kindlmann, and Charles Hansen, Scientific Computing and Imaging Institute, School of Computing, University of Utah. (From Johnson/Hansen: The Visualization Handbook, Chapter 9.)
    9.1 Introduction: Direct volume-rendering has proven to be an effective and flexible visualization method for 3D scalar fields. Transfer functions are fundamental to direct volume-rendering because their role is essentially to make the data visible: by assigning optical properties like color and opacity to the voxel data, the volume can be rendered with traditional computer graphics methods. Good transfer functions reveal the important structures in the data without obscuring them with unimportant regions. To date, transfer functions have generally been limited to 1D domains, meaning that the 1D space of scalar data value has been the only variable to which opacity and color are assigned. "... single boundary, such as thickness. When working with multivariate data, a similar difficulty arises with features that can be identified only by their unique combination of multiple data values. A 1D transfer function is simply not capable of capturing this relationship. Unfortunately, using multidimensional transfer functions in volume rendering is complicated. Even when the transfer function is only 1D, finding an appropriate transfer function is generally accomplished by trial and error. This is one of the main challenges in making direct volume-rendering an effective visualization tool. Adding dimensions to the transfer-function domain only compounds the problem. While this is an ongoing research area, many of the proposed ..."
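A toy sketch of what "adding a dimension" to the transfer-function domain means in practice: the lookup is indexed by two attributes (here data value and gradient magnitude, a common pairing) instead of one. The table contents and names are made up for illustration:

```python
import numpy as np

# Toy 2D transfer function: rows indexed by data value, columns by gradient
# magnitude, each cell holding RGBA. A 1D TF would collapse the columns,
# losing the ability to separate boundaries from homogeneous interiors.
BINS_V, BINS_G = 256, 64
tf2d = np.zeros((BINS_V, BINS_G, 4))
tf2d[100:140, 32:, :] = (1.0, 0.4, 0.2, 0.6)  # opaque only at high gradients

def classify(value, grad_mag, max_grad=1.0):
    i = np.clip(int(value), 0, BINS_V - 1)
    j = np.clip(int(grad_mag / max_grad * (BINS_G - 1)), 0, BINS_G - 1)
    return tf2d[i, j]  # RGBA for this (value, gradient magnitude) pair

print(classify(120, 0.9))   # boundary voxel: visible
print(classify(120, 0.05))  # interior voxel of the same value: transparent
```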
  • Semi-Automatic Generation of Transfer Functions for Direct
    SEMI-AUTOMATIC GENERATION OF TRANSFER FUNCTIONS FOR DIRECT VOLUME RENDERING. A Thesis Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the Degree of Master of Science, by Gordon Lothar Kindlmann, January 1999. © 1999 Gordon Lothar Kindlmann. ALL RIGHTS RESERVED.
    ABSTRACT: Finding appropriate transfer functions for direct volume rendering is a difficult problem because of the large amount of user experimentation typically involved. Ideally, the dataset being rendered should itself be able to suggest a transfer function which makes the important structures visible. We demonstrate that this is possible for a large class of scalar volume data, namely that where the region of interest is the boundary between different materials. A transfer function which makes boundaries readily visible can be generated from the relationship between three quantities: the data value and its first and second directional derivatives along the gradient direction. A data structure we term the histogram volume captures the relationship between these quantities throughout the volume in a position-independent, computationally efficient fashion. We describe the theoretical importance of the quantities measured by the histogram volume, the implementation issues in its calculation, and a method for semi-automatic transfer function generation through its analysis. The techniques presented here make direct volume rendering easier to use, not only because there are far fewer variables for the user to adjust to find an informative rendering, but because using them is more intuitive than current interfaces for transfer function specification. Furthermore, the results are derived solely from the original dataset and its inherent patterns of values, without the introduction of any artificial structures or limitations.
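The three quantities named in this abstract have standard closed forms, which may help make the excerpt concrete; the notation below is the conventional one (an assumption, not quoted from the thesis):

```latex
% Data value f and its first and second directional derivatives along the
% gradient direction \hat{g} = \nabla f / \|\nabla f\|, with Hessian H:
f(\mathbf{x}), \qquad
f' = \nabla f \cdot \hat{g} = \|\nabla f\|, \qquad
f'' = \hat{g}^{\mathsf{T}} \mathbf{H}\, \hat{g}
    = \frac{\nabla f^{\mathsf{T}} \mathbf{H}\, \nabla f}{\|\nabla f\|^{2}}
```

At an ideal material boundary, f' peaks while f'' crosses zero, which is what makes these axes of the histogram volume useful for opacity assignment.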