Volume Rendering
Recommended publications
1 The Duality of Peer Production: Infrastructure for the Digital Commons, Free Labor for Free-Riding Firms
Mathieu O'Neil, Sophie Toupin, and Christian Pentzold

1 Introduction
There never was a "tragedy of the commons": Garrett Hardin's overgrazing farmers were victims of a tragedy of self-management, as they failed to collectively regulate, as equals, their common pasture. When Elinor Ostrom was awarded the Nobel Prize in Economics in 2009, the immemorial notion that there are only two types of goods in the world – private and public, coordinated by markets or the state – was finally put to rest. In the most general terms, peer producers are people who create and manage common-pool resources together. It sometimes seems as if "peer production" and "digital commons" can be used interchangeably. Digital commons are non-rivalrous (they can be reproduced at little or no cost) and non-excludable (no one can prevent others from using them, through property rights for example). So, practically speaking, proprietary objects could be produced by equal "peers"; however, we argue that peer production has a normative dimension, so that what chiefly characterizes this mode of production is that "the output is orientated towards the further expansion of the commons; while the commons, recursively, is the chief resource in this mode of production" (Söderberg & O'Neil, 2014, p. 2). Though there are many historical antecedents, the term "peer production," as an object of public and scientific interest, is historically situated in the early 2000s. The meanings associated with a term that is deeply connected to the Internet as it was 20 years ago are bound to change.
Stardust: Accessible and Transparent GPU Support for Information Visualization Rendering
Eurographics Conference on Visualization (EuroVis) 2017, Volume 36 (2017), Number 3. J. Heer, T. Ropinski, and J. van Wijk (Guest Editors)
Donghao Ren (University of California, Santa Barbara), Bongshin Lee (Microsoft Research, Redmond), and Tobias Höllerer (University of California, Santa Barbara)

Abstract: Web-based visualization libraries are in wide use, but performance bottlenecks occur when rendering, and especially animating, a large number of graphical marks. While GPU-based rendering can drastically improve performance, that paradigm has a steep learning curve, usually requiring expertise in the computer graphics pipeline and shader programming. In addition, the recent growth of virtual and augmented reality poses a challenge for supporting multiple display environments beyond regular canvases, such as a Head Mounted Display (HMD) and Cave Automatic Virtual Environment (CAVE). In this paper, we introduce a new web-based visualization library called Stardust, which provides a familiar API while leveraging the GPU's processing power. Stardust also enables developers to create both 2D and 3D visualizations for diverse display environments using a uniform API. To demonstrate Stardust's expressiveness and portability, we present five example visualizations and a coding playground for four display environments. We also evaluate its performance by comparing it against the standard HTML5 Canvas, D3, and Vega.
Volume Rendering
1.1. Introduction
Rapid advances in hardware have been transforming revolutionary approaches in computer graphics into reality. One typical example is raster graphics, which took hold in the seventies, when hardware innovations enabled the transition from vector graphics to raster graphics. Another example with similar potential is currently taking shape in the field of volume graphics. This trend is rooted in the extensive research and development effort in scientific visualization in general and in volume visualization in particular. Visualization is the use of computer-supported, interactive, visual representations of data to amplify cognition. Scientific visualization is the visualization of physically based data. Volume visualization is a method of extracting meaningful information from volumetric datasets through the use of interactive graphics and imaging; it is concerned with the representation, manipulation, and rendering of volumetric datasets. Its objective is to provide mechanisms for peering inside volumetric datasets and to enhance visual understanding. Traditional 3D graphics is based on surface representation. The most common form is polygon-based surfaces, for which affordable special-purpose rendering hardware has been developed in recent years. Volume graphics has the potential to greatly advance the field of 3D graphics by offering a comprehensive alternative to conventional surface representation methods. The objective of this thesis is to examine the existing methods for volume visualization and to find a way of efficiently rendering scientific data with commercially available hardware, like PCs, without requiring dedicated systems.

1.2. Volume Rendering
Our display screens are composed of a two-dimensional array of pixels, each representing a unit area.
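To make the direct volume rendering idea concrete, here is a minimal ray-marching sketch in Python with front-to-back emission-absorption compositing. It is an illustration, not code from the thesis; the synthetic sphere volume, the transfer function, and the orthographic rays along the z-axis are assumptions chosen for brevity.

```python
import numpy as np

# Synthetic 64^3 scalar volume: a fuzzy sphere centered in the unit cube.
N = 64
coords = np.indices((N, N, N)) / (N - 1) - 0.5        # each axis in [-0.5, 0.5]
radius = np.sqrt((coords ** 2).sum(axis=0))
volume = np.clip(1.0 - radius / 0.4, 0.0, 1.0)        # 1 at the center, 0 outside

def transfer_function(s):
    """Map scalar samples to an RGB emission color and an opacity per sample."""
    alpha = np.clip((s - 0.3) * 2.0, 0.0, 1.0) * 0.05  # mostly transparent material
    color = np.stack([s, 0.5 * s, 1.0 - s], axis=-1)   # simple warm-to-cool ramp
    return color, alpha

def render_orthographic(vol):
    """Cast one ray per pixel along +z and composite samples front to back."""
    H, W, D = vol.shape
    image = np.zeros((H, W, 3))
    transmittance = np.ones((H, W))                    # accumulated transparency
    for z in range(D):                                 # front-to-back traversal
        color, alpha = transfer_function(vol[:, :, z])
        image += (transmittance * alpha)[..., None] * color
        transmittance *= 1.0 - alpha
    return image

image = render_orthographic(volume)
print(image.shape, float(image.max()))
```

A real renderer would march arbitrary rays through an interpolated field and terminate each ray early once its transmittance becomes negligible.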
Efficiently Using Graphics Hardware in Volume Rendering Applications
Rüdiger Westermann and Thomas Ertl, Computer Graphics Group, Universität Erlangen-Nürnberg, Germany

Abstract: OpenGL and its extensions provide access to advanced per-pixel operations available in the rasterization stage and in the frame buffer hardware of modern graphics workstations. With these mechanisms, completely new rendering algorithms can be designed and implemented in a very particular way. In this paper we extend the idea of extensively using graphics hardware for the rendering of volumetric data sets in various ways. First, we introduce the concept of clipping geometries by means of stencil buffer operations, and we exploit pixel textures for the mapping of volume data to spherical domains. We show ways to use 3D textures for the rendering of lighted and shaded iso-surfaces in real-time without extracting any polygonal representation.

From the introduction: In this paper we are dealing with the efficient generation of a visual representation of the information present in volumetric data sets. For scalar-valued volume data two standard techniques, the rendering of iso-surfaces and direct volume rendering, have been developed to a high degree of sophistication. However, due to the huge number of volume cells which have to be processed and to the variety of different cell types, only a few approaches allow parameter modifications and navigation at interactive rates for realistically sized data sets. To overcome these limitations we provide a basis for hardware-accelerated interactive visualization of both iso-surfaces and direct volume rendering on arbitrary topologies. Direct volume rendering tries to convey a visual impression of the complete 3D data set by taking into account the emission and [...]
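The 3D-texture technique mentioned above amounts to resampling the volume on slices and alpha-blending them, with the trilinear interpolation done by the texture hardware. The sketch below is a CPU-side analogue in Python, not the paper's OpenGL implementation; the slice count, the toy opacity mapping, and the axis-aligned (rather than view-aligned) slices are simplifying assumptions.

```python
import numpy as np

def trilinear_sample(vol, pts):
    """Trilinearly interpolate `vol` at continuous points `pts` of shape (M, 3)."""
    pts = np.clip(pts, 0.0, np.array(vol.shape, dtype=float) - 1.001)
    i0 = np.floor(pts).astype(int)
    f = pts - i0                                        # fractional offsets in [0, 1)
    out = np.zeros(len(pts))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[:, 0] if dx else 1 - f[:, 0]) *
                     (f[:, 1] if dy else 1 - f[:, 1]) *
                     (f[:, 2] if dz else 1 - f[:, 2]))
                out += w * vol[i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
    return out

def slice_composite(vol, n_slices=64):
    """Resample the volume on axis-aligned slices and alpha-blend them back to front."""
    H, W, D = vol.shape
    ys, xs = np.mgrid[0:H, 0:W]
    image = np.zeros((H, W))
    for k in reversed(range(n_slices)):                 # back-to-front "over" blending
        z = k * (D - 1) / (n_slices - 1)
        pts = np.stack([ys.ravel(), xs.ravel(), np.full(ys.size, z)], axis=1).astype(float)
        sample = trilinear_sample(vol, pts).reshape(H, W)
        alpha = 0.05 * sample                           # toy opacity transfer function
        image = alpha * sample + (1.0 - alpha) * image  # emission weighted by opacity
    return image

rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))
print(slice_composite(vol).shape)
```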
Inviwo — a Visualization System with Usage Abstraction Levels
IEEE Transactions on Visualization and Computer Graphics, Vol. X, No. Y, May 2019
Daniel Jönsson, Peter Steneteg, Erik Sundén, Rickard Englund, Sathish Kottravel, Martin Falk (Member, IEEE), Anders Ynnerman, Ingrid Hotz, and Timo Ropinski (Member, IEEE)

Abstract: The complexity of today's visualization applications demands specific visualization systems tailored for the development of these applications. Frequently, such systems utilize levels of abstraction to improve the application development process, for instance by providing a data flow network editor. Unfortunately, these abstractions result in several issues, which need to be circumvented through an abstraction-centered system design. Often, a high level of abstraction hides low-level details, which makes it difficult to directly access the underlying computing platform, which would be important to achieve optimal performance. Therefore, we propose a layer structure developed for modern and sustainable visualization systems allowing developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with various levels of experience. We formulate the requirements for such a system, derive the desired architecture, and present how the concepts have been exemplarily realized within the Inviwo visualization system. Furthermore, we address several specific challenges that arise during the realization of such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, as well as layer-independent development by supporting cross-layer documentation and debugging capabilities.

Index Terms: Visualization systems, data visualization, visual analytics, data analysis, computer graphics, image processing.

1 Introduction: The field of visualization is maturing, and a shift can be [...] employing different layers of abstraction.
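As a rough illustration of the data-flow abstraction such network editors expose, the toy example below wires "processors" together through their inputs and evaluates them in topological order. This is a hedged sketch of the general concept, not Inviwo's actual C++ API; the processor names and the tiny functions they run are invented for the example.

```python
from collections import deque

class Processor:
    """A toy data-flow node: a name, a compute function, and upstream inputs."""
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)
        self.output = None

def evaluate(network):
    """Evaluate all processors in topological order (Kahn's algorithm)."""
    indegree = {p: len(p.inputs) for p in network}
    downstream = {p: [] for p in network}
    for p in network:
        for src in p.inputs:
            downstream[src].append(p)
    ready = deque(p for p in network if indegree[p] == 0)
    while ready:
        p = ready.popleft()
        p.output = p.fn(*[src.output for src in p.inputs])
        for q in downstream[p]:
            indegree[q] -= 1
            if indegree[q] == 0:
                ready.append(q)

# Example network: data source -> neighbor-averaging filter -> iso-value counter.
source = Processor("VolumeSource", lambda: [1.0, 4.0, 2.0, 8.0, 5.0])
smooth = Processor("Smooth",
                   lambda v: [(a + b) / 2 for a, b in zip(v, v[1:] + v[-1:])],
                   inputs=[source])
count = Processor("IsoCount", lambda v: sum(x > 3.0 for x in v), inputs=[smooth])

evaluate([source, smooth, count])
print(count.output)   # number of smoothed samples above the iso-value 3.0
```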
A Process for Digitizing and Simulating Biologically Realistic Oligocellular Networks Demonstrated for the Neuro-Glio-Vascular Ensemble
Frontiers in Neuroscience, Methods article. Published: 25 September 2018. doi: 10.3389/fnins.2018.00664
Jay S. Coggan (1), Corrado Calì (2), Daniel Keller (1), Marco Agus (3,4), Daniya Boges (2), Marwan Abdellah (1), Kalpana Kare (2), Heikki Lehväslaiho (2,5), Stefan Eilemann (1), Renaud Blaise Jolivet (6,7), Markus Hadwiger (3), Henry Markram (1), Felix Schürmann (1), and Pierre J. Magistretti (2)
(1) Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; (2) Biological and Environmental Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia; (3) Visual Computing Center, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia; (4) CRS4, Center of Research and Advanced Studies in Sardinia, Visual Computing, Pula, Italy; (5) CSC – IT Center for Science, Espoo, Finland; (6) Département de Physique Nucléaire et Corpusculaire, University of Geneva, Geneva, Switzerland; (7) The European Organization for Nuclear Research, Geneva, Switzerland
Edited by: Yu-Guo Yu, Fudan University, China. Reviewed by: Clare Howarth, University of Sheffield, United Kingdom; Ying Wu, Xi'an Jiaotong University, China.

One will not understand the brain without an integrated exploration of structure and function, these attributes being two sides of the same coin: together they form the currency of biological computation. Accordingly, biologically realistic models require the re-creation of the architecture of the cellular components in which biochemical reactions are contained.
Image Processing on Optimal Volume Sampling Lattices
Elisabeth Schold Linnér. Image processing on optimal volume sampling lattices: Thinking outside the box (Swedish title: Bildbehandling på optimala samplingsgitter. Att tänka utanför ramen). Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1314, 98 pp. Uppsala: Acta Universitatis Upsaliensis, 2015. ISSN 1651-6214, ISBN 978-91-554-9406-3, urn:nbn:se:uu:diva-265340.
Dissertation presented at Uppsala University, to be publicly examined in Pol2447, Informationsteknologiskt centrum (ITC), Lägerhyddsvägen 2, hus 2, Uppsala, on Friday, 18 December 2015 at 10:00, for the degree of Doctor of Philosophy. The examination will be conducted in English. Faculty examiner: Professor Alexandre Falcão (Institute of Computing, University of Campinas, Brazil).

Abstract: This thesis summarizes a series of studies of how image quality is affected by the choice of sampling pattern in 3D. Our comparison includes the Cartesian cubic (CC) lattice, the body-centered cubic (BCC) lattice, and the face-centered cubic (FCC) lattice. Our studies of the lattice Brillouin zones of lattices of equal density show that, while the CC lattice is suitable for functions with elongated spectra, the FCC lattice offers the least variation in resolution with respect to direction. The BCC lattice, however, offers the highest global cutoff frequency. The difference in behavior between the BCC and FCC lattices is negligible for a natural spectrum. We also present a study of pre-aliasing errors on anisotropic versions of the CC, BCC, and FCC sampling lattices, revealing that the optimal choice of sampling lattice is highly dependent on lattice orientation and anisotropy.
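To make the three lattices concrete, the following sketch (my own illustration, not code from the thesis) generates CC, BCC, and FCC points inside a cube at unit grid spacing and compares their point counts and nearest-neighbor distances.

```python
import numpy as np

def cc_points(n):
    """Cartesian cubic (CC) lattice: the integer grid points in [0, n)^3."""
    g = np.arange(n)
    pts = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
    return pts.reshape(-1, 3).astype(float)

def bcc_points(n):
    """Body-centered cubic (BCC): CC points plus a copy shifted by (1/2, 1/2, 1/2)."""
    base = cc_points(n)
    return np.vstack([base, base + 0.5])

def fcc_points(n):
    """Face-centered cubic (FCC): CC points plus the three face-center offsets per cell."""
    base = cc_points(n)
    offsets = np.array([[0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]])
    return np.vstack([base] + [base + o for o in offsets])

def nearest_neighbor_distance(pts):
    """Minimum pairwise distance, computed by brute force (fine for small sets)."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d[d == 0.0] = np.inf
    return float(d.min())

for name, pts in [("CC", cc_points(4)), ("BCC", bcc_points(4)), ("FCC", fcc_points(4))]:
    print(f"{name}: {len(pts)} points, "
          f"nearest-neighbor distance {nearest_neighbor_distance(pts):.3f}")
```

Note that at a fixed spacing the BCC and FCC constructions are two and four times denser than CC; the thesis compares lattices of equal density, which would require rescaling these point sets before drawing any conclusions.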
Blueprints Free
Blueprints by Barbara Delinsky. 512 pages, 25 Feb 2016, Little, Brown Book Group, ISBN 9780349405049, English, London, United Kingdom.
Blueprints: Changing Lives and Shaping Futures in Southwest Pennsylvania and West Virginia.
How does a complicated structure with so many parts, materials, and workers come together? The answer is in the history of blueprints. These documents are at the heart of any construction project, and they have been around for some time now. So, where did blueprints originate, and how are they evolving today? Before blueprints evolved into their modern form, look, and purpose, drawings from medieval times appear to be their earliest precursors. The Plan of St. Gall is one of the oldest known surviving architectural plans. Some historians consider this 9th-century drawing the very beginning of the history of blueprints. Mysteriously, the monastery depicted in the drawing was never actually built. So, a group in Germany is using this drawing, along with period tools and techniques, to learn more about architectural history. You can view detailed plans and models based on the plan here. The documents that emerged from the Renaissance era look more like modern blueprints than the ones from the medieval period. In fact, architect and engineer Filippo Brunelleschi used the camera obscura to copy architectural details from the classical ruins that inspired his work. Today, Brunelleschi is considered to be the father of the modern history of blueprints. The architects of the Renaissance period brought architectural drawing as we know it into existence, precisely and accurately reproducing the details of a structure via the tools of scale and perspective.
A Survey of Algorithms for Volume Visualization
T. Todd Elvins, Advanced Scientific Visualization Laboratory, San Diego Supercomputer Center
"... in 10 years, all rendering will be volume rendering." (Jim Kajiya at SIGGRAPH '91)

Many computer graphics programmers are working in the area of scientific visualization. One of the most interesting and fast-growing areas in scientific visualization is volume visualization. Volume visualization systems are used to create high-quality images from scalar and vector datasets defined on multi-dimensional grids, usually for the purpose of gaining insight into a scientific problem. Most volume visualization techniques are based on one of about five foundation algorithms. These algorithms, and the background necessary to understand them, are described here. Pointers to more detailed descriptions, further reading, and advanced techniques are also given.

Introduction
The following is an introduction to the fast-growing field of volume visualization for the computer graphics programmer. Many computer graphics techniques are used in volume visualization. [...] is given in [Fren89]. Advanced topics in medical volume visualization are covered in [Hohn90][Levo90c].

Furthering scientific insight
This section introduces the reader to the field of volume visualization as a subfield of scientific visualization and discusses many of the current research areas in both.

Challenges in scientific visualization
Scientific visualization uses computer graphics techniques to help give scientists insight into their data [McCo87][Brod91]. Insight is usually achieved by extracting scientifically meaningful information from numerical descriptions of complex phenomena through the use of interactive imaging systems. Scientists need these systems not only for their own insight, but also to share their results with their colleagues, the institutions that support the [...]
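The excerpt does not list the five foundation algorithms, but splatting, in which each voxel is projected onto the image plane and accumulated through a footprint kernel, is one of the direct rendering techniques such surveys commonly cover; whether it is among the five named in this survey is my assumption. The sketch below is a simplified additive (X-ray-style) splatter, not code from the survey; the Gaussian footprint and orthographic projection are illustrative choices.

```python
import numpy as np

def splat_orthographic(volume, sigma=0.8, radius=2):
    """Additive splatting: project every voxel onto the x-y image plane along z
    and accumulate a small Gaussian footprint scaled by the voxel value.
    No per-splat opacity or visibility ordering is modeled here."""
    H, W, D = volume.shape
    g = np.arange(-radius, radius + 1)
    footprint = np.exp(-(g[:, None] ** 2 + g[None, :] ** 2) / (2.0 * sigma ** 2))
    footprint /= footprint.sum()
    padded = np.zeros((H + 2 * radius, W + 2 * radius))
    for z in range(D):                                  # accumulate one slab at a time
        slab = volume[:, :, z]
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                padded[radius + dy:radius + dy + H, radius + dx:radius + dx + W] += \
                    footprint[dy + radius, dx + radius] * slab
    return padded[radius:radius + H, radius:radius + W]

vol = np.zeros((16, 16, 16))
vol[8, 8, :] = 1.0                                      # a bright line of voxels along z
image = splat_orthographic(vol)
print(image.shape, float(image.max()))                  # the line splats to a blurred dot
```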
Statistically Quantitative Volume Visualization
Joe M. Kniss (University of Utah), Robert Van Uitert (National Institutes of Health), Abraham Stephens (University of Utah), Guo-Shi Li (University of Utah), Tolga Tasdizen (University of Utah), and Charles Hansen (University of Utah)

Figure 1 (caption): A comparison of transfer function-based classification versus data-specific probabilistic classification. Panel A: Transfer Function-based Classification; Panel B: Unsupervised Probabilistic Classification. Both images are based on T1 MRI scans of a human head and show fuzzy classified white matter, gray matter, and cerebro-spinal fluid. Subfigure A shows the results of classification using a carefully designed 2D transfer function based on data value and gradient magnitude. Subfigure B shows a visualization of the data classified using a fully automatic, atlas-based method that infers class statistics using minimum entropy [...]

Abstract: Visualization users are increasingly in need of techniques for assessing quantitative uncertainty and error in the images produced. Statistical segmentation algorithms compute these quantitative results, yet volume rendering tools typically produce only qualitative imagery via transfer function-based classification. This paper presents a visualization technique that allows users to interactively explore the uncertainty, risk, and probabilistic decision of surface boundaries. Our approach makes it possible to directly visualize the combined "fuzzy" classification results from multiple segmentations by combining these data into a unified probabilistic data space. We represent this unified space, the combination of scalar volumes from numerous segmentations, using a novel graph-based dimensionality reduction scheme. The scheme both dramatically reduces the dataset size and is suitable for efficient, high-quality, quantitative visualization. Lastly, we show that the statistical risk arising from overlapping segmentations is a robust measure for visualizing features and assigning optical properties.
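As a small numerical illustration of the kind of quantity involved (my own sketch, not the paper's method), the snippet below combines per-class fuzzy probability volumes, takes the maximum-probability decision per voxel, and treats the remaining probability mass as a simple per-voxel risk value that could modulate opacity. The random volumes and the risk-to-opacity mapping are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three fuzzy class-probability volumes (think white matter, gray matter, CSF),
# normalized so the class probabilities sum to one at every voxel.
raw = rng.random((3, 16, 16, 16))
prob = raw / raw.sum(axis=0, keepdims=True)

label = prob.argmax(axis=0)        # maximum-probability class decision per voxel
p_max = prob.max(axis=0)
risk = 1.0 - p_max                 # probability mass of the competing classes

# Toy mapping from risk to opacity: uncertain voxels are rendered more transparent.
opacity = np.clip(1.0 - 2.0 * risk, 0.0, 1.0)

print("mean risk:", float(risk.mean()))
print("fraction of confident voxels (risk < 0.3):", float((risk < 0.3).mean()))
```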
HIA Announces Ceasefire, Vows to Disarm Fighters
"Eye on the News": Truthful, Factual and Unbiased. Vol. XI, Issue No. 54, Price: Afs. 15, Weekend Issue, Sponsored by Etisalat. Friday, September 23, 2016 (Mizan 02, 1395 H.S.). www.facebook.com/afghanistantimes, www.twitter.com/afghanistantime

Pakistan clings to policy of good, bad terrorists: Danish
WASHINGTON: Vice-President Sarwar Danish has alleged that Pakistan-based terrorists continue to conduct ruthless attacks against Afghan civilians. Addressing the United Nations General Assembly on Wednesday, the vice-president blasted the neighbouring country for its failure to eradicate terrorist safe havens on its soil. While accusing Islamabad of differentiating between "good and bad terrorists", he implied the Afghan Taliban and the Haqqani network enjoyed consistent support from Pakistani forces. He went on to ask: "Where were the leaders of the Taliban and Al-Qaeda residing [...]
[...] reserved the right to do whatever was necessary for the defence of its people. He urged the international community to eliminate terror safe havens. Afghanistan had always desired peaceful relations with all countries in the region, he said. "However, the government of national unity reserves the right to do whatever is necessary for the defense and protection of our people." He urged states to honestly implement their international pledges in the fight against terrorism and avoid a dual policy of making a distinction between good and bad terrorists.

HIA announces ceasefire, vows to disarm fighters
By Farhad Naibkhel
KABUL: The long-awaited peace agreement was signed on Thursday here between the Afghan government and Hezb-e-Islami Afghanistan (HIA), the largest militant group after the Taliban. The peace agreement with HIA, led by Gulbadin Hekmatyar [...]
Direct Volume Rendering with Nonparametric Models of Uncertainty
Tushar Athawale, Bo Ma, Elham Sakhaee, Chris R. Johnson (Fellow, IEEE), and Alireza Entezari (Senior Member, IEEE)

Figure 1 (caption): Nonparametric models of uncertainty improve the quality of reconstruction and classification within an uncertainty-aware direct volume rendering framework. (a) Improvements in the topology of an isosurface in the teardrop dataset (64 × 64 × 64) with uncertainty due to sampling and quantization. (b) Improvements in classification (i.e., bones in gray and kidneys in red) of the torso dataset with uncertainty due to downsampling.

Abstract: We present a nonparametric statistical framework for the quantification, analysis, and propagation of data uncertainty in direct volume rendering (DVR). The state-of-the-art statistical DVR framework allows for preserving the transfer function (TF) of the ground truth function when visualizing uncertain data; however, the existing framework is restricted to parametric models of uncertainty. In this paper, we address the limitations of the existing DVR framework by extending it to nonparametric distributions. We exploit the quantile interpolation technique to derive probability distributions representing uncertainty in viewing-ray sample intensities in closed form, which allows for accurate and efficient computation. We evaluate our proposed nonparametric statistical models through qualitative and quantitative comparisons with the mean-field and parametric statistical models, such as uniform and Gaussian, as well as Gaussian mixtures. In addition, we present an extension of the state-of-the-art parametric framework to 2D TFs for improved DVR classifications. We show the applicability of our uncertainty quantification framework to ensemble, downsampled, and bivariate versions of scalar field datasets.
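To illustrate the quantile-interpolation idea in the simplest one-dimensional setting, the sketch below interpolates the empirical quantile functions (inverse CDFs) of two neighboring voxels' value ensembles to obtain a nonparametric distribution at an in-between viewing-ray sample. This is a hedged sketch under my own assumptions (Gaussian toy ensembles, 99 fixed quantile levels), not the authors' closed-form implementation.

```python
import numpy as np

def quantile_interpolate(samples_a, samples_b, t, probs=None):
    """Interpolate two empirical distributions in quantile (inverse-CDF) space.

    samples_a, samples_b: 1D arrays of ensemble values at two neighboring voxels.
    t: interpolation weight in [0, 1] for a sample point between them.
    Returns interpolated quantile values describing the in-between distribution.
    """
    if probs is None:
        probs = np.linspace(0.01, 0.99, 99)
    qa = np.quantile(samples_a, probs)
    qb = np.quantile(samples_b, probs)
    return (1.0 - t) * qa + t * qb

rng = np.random.default_rng(1)
voxel_a = rng.normal(0.2, 0.05, size=200)   # value ensemble observed at voxel A
voxel_b = rng.normal(0.8, 0.10, size=200)   # value ensemble observed at voxel B

q_mid = quantile_interpolate(voxel_a, voxel_b, t=0.5)
print("median at the midpoint sample:", float(np.median(q_mid)))
# Probability that the midpoint sample exceeds an isovalue of 0.5, read off the quantiles.
print("P(value > 0.5):", float((q_mid > 0.5).mean()))
```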