VTK-m: Accelerating the Visualization Toolkit for Massively Threaded Architectures

Feature Article

Kenneth Moreland ■ Sandia National Laboratories
Hendrik Schroots ■ Intel
Christopher Sewell ■ Los Alamos National Laboratory
Kwan-Liu Ma ■ University of California, Davis
William Usher ■ University of Utah
Hank Childs ■ University of Oregon
Li-ta Lo ■ Los Alamos National Laboratory
Matthew Larsen ■ Lawrence Livermore National Laboratory
Jeremy Meredith and David Pugmire ■ Oak Ridge National Laboratory
Chun-Ming Chen ■ Ohio State University
James Kress ■ University of Oregon
Robert Maynard and Berk Geveci ■ Kitware

Traditional scientific visualization software approaches do not fare well in massively threaded environments. To address the needs of the high-performance computing community, the VTK-m framework fills the gaps in functionality by bringing together the most recent research.

Although the basic architecture for high-performance computing (HPC) platforms has remained homogeneous and consistent for more than a decade, revolutionary changes are appearing on leading-edge supercomputers. Plans for future supercomputers promise even larger changes. But one troubling attribute of future HPC machines is the massive increase in concurrency required to sustain peak computation. In fact, most project billions of threads to achieve an exaflop.1 This increase is partially attributed to requiring more cores to achieve faster aggregate computing rates and partially attributed to using additional threads per core to hide memory latency. Because of cost and power limitations, the system memory will not commensurately increase, which means algorithms will need strong scaling (that is, more parallelism per datum).

This trend toward massive threading can be seen in high-performance computing today. The current leadership-class computer at the Oak Ridge National Laboratory, Titan, requires between 70 and 500 million threads to run at peak, which is 300 times more than was required by its predecessor, JaguarPF. In contrast, the system memory grew only by a factor of 2.3.

The increasing reliance on concurrency to achieve faster execution rates invalidates the scalability of much of our scientific HPC code. New processor architectures are leading to new programming models and new algorithmic approaches. The design of new algorithms and their practical implementation are a critical extreme-scale challenge.2,3

To address these needs, HPC scientific visualization researchers working for the United States Department of Energy are building a new library called VTK-m that provides a framework for simplifying the design of visualization algorithms on current and future architectures. VTK-m also provides a flexible data model that can adapt to many scientific data types and operate well on multithreaded devices. Finally, VTK-m serves as a container for algorithms designed in the framework and gives the visualization community a common point to collaborate, contribute, and leverage massively threaded algorithms.

The Challenges of Highly Threaded Visualization

The scientific visualization research community has been building scalable HPC algorithms for more than 15 years, and today there are multiple production tools that provide excellent scalability. However, our current visualization tools are based on a message-passing programming model. They expect a coarse decomposition of the data that works best when each processing element has on the order of 100,000 to 1 million data cells.

For many years, HPC visualization applications such as ParaView, VisIt, EnSight, and FieldView have supported parallel processing on distributed-memory computer systems. The approach used by all these software products is a bulk synchronous parallel model, where algorithms perform the majority of their computation on independent local operations.4

This parallel computation model has worked well for the last 15 years. Even recent multicore processors could be leveraged reasonably efficiently as independent message-passing processes on each core, allowing these tools to scale to petascale machines.5

However, processors designed for HPC installations are undergoing transformative design changes. With physical limitations that prevent individual cores from executing instructions appreciably faster than their current rate, manufacturers are increasing the total computational bandwidth by adding more cores to each processor.6 Some HPC processor designs go even further to increase the total possible execution throughput by removing latency-hiding features and incorporating vector processing. A consequence of all these features is that it is no longer sufficient to treat each core as an independent processor.

The upshot is that our parallel computing model is no longer symmetric. The relationship between two cores on the same processor differs significantly from the relationship between two nodes of a supercomputer. To address this asymmetry, a popular approach is to use a mixed-mode, or hybrid, parallel computing model that incorporates two levels of task organization. The first level comprises distributed-memory message-passing nodes in a cluster-like arrangement. Then, within each node of the distributed-memory arrangement is a processor or small set of processors capable of executing numerous threads that may require coordination.

From our point of view, the principal advantage of the mixed-mode parallel model is that we can leverage our existing software to manage the message-passing parallel units. The VTK-m framework focuses on the intranode parallelism that often requires massive threading and synchronized execution.
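This two-level organization can be made concrete with a short sketch. The code below is a minimal mixed-mode example, not VTK-m code: it assumes MPI for the distributed level and OpenMP for the threaded level, and processLocalCells is a hypothetical stand-in for a visualization kernel.

```cpp
// Minimal mixed-mode (hybrid) parallelism sketch: MPI across nodes,
// OpenMP threads within each node. Illustrative only -- not VTK-m's API.
#include <mpi.h>
#include <omp.h>
#include <cstdio>
#include <vector>

// Hypothetical stand-in for a per-cell visualization kernel.
static double processLocalCells(const std::vector<double>& cells) {
  double result = 0.0;
  // Second level of parallelism: many coordinated threads per node.
  #pragma omp parallel for reduction(+ : result)
  for (long i = 0; i < static_cast<long>(cells.size()); ++i) {
    result += cells[i] * cells[i];  // placeholder computation
  }
  return result;
}

int main(int argc, char* argv[]) {
  // First level of parallelism: distributed-memory message passing.
  int provided;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Each rank owns a coarse partition of the data.
  std::vector<double> localCells(100000, rank + 1.0);
  const double localResult = processLocalCells(localCells);

  // Coordinate the coarse partitions with message passing.
  double globalResult = 0.0;
  MPI_Reduce(&localResult, &globalResult, 1, MPI_DOUBLE, MPI_SUM, 0,
             MPI_COMM_WORLD);
  if (rank == 0) std::printf("global result: %g\n", globalResult);

  MPI_Finalize();
  return 0;
}
```

Existing distributed-memory tools already handle the outer MPI level well; the difficult part, and the focus of VTK-m, is making the body of processLocalCells scale to hundreds or thousands of threads.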
Thus, new HPC systems require a much higher degree of parallelism that could require threads operating on as few as one to 10 data cells. At this fine degree of parallelism, our conventional visualization breaks down in multiple different ways.

Load Imbalance

Typically, data are partitioned under the assumption that the amount of work per datum is uniform. However, this is not true for all visualization algorithms, many of which generate data conditionally based on the input values. With only a few exceptions, all current parallel visualization functions completely ignore this load imbalance, which is considered tolerable when amortized over larger partitions.

When the data are decomposed to the cell level, this amortization no longer occurs, which results in a much more severe load imbalance. Finely threaded visualization algorithms need to be cognizant of potential load imbalance and schedule work accordingly.

Dynamic Memory Allocation

When the amount of data a visualization algorithm generates depends on the values of the input data, the output data's size and structure are not known at the outset of execution. In such a case, the algorithm must dynamically allocate memory as data are generated.

Because our conventional parallel visualization algorithms operate on coarse partitions in distributed memory spaces, processing elements can dynamically allocate memory completely independently from one another. In contrast, dynamic memory allocation from many threads within a shared-memory environment requires explicit synchronization that inhibits parallel execution.
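A standard way around both problems is the count-scan-write pattern built from data-parallel primitives. The sketch below is illustrative rather than VTK-m's actual API, and cellOutputCount is a hypothetical sizing rule: each cell first reports how much output it will produce, an exclusive prefix sum (std::exclusive_scan, C++17) converts those counts into write offsets, the output is allocated once at its exact size, and each cell then writes into its own disjoint range with no further synchronization.

```cpp
// Count-scan-write pattern: dynamic output sizes without per-thread
// allocation. Illustrative sketch assuming C++17; not VTK-m's API.
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

// Hypothetical sizing rule: cells generate output conditionally (much as
// contouring emits a variable number of triangles per cell).
static int cellOutputCount(double cellValue) {
  return cellValue > 0.5 ? 3 : 0;
}

int main() {
  const std::vector<double> cells = {0.1, 0.9, 0.7, 0.2, 0.8};

  // Phase 1 (parallelizable): each cell reports how much it will emit.
  std::vector<int> counts(cells.size());
  std::transform(cells.begin(), cells.end(), counts.begin(), cellOutputCount);

  // Phase 2: an exclusive prefix sum turns the counts into write offsets.
  std::vector<int> offsets(cells.size());
  std::exclusive_scan(counts.begin(), counts.end(), offsets.begin(), 0);

  // One allocation for the whole output, sized exactly.
  const int totalOutput = offsets.back() + counts.back();
  std::vector<double> output(totalOutput);

  // Phase 3 (parallelizable): each cell writes into its reserved range;
  // no locks are needed because the ranges are disjoint.
  for (std::size_t i = 0; i < cells.size(); ++i) {
    for (int j = 0; j < counts[i]; ++j) {
      output[offsets[i] + j] = cells[i];  // placeholder payload
    }
  }

  std::printf("emitted %d values\n", totalOutput);
  return 0;
}
```

The counts computed in the first phase also expose exactly where the work is, so the same pattern can drive scheduling decisions instead of assuming a uniform cells-per-thread split.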
Topological Connections

Scientific visualization algorithms are dominated by operations on topological connections in meshes. Care must be taken when defining these connections across boundaries of data assigned to different processing elements. Mutual data being read must be consistent, and mutual data being written must be coordinated.
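One concrete instance of coordinated writes is accumulating per-point data from cell-parallel threads: points shared by several cells are updated by several threads at once. The sketch below is a generic illustration, not VTK-m code; it counts cell incidence on a small hypothetical triangle mesh and resolves the write collisions with atomic increments.

```cpp
// Coordinated writes on shared mesh topology: threads working on
// adjacent cells update the same per-point data. Illustrative only.
#include <array>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
  // A tiny hypothetical mesh: two triangles sharing the edge (1, 2).
  const std::vector<std::array<int, 3>> triangles = {{0, 1, 2}, {1, 3, 2}};
  const int numPoints = 4;

  // Per-point valence (number of incident cells). Plain int counters
  // would race when two cells touch the same point; atomic increments
  // coordinate the concurrent writes.
  std::vector<std::atomic<int>> valence(numPoints);
  for (auto& v : valence) v.store(0);

  // One thread per cell, mimicking a fine-grained cell-parallel kernel.
  std::vector<std::thread> workers;
  for (const auto& tri : triangles) {
    workers.emplace_back([&valence, tri] {
      for (int pointId : tri) {
        valence[pointId].fetch_add(1, std::memory_order_relaxed);
      }
    });
  }
  for (auto& w : workers) w.join();

  for (int p = 0; p < numPoints; ++p) {
    std::printf("point %d: %d incident cells\n", p, valence[p].load());
  }
  return 0;
}
```

Atomics keep such writes correct but serialize contended updates; a common data-parallel alternative is to reformulate the operation as a sort or scan over point-cell pairs so that the writes become disjoint.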
Predecessors of VTK-m

Although the VTK-m software project itself started little more than a year ago, the software originated as an aggregation of three predecessor products: PISTON, Dax, and EAVL. The US Department of Energy (DoE) high-performance computing (HPC) community predicted early on that leadership-class facilities would be transitioning to heavily threaded processors and that we would need a significant change to our visualization algorithms and software.1 Consequently, researchers at the DoE national laboratories began considering the challenges of visualization on accelerator processors and created three separate toolkits, each focusing on a specific aspect of the problem.

The first toolkit, PISTON,2 considers the design of portable multithreaded visualization algorithms. Built on top of the Thrust library,3 algorithms in PISTON comprise a sequence of general parallel operations. Originally designed for CUDA, Thrust has a flexible device back…
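The primitive-composition style PISTON uses can be suggested with a small generic Thrust example (this is not PISTON source code, and it assumes a CUDA toolchain): an algorithm is expressed as a chain of general parallel operations, here a transform followed by a reduction.

```cpp
// A small Thrust sketch in the spirit of primitive-composition designs:
// an algorithm expressed as a chain of general parallel operations.
// Generic example, not PISTON source code.
#include <thrust/device_vector.h>
#include <thrust/reduce.h>
#include <thrust/transform.h>
#include <cstdio>

// A functor applied in parallel across the whole array.
struct Square {
  __host__ __device__ float operator()(float x) const { return x * x; }
};

int main() {
  thrust::device_vector<float> field(1 << 20, 2.0f);
  thrust::device_vector<float> squared(field.size());

  // General parallel operation 1: elementwise transform.
  thrust::transform(field.begin(), field.end(), squared.begin(), Square());

  // General parallel operation 2: parallel reduction.
  const float sum = thrust::reduce(squared.begin(), squared.end(), 0.0f);

  std::printf("sum of squares: %g\n", sum);
  return 0;
}
```

Because Thrust dispatches such primitives to a device backend at compile time, the same composition can target CUDA devices or multicore CPUs, which is what makes the approach attractive for portable visualization algorithms.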
…scale visualization problem, but the software packages did not integrate well. Recognizing the prospect of substantial duplication of effort, the developers of these software projects came together to work under a unified software product: VTK-m. Although VTK-m was born from a new code base, the PISTON, Dax, and EAVL developers contributed and evolved their respective technologies. The development of its predecessors has been phased out, and VTK-m is now a unified, well-integrated product of these three predecessors with continually evolving capabilities.

References

1. S. Ahern et al., "Scientific Discovery at the Exascale: Report from the DOE ASCR 2011 Workshop on Exascale Data Management, Analysis, and Visualization," Dept. of Energy, 2011.