Visual Computing in Materials Sciences

Report from Dagstuhl Seminar 19151

Edited by Christoph Heinzl (1), Robert Michael Kirby (2), Stepan V. Lomov (3), Guillermo Requena (4), and Rüdiger Westermann (5)

1 FH Oberösterreich – Wels, AT, [email protected]
2 University of Utah – Salt Lake City, US, [email protected]
3 KU Leuven, BE, [email protected]
4 DLR – Köln, DE, [email protected]
5 TU München, DE, [email protected]

Abstract

Visual computing has become highly attractive for boosting research endeavors in the materials science domain [1]. Using visual computing, a multitude of phenomena may now be studied at various scales and dimensions and with different modalities, which was simply impossible before. Visual computing techniques generate novel insights to understand, discover, design, and use complex material systems of interest. Their huge potential for retrieving and visualizing (new) information on materials, their characteristics and interrelations, as well as for simulating a material's behavior in its target application environment, is of core relevance to materials scientists. This Dagstuhl seminar on Visual Computing in Materials Sciences therefore focuses on the intersection of both domains to guide research endeavors in this field. It aims to provide answers to the following four challenges, which address pressing needs:

The Integrated Visual Analysis Challenge arises because standard visualization tools are insufficient for exploring materials science data in detail. What is required are integrated visual analysis tools that are tailored to a specific application area and guide users in their investigations. Using linked views and other interaction concepts, these tools must combine all data domains with meaningful and easy-to-understand visualization techniques. Such tools and techniques are especially anticipated for the analysis of spatial and temporal data from dynamic processes (e.g., materials tested under load or in different environmental conditions) and of multimodal, multiscale data. Only integrated analysis concepts make the most of all the available data.

The Quantitative Data Visualization Challenge centers around the design and implementation of tailored visual analysis systems for extracting and analyzing derived data (e.g., data computed from extracted features over spatial, temporal, or even higher-dimensional domains). Feature extraction and quantification techniques, segmentation techniques, and clustering techniques are therefore required as prerequisites for the targeted visual analysis. Since quantification may easily yield 25 or more computed properties per feature, clustering techniques allow features of interest to be grouped into feature classes. These feature classes may then be evaluated statistically to visualize the properties of the individual features as well as of the different classes. Information visualization techniques will be of special interest for solving this challenge; a minimal sketch of such a quantification-and-clustering pipeline is shown below.
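To make the quantification step concrete, the following Python sketch groups a synthetic feature table into feature classes with k-means and summarizes each class statistically. It is only a minimal illustration under assumed names: the property list, the random stand-in data, and the choice of three classes are not taken from the report, and a real pipeline would derive the table from segmented image data.

```python
# Minimal sketch (hypothetical): cluster extracted features into classes
# and evaluate each class statistically, as the challenge describes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)

# Stand-in for a real feature table: one row per extracted feature
# (e.g., a fiber or a pore), one column per computed property.
# In practice, 25 or more properties may be computed per feature.
properties = ["volume", "surface_area", "elongation", "orientation"]
table = rng.random((500, len(properties)))

# Standardize so properties with large value ranges do not dominate.
scaled = StandardScaler().fit_transform(table)

# Group the features of interest into a small number of feature classes.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

# Statistically evaluate each class: per-property mean and spread.
for cls in np.unique(labels):
    members = table[labels == cls]
    print(f"class {cls}: {len(members)} features")
    for name, mean, std in zip(properties, members.mean(0), members.std(0)):
        print(f"  {name}: mean={mean:.3f} std={std:.3f}")
```

The per-class statistics computed here are exactly the kind of derived data that an information visualization layer (e.g., box plots or scatterplot matrices per feature class) would then present interactively.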
The Visual Debugger Challenge is the idea of using visual analysis to remove errors in the parametrization of a simulation or a data acquisition process. Similar to a debugger in computer programming, which identifies errors in the code and provides hints for improvement, a visual debugger in the domain of visual computing for materials science should show the following characteristics: it should indicate errors and identify wrongly used algorithms in the data analysis; it should identify incorrect parameters, which either show no or very limited benefit or even produce erroneous results; and it should give directions on how to improve a targeted analysis and suggest suitable algorithms or pipelines for specific tasks.

The Interactive Steering Challenge uses visual analysis tools to control a running simulation or an ongoing data acquisition process. Such tools monitor costly processes and give directions for improving results with respect to the respective targets. In the materials analysis domain, for example, this could be a system that suggests settings for improved data acquisition based on the image quality currently achieved: if the image quality no longer fulfills the target requirements, the system adjusts the available degrees of freedom in the data acquisition to enhance image quality. The same holds for the materials simulation domain: visual analysis can help to steer target material properties in a specific application environment by predicting the tendencies of costly simulation runs, e.g., using cheaper surrogate models, as sketched below.
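As a rough illustration of the surrogate idea, the following Python sketch fits a cheap Gaussian-process surrogate to a few completed runs of an expensive simulation, predicts tendencies over the whole parameter range, and proposes where to spend the next costly run. The toy simulation, its single parameter, and the maximum-uncertainty criterion are assumptions for illustration only; an actual steering tool would visualize the predicted means and uncertainties and keep the scientist in the loop.

```python
# Minimal sketch (hypothetical): steering costly simulation runs with a
# cheap Gaussian-process surrogate model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    # Placeholder for a costly run, e.g., a solver mapping a process
    # parameter to a target material property.
    return np.sin(3.0 * x) + 0.5 * x

# A handful of already completed runs seed the surrogate.
X = np.array([[0.1], [0.4], [0.7], [1.0]])
y = expensive_simulation(X.ravel())

for step in range(5):
    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                         normalize_y=True).fit(X, y)
    # Predict tendencies over the whole parameter range at low cost ...
    candidates = np.linspace(0.0, 1.5, 200).reshape(-1, 1)
    mean, std = surrogate.predict(candidates, return_std=True)
    # ... and propose the next costly run where the surrogate is most
    # uncertain; a steering tool would let the user confirm or override.
    x_next = candidates[np.argmax(std)]
    X = np.vstack([X, [x_next]])
    y = np.append(y, expensive_simulation(x_next[0]))
    print(f"step {step}: next run at x = {x_next[0]:.3f}")
```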
References

[1] Heinzl, C. and Stappen, S. (2017). STAR: Visual Computing in Materials Science. Computer Graphics Forum, 36: 647–666. doi:10.1111/cgf.13214

Seminar April 7–12, 2019 – http://www.dagstuhl.de/19151

2012 ACM Subject Classification Computing methodologies → Computer graphics; Computing methodologies → Computer vision; Information systems → Data structures; Computing methodologies → Modeling and simulation; Theory of computation → Parameterized complexity and exact algorithms

Keywords and phrases Data Structures, Interaction, Materials Science, Visual Computing, Visualization / Visual Analysis

Digital Object Identifier 10.4230/DagRep.9.4.1

Edited in cooperation with Kristi Potter

Except where otherwise noted, content of this report is licensed under a Creative Commons BY 3.0 Unported license. Visual Computing in Materials Sciences, Dagstuhl Reports, Vol. 9, Issue 4, pp. 1–42. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany.

1 Executive Summary

Christoph Heinzl (FH Oberösterreich – Wels, AT)
Robert Michael Kirby (University of Utah – Salt Lake City, US)
Stepan V. Lomov (KU Leuven, BE)
Guillermo Requena (DLR – Köln, DE)
Rüdiger Westermann (TU München, DE)

License: Creative Commons BY 3.0 Unported license © Christoph Heinzl, Robert Michael Kirby, Stepan V. Lomov, Guillermo Requena, and Rüdiger Westermann

In this Dagstuhl workshop, we brought together computer and computational scientists interested in building visual computing tools with materials scientists who had expressed interest in using such tools. As might be anticipated when two distinct fields come together, the initial challenge we encountered was that of language: although both groups arrived with experience of visual computing tools – some as developers and some as users – they often used the same terms to mean semantically different things. We found the Dagstuhl philosophy of "immersion" most helpful in this regard, as several days together helped break down these barriers. Over the course of the week, we interspersed talks by computational scientists and materials scientists.

The talks by computational scientists often presented their current understanding of what kinds of tools are needed, demonstrations of current tools developed in collaboration with domain experts, and success stories of applications they have impacted. The talks by the materials scientists often presented descriptions of the tools they currently use, the strengths and deficiencies of those tools, the types of features they would like to see in future tools, and examples of current challenge problems and how the next generation of tools might impact them.

Fundamental Results:

1. The systems desired by many materials scientists will be used both for exploration and for interactive steering. When used for exploration, materials scientists want tools that not only present the data with its corresponding reliability (uncertainty) bounds, but also provide predictive capabilities, such as where to sample next.

2. There is general acknowledgement that both automation and interactivity are needed. Automating tasks and procedures through AI and machine learning can help deal with the volumes of data being produced, helping scientists sift through the field of possibilities and isolate the places on which they should expend human effort. At the same time, many current practices continue to require "the human in the loop" to make decisions. In such cases, tools are needed that have smart defaults yet allow the user to explore, navigate, and possibly refine the data.

3. Although many current tools for materials science applications leverage previous visualization and interaction technologies, there is still much to be done. Many materials science applications require specializing existing algorithms and techniques, especially in real-time systems. Furthermore, many techniques originally designed for batch or manual processing need to be re-engineered to allow for the interactive procedures required by current and future materials science application scientists.

4. With regard to visualization scientists, there is a need for both data and tasks. Many researchers requested data on which they can try their methods. In addition to the data itself, descriptors of the data are necessary so that it can be interpreted properly. Once the data is read into their systems, the visualization scientists then requested a collection of tasks (driven by the materials science domain experts) that would help drive their tool development and evaluation.

Final Comments

Due to the ever-increasing