Highlights and Impacts of ASCR's Programs

Total pages: 16

File type: PDF; size: 1020 KB

ASCR@40: Highlights and Impacts of ASCR's Programs

A report compiled by the ASCAC Subcommittee on the 40-year history of ASCR for the U.S. Department of Energy's Office of Advanced Scientific Computing Research, November 2020.

Cover image: This snapshot of a thermonuclear supernova simulation with asymmetric ignition in a white dwarf was created by the Supernova Science Center, which was part of SciDAC-1. Led by Michael Zingale of Stony Brook, the project wanted to see whether the ash bubble (in purple) would sweep around the star and collide with enough force to compress fuel in the area to trigger a spontaneous detonation. The 3D results disfavored that possibility. At the time, the question of how the white dwarf ignites was one of the most debated issues in Type Ia supernova modeling.

Chairs: Bruce Hendrickson, Paul Messina
Committee members and authors: Buddy Bland, Jackie Chen, Phil Colella, Eli Dart, Jack Dongarra, Thom Dunning, Ian Foster, Richard Gerber, Rachel Harken, Wendy Huntoon, Bill Johnston, John Sarrao, Jeff Vetter
Editors: Jon Bashor, Tiffani Conner

CONTENTS

1.0 INTRODUCTION
    Document Overview
2.1 THE THIRD PILLAR OF SCIENCE: DELIVERING ON THE PROMISE OF COMPUTATIONAL SCIENCE
    Background
    Infrastructure for Computational Science & Engineering
    Computational Resources
    Advances in Applied Mathematics
    Advances in Computer Science
    High Performance Computing Initiatives and Programs
2.2 APPLIED MATHEMATICS: LAYING THE FOUNDATION
    Differential Equations
    Partial Differential Equations
    Solvers
    Optimization
2.3 COMPUTER SCIENCE: UNLEASHING SCALE AND SPEED
    Massively Parallel Processing Computing
    Large-scale Data Analysis and Visualization
    Deciding Who to Trust in a World of Strangers
    DOE Science Grid, ANL, LBNL, ORNL, PNNL
    Laying the Foundations for Scientific AI
2.4 COMPUTER ARCHITECTURE
    Context
    Paradigm Shifts
    Computer Architecture Testbeds
    Government and Industry Partnerships
2.5 ASCR COMPUTING AND NETWORKING FACILITIES IN SEVENTH DECADE OF DRIVING SCIENTIFIC DISCOVERY
    Early History
    Bringing Computation to Open Science
    Working in Parallel
    Big Changes in the 1990s
    ASCR Cements DOE's Role in HPC
    Performance Modeling
    Getting Big Science out of Big Data
    Networking at the Speed of Science: The Energy Sciences Network
    The First DOE Science-Oriented Network
    Saving the Internet
    Pushing Networking Technology
    Evolving to 400 Gbps Networks
    Services for Advanced Networking Supporting Data-Intensive Science
    perfSONAR Performance Monitoring Software
    ESnet's Sixth-generation Network Built on SDN Expertise
    ASCR's Supercomputing Landscape
    Conclusion
3.0 ASCR INVESTMENTS HELP DRIVE INDUSTRIAL DISCOVERY AND ECONOMIC COMPETITIVENESS
    Software Libraries Boost Circulation of ASCR Expertise
    ASCR Computing Facilities Help Drive Industrial Innovation
    Helping Small Businesses Innovate
4.0 ASCR'S IMPACT ON WORKFORCE DEVELOPMENT AND EDUCATION
    Educational Activities
    The DOE Computational Science Graduate Fellowship
Recommended publications
  • Petaflops for the People
    Thousands of researchers have used facilities of the Advanced Scientific Computing Research (ASCR) program and its Department of Energy (DOE) computing predecessors over the past four decades. Their studies of hurricanes, earthquakes, green-energy technologies and many other basic and applied science problems have, in turn, benefited millions of people. They owe it mainly to the capacity provided by the National Energy Research Scientific Computing Center (NERSC), the Oak Ridge Leadership Computing Facility (OLCF) and the Argonne Leadership Computing Facility (ALCF).

    These ASCR installations have helped train the advanced scientific workforce of the future. Postdoctoral scientists, graduate students and early-career researchers have worked there, learning to configure the world's most sophisticated supercomputers for their own various and wide-ranging projects. Cutting-edge supercomputing, once the purview of a small group of experts, has trickled down to the benefit of thousands of investigators in the broader scientific community. Today, NERSC, at Lawrence Berkeley National Laboratory;

    PETAFLOPS SPOTLIGHT: NERSC – EXTREME-WEATHER NUMBER-CRUNCHING

    Certain problems lend themselves to solution by computers. Take hurricanes, for instance: They're too big, too dangerous and perhaps too expensive to understand fully without a supercomputer. Using decades of global climate data in a grid comprised of 25-kilometer squares, researchers in Berkeley Lab's Computational Research Division captured the formation of hurricanes and typhoons and the extreme waves that they generate. Those same models, when run at resolutions of about 100 kilometers, missed the tropical cyclones and resulting waves, up to 30 meters high. Their findings, published in Geophysical Research Letters, demonstrated the importance of running climate models at higher resolution.
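    A quick back-of-envelope count (ours, not a figure from the article) shows why that refinement is so much more demanding: going from 100-kilometer to 25-kilometer grid squares multiplies the number of cells in each horizontal layer by

    \[
    \left(\frac{100\ \mathrm{km}}{25\ \mathrm{km}}\right)^{2} = 16,
    \]

    and finer grids generally also force proportionally shorter time steps, so the total work grows even faster, which is why cyclone-resolving climate runs need supercomputers.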
  • MFEM: A Modular Finite Element Methods Library (arXiv:1911.09220v2 [cs.MS], 13 Jul 2020)
    MFEM: A MODULAR FINITE ELEMENT METHODS LIBRARY. Robert Anderson, Julian Andrej, Andrew Barker, Jamie Bramwell, Jean-Sylvain Camier, Jakub Cerveny, Veselin Dobrev, Yohann Dudouit, Aaron Fisher, Tzanio Kolev, Will Pazner, Mark Stowell, Vladimir Tomov (Lawrence Livermore National Laboratory, Livermore, USA); Ido Akkerman (Delft University of Technology, Netherlands); Johann Dahm (IBM Research – Almaden, Almaden, USA); David Medina (Occalytics, LLC, Houston, USA); Stefano Zampini (King Abdullah University of Science and Technology, Thuwal, Saudi Arabia). arXiv:1911.09220v2 [cs.MS], 13 Jul 2020.

    Abstract. MFEM is an open-source, lightweight, flexible and scalable C++ library for modular finite element methods that features arbitrary high-order finite element meshes and spaces, support for a wide variety of discretization approaches and emphasis on usability, portability, and high-performance computing efficiency. MFEM's goal is to provide application scientists with access to cutting-edge algorithms for high-order finite element meshing, discretizations and linear solvers, while enabling researchers to quickly and easily develop and test new algorithms in very general, fully unstructured, high-order, parallel and GPU-accelerated settings. In this paper we describe the underlying algorithms and finite element abstractions provided by MFEM, discuss the software implementation, and illustrate various applications of the library.

    1. Introduction. The Finite Element Method (FEM) is a powerful discretization technique that uses general unstructured grids to approximate the solutions of many partial differential equations (PDEs). It has been exhaustively studied, both theoretically and in practice, in the past several decades [1, 2, 3, 4, 5, 6, 7, 8]. MFEM is an open-source, lightweight, modular and scalable software library for finite elements, featuring arbitrary high-order finite element meshes and spaces, support for a wide variety of discretization approaches and emphasis on usability, portability, and high-performance computing (HPC) efficiency [9].
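    To give a flavor of the abstractions the abstract names (meshes, finite element collections and spaces, bilinear forms, solvers), here is a minimal Poisson solve closely modeled on the ex1.cpp example bundled with MFEM. Treat it as a sketch under an assumed MFEM 4.x API; exact class and method signatures may differ between library versions.

    ```cpp
    // Minimal Poisson solve, closely following MFEM's bundled ex1.cpp.
    // Assumes an MFEM 4.x API; signatures may differ between versions.
    #include "mfem.hpp"
    #include <iostream>

    using namespace mfem;

    int main()
    {
       // 8x8 quadrilateral mesh of the unit square.
       Mesh mesh = Mesh::MakeCartesian2D(8, 8, Element::QUADRILATERAL);

       // Second-order continuous (H1) space; "arbitrary high order"
       // here just means choosing a larger polynomial order.
       H1_FECollection fec(2, mesh.Dimension());
       FiniteElementSpace fespace(&mesh, &fec);

       // Homogeneous Dirichlet conditions on the whole boundary.
       Array<int> ess_tdof_list;
       Array<int> ess_bdr(mesh.bdr_attributes.Max());
       ess_bdr = 1;
       fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

       // Right-hand side (f, v) with f = 1, and stiffness form (grad u, grad v).
       ConstantCoefficient one(1.0);
       LinearForm b(&fespace);
       b.AddDomainIntegrator(new DomainLFIntegrator(one));
       b.Assemble();
       BilinearForm a(&fespace);
       a.AddDomainIntegrator(new DiffusionIntegrator(one));
       a.Assemble();

       GridFunction x(&fespace);
       x = 0.0;

       // Form the reduced linear system and solve with preconditioned CG.
       OperatorPtr A;
       Vector B, X;
       a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);
       GSSmoother M((SparseMatrix &)(*A));
       PCG(*A, M, B, X, 1, 400, 1e-12, 0.0);

       // Recover the finite element solution and report its norm.
       a.RecoverFEMSolution(X, b, x);
       std::cout << "solution l2 norm: " << x.Norml2() << std::endl;
       return 0;
    }
    ```

    Switching the mesh, the order passed to H1_FECollection, or the integrators is how the same skeleton scales from this toy problem to the high-order, GPU-accelerated settings the paper describes.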
  • 2020 ALCF Science Report
    ARGONNE LEADERSHIP COMPUTING FACILITY: 2020 Science Report.

    On the cover: A snapshot of a visualization of the SARS-CoV-2 viral envelope comprising 305 million atoms. A multi-institutional research team used multiple supercomputing resources, including the ALCF's Theta system, to optimize codes in preparation for large-scale simulations of the SARS-CoV-2 spike protein that were recognized with the ACM Gordon Bell Special Prize for HPC-Based COVID-19 Research. Image: Rommie Amaro, Lorenzo Casalino, Abigail Dommer, and Zied Gaieb, University of California San Diego.

    Contents: Message from ALCF Leadership; Argonne Leadership Computing Facility; About ALCF; ALCF Team; ALCF Computing Resources; Advancing Science with HPC; ALCF Resources Contribute to Fight Against COVID-19; Edge Services Propel Data-Driven Science; Preparing for Science in the Exascale Era; Science; Accessing ALCF Resources for Science; 2020 Science Highlights; Biological Sciences; Materials Science; Physics.

    Highlighted projects include: GPCNeT: Designing a Benchmark Suite for Inducing and Measuring Contention in HPC Networks (Sudheer Chunduri); Parallel Relational Algebra for Logical Inferencing at Scale (Sidharth Kumar); Scalable Reinforcement-Learning-Based Neural Architecture Search; The Last Journey; Constructing and Navigating Polymorphic Landscapes of Molecular Crystals (Alexandre Tkatchenko); Data-Driven Materials Discovery for Optoelectronic Applications; Hadronic Light-by-Light Scattering and Vacuum Polarization Contributions to the Muon Anomalous Magnetic Moment (Thomas Blum).
  • MATRIX Bench: Benchmarking the State-of-the-Art Task Execution Frameworks of Many-Task Computing
    MATRIX: Bench – Benchmarking the State-of-the-Art Task Execution Frameworks of Many-Task Computing. Thomas Dubucq, Tony Forlini, Virgile Landeiro Dos Reis, and Isabelle Santos. Illinois Institute of Technology, Chicago, IL, USA. {tdubucq, tforlini, vlandeir, isantos1}@hawk.iit.edu

    Abstract. Technology trends indicate that exascale systems will have billion-way parallelism, and each node will have about three orders of magnitude more intra-node parallelism than today's peta-scale systems. The majority of current runtime systems focus a great deal of effort on optimizing the inter-node parallelism by maximizing the bandwidth and minimizing the latency of the use of interconnection networks and storage, but suffer from the lack of scalable solutions to expose the intra-node parallelism. Many-task computing (MTC) is a distributed fine-grained paradigm that aims to address the challenges of managing parallelism and locality of exascale systems. MTC applications are typically structured as directed acyclic graphs of loosely coupled short tasks with explicit input/output data dependencies; a sketch of this execution model follows below.

    … Stanford University. Finally, HPX is a general-purpose C++ runtime system for parallel and distributed applications of any scale developed by Louisiana State University, and STAPL is a framework for developing parallel programs from Texas A&M. MATRIX is a many-task computing job scheduling system [3]. There are many resource-managing systems aimed towards data-intensive applications. Furthermore, distributed task scheduling in many-task computing is a problem that has been considered by many research teams. In particular, Charm++ [4], Legion [5], Swift [6], [10], Spark [1][2], HPX [12], STAPL [13] and MATRIX [11] offer solutions to this problem and have
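    Since the excerpt's key idea is the MTC execution model (applications as DAGs of short tasks with explicit dependencies), here is a small, self-contained sketch of executing such a graph once each task's inputs are satisfied. This is our illustration of the general technique, not code from MATRIX or any of the cited frameworks; the Task and run_dag names are ours.

    ```cpp
    // Toy many-task-computing style executor: tasks form a DAG with
    // explicit dependencies; a task becomes ready when all inputs finish.
    // Illustrative only -- real MTC systems run tasks concurrently
    // across nodes; this version runs them serially in dependency order.
    #include <cstdio>
    #include <functional>
    #include <queue>
    #include <string>
    #include <vector>

    struct Task {
        std::string name;
        std::vector<int> deps;          // indices of tasks this one waits on
        std::function<void()> work;     // the short task body
    };

    void run_dag(std::vector<Task>& tasks)
    {
        const int n = static_cast<int>(tasks.size());
        std::vector<int> pending(n);             // unmet dependency counts
        std::vector<std::vector<int>> users(n);  // reverse (output) edges
        std::queue<int> ready;

        for (int i = 0; i < n; ++i) {
            pending[i] = static_cast<int>(tasks[i].deps.size());
            for (int d : tasks[i].deps) users[d].push_back(i);
            if (pending[i] == 0) ready.push(i);  // no inputs: ready now
        }
        // Kahn's algorithm: pop ready tasks, release their dependents.
        while (!ready.empty()) {
            int t = ready.front(); ready.pop();
            tasks[t].work();
            for (int u : users[t])
                if (--pending[u] == 0) ready.push(u);
        }
    }

    int main() {
        auto say = [](const char* s) { return [s] { std::printf("%s\n", s); }; };
        std::vector<Task> tasks = {
            {"fetch",  {},     say("fetch input")},
            {"split",  {0},    say("split into shards")},
            {"map-a",  {1},    say("process shard a")},
            {"map-b",  {1},    say("process shard b")},
            {"reduce", {2, 3}, say("combine results")},
        };
        run_dag(tasks);   // prints the tasks in a valid dependency order
        return 0;
    }
    ```

    A distributed scheduler like MATRIX faces the same bookkeeping, plus the hard part the paper benchmarks: deciding which node runs each ready task, and balancing load as tasks finish.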
  • Fundamentals of Computing (2019-20), Course Code 5023 (502800CH: grade 7 for ½ high school credit; 502900CH: grade 8 for ½ high school credit)
    EXPLORING COMPUTER SCIENCE, NEW NAME: FUNDAMENTALS OF COMPUTING (2019-20). COURSE CODE: 5023; 502800CH (grade 7 for ½ high school credit); 502900CH (grade 8 for ½ high school credit).

    COURSE DESCRIPTION: Fundamentals of Computing is designed to introduce students to the field of computer science through an exploration of engaging and accessible topics. Through creativity and innovation, students will use critical thinking and problem-solving skills to implement projects that are relevant to students' lives. They will create a variety of computing artifacts while collaborating in teams. Students will gain a fundamental understanding of the history and operation of computers, programming, and web design. Students will also be introduced to computing careers and will examine societal and ethical issues of computing.

    OBJECTIVE: Given the necessary equipment, software, supplies, and facilities, the student will be able to successfully complete the following core standards for courses that grant one unit of credit.

    RECOMMENDED GRADE LEVELS: 9-12 (preference 9-10)
    COURSE CREDIT: 1 unit (120 hours)
    COMPUTER REQUIREMENTS: One computer per student with Internet access
    RESOURCES: See attached Resource List

    A. SAFETY. Effective professionals know the academic subject matter, including safety as required for proficiency within their area. They will use this knowledge as needed in their role. The following accountability criteria are considered essential for students in any program of study.
    1. Review school safety policies and procedures.
    2. Review classroom safety rules and procedures.
    3. Review safety procedures for using equipment in the classroom.
    4. Identify major causes of work-related accidents in office environments.
    5. Demonstrate safety skills in an office/work environment.
  • Accelerating the LOBPCG Method on GPUs Using a Blocked Sparse Matrix Vector Product
    Accelerating the LOBPCG method on GPUs using a blocked Sparse Matrix Vector Product. Hartwig Anzt, Stanimire Tomov, and Jack Dongarra. Innovative Computing Lab, University of Tennessee, Knoxville, USA. Email: [email protected], [email protected], [email protected]

    Abstract. This paper presents a heterogeneous CPU-GPU algorithm design and optimized implementation for an entire sparse iterative eigensolver – the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) – starting from low-level GPU data structures and kernels to the higher-level algorithmic choices and overall heterogeneous design. Most notably, the eigensolver leverages the high performance of a new GPU kernel developed for the simultaneous multiplication of a sparse matrix and a set of vectors (SpMM). This is a building block that serves as a backbone for not only block-Krylov, but also for other methods relying on blocking for acceleration in general. The heterogeneous LOBPCG developed here reveals the potential of this type of eigensolver by highly optimizing all of its components, and can be viewed as a benchmark for other SpMM-dependent applications.

    … the computing power of today's supercomputers, often accelerated by coprocessors like graphics processing units (GPUs), becomes challenging. While there exist numerous efforts to adapt iterative linear solvers like Krylov subspace methods to coprocessor technology, sparse eigensolvers have so far remained outside the main focus. A possible explanation is that many of those combine sparse and dense linear algebra routines, which makes porting them to accelerators more difficult. Aside from the power method, algorithms based on the Krylov subspace idea are among the most commonly used general eigensolvers [1]. When targeting symmetric positive definite eigenvalue problems, the recently developed Locally Optimal
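    The central building block here, SpMM (one sparse matrix times a whole block of vectors), is easy to state in code. Below is a minimal CPU sketch of a CSR-based SpMM; it is our illustration of the general technique, not the paper's GPU kernel, and the CsrMatrix, spmm, and nvec names are ours. The point it demonstrates is the one the paper exploits: each matrix entry is read once and reused for every vector in the block, instead of once per vector as in repeated SpMV calls.

    ```cpp
    // Minimal CSR sparse matrix times a block of vectors (SpMM).
    // CPU sketch of the technique; the paper's contribution is a
    // highly optimized GPU kernel for the same operation.
    #include <cstdio>
    #include <vector>

    struct CsrMatrix {
        int nrows, ncols;
        std::vector<int>    rowptr;  // size nrows+1
        std::vector<int>    colind;  // size nnz
        std::vector<double> values;  // size nnz
    };

    // Y = A * X, where X and Y each hold `nvec` vectors stored row-major:
    // X[i*nvec + k] is component i of vector k. The irregular access to
    // A's entries is amortized across all nvec vectors per pass.
    void spmm(const CsrMatrix& A, const std::vector<double>& X,
              std::vector<double>& Y, int nvec)
    {
        for (int i = 0; i < A.nrows; ++i) {
            for (int k = 0; k < nvec; ++k) Y[i * nvec + k] = 0.0;
            for (int p = A.rowptr[i]; p < A.rowptr[i + 1]; ++p) {
                const int j = A.colind[p];
                const double a = A.values[p];   // A(i,j) read once,
                for (int k = 0; k < nvec; ++k)  // reused for all vectors
                    Y[i * nvec + k] += a * X[j * nvec + k];
            }
        }
    }

    int main() {
        // 3x3 example: A = [[2,0,1],[0,3,0],[1,0,2]], two input vectors.
        CsrMatrix A{3, 3, {0, 2, 3, 5}, {0, 2, 1, 0, 2}, {2, 1, 3, 1, 2}};
        std::vector<double> X = {1, 0,  0, 1,  1, 1};  // per-row (v0, v1) pairs
        std::vector<double> Y(3 * 2);
        spmm(A, X, Y, 2);
        for (int i = 0; i < 3; ++i)
            std::printf("row %d: %.1f %.1f\n", i, Y[i * 2], Y[i * 2 + 1]);
        return 0;   // expected: (3,1), (0,3), (3,2)
    }
    ```

    Block methods like LOBPCG iterate on a set of vectors at once, so nearly all of their sparse work takes exactly this SpMM form, which is why a fast SpMM kernel lifts the whole eigensolver.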
  • Emerging Technologies Multi/Parallel Processing
    Emerging Technologies: Multi/Parallel Processing. Mary C. Kulas, New Computing Structures, Strategic Relations Group, December 1987. For Internal Use Only.

    Copyright 1987 by Digital Equipment Corporation. Printed in U.S.A. The information contained herein is confidential and proprietary. It is the property of Digital Equipment Corporation and shall not be reproduced or copied in whole or in part without written permission. This is an unpublished work protected under the Federal copyright laws. The following are trademarks of Digital Equipment Corporation, Maynard, MA 01754: DECpage, LN03. This report was produced by Educational Services with DECpage and the LN03 laser printer.

    Contents: Acknowledgments. Abstract. Executive Summary. I. Analysis: A. The Players (1. Number and Status; 2. Funding; 3. Strategic Alliances; 4. Sales: a. Revenue/Units Installed, b. European Sales). B. The Product (1. CPUs; 2. Chip; 3. Bus; 4. Vector Processing; 5. Operating System; 6. Languages; 7. Third-Party Applications; 8. Pricing). C. IBM and Other Major Computer Companies. D. Why Success? Why Failure? E. Future Directions. II. Company/Product Profiles: A. Multi/Parallel Processors (1. Alliant; 2. Astronautics; 3. Concurrent; 4. Cydrome; 5. Eastman Kodak; 6. Elxsi; 7. Encore; 8. Flexible; 9. Floating Point Systems – M64line; 10. International Parallel; 11. Loral; 12. Masscomp; 13. Meiko; 14. Multiflow; 15. Sequent). B. Massively Parallel (1. Ametek; 2. Bolt Beranek & Newman Advanced Computers; …)
  • Adaptive Data Migration in Load-Imbalanced HPC Applications
    Louisiana State University, LSU Digital Commons, LSU Doctoral Dissertations, Graduate School, 10-16-2020. Adaptive Data Migration in Load-Imbalanced HPC Applications. Parsa Amini, Louisiana State University and Agricultural and Mechanical College. Follow this and additional works at: https://digitalcommons.lsu.edu/gradschool_dissertations (Part of the Computer Sciences Commons).

    Recommended Citation: Amini, Parsa, "Adaptive Data Migration in Load-Imbalanced HPC Applications" (2020). LSU Doctoral Dissertations. 5370. https://digitalcommons.lsu.edu/gradschool_dissertations/5370

    This Dissertation is brought to you for free and open access by the Graduate School at LSU Digital Commons. It has been accepted for inclusion in LSU Doctoral Dissertations by an authorized graduate school editor of LSU Digital Commons. For more information, please contact [email protected].

    ADAPTIVE DATA MIGRATION IN LOAD-IMBALANCED HPC APPLICATIONS. A Dissertation Submitted to the Graduate Faculty of the Louisiana State University and Agricultural and Mechanical College in partial fulfillment of the requirements for the degree of Doctor of Philosophy in The Department of Computer Science, by Parsa Amini, B.S., Shahed University, 2013; M.S., New Mexico State University, 2015. December 2020.

    Acknowledgments. This effort has been possible thanks to the involvement and assistance of numerous people. First and foremost, I thank my advisor, Dr. Hartmut Kaiser, who made this journey possible with their invaluable support, precise guidance, and generous sharing of expertise. It has been a great privilege and opportunity for me to be your student, a part of the STE||AR group, and the HPX development effort. I would also like to thank my mentor and former advisor at New Mexico State University, Dr.
  • Status on the Serverization of UNICOS (UNICOS/mk Status)
    Status on the Serverization of UNICOS (UNICOS/mk Status). Jim Harrell, Cray Research, Inc., 655-F Lone Oak Drive, Eagan, Minnesota 55121.

    ABSTRACT: UNICOS is being reorganized into a microkernel-based system. The purpose of this reorganization is to provide an operating system that can be used on all Cray architectures and provide both the current UNICOS functionality and a path to the future distributed systems. The reorganization of UNICOS is moving forward. The port of this "new" system to the MPP is also in progress. This talk will present the current status, and plans for UNICOS/mk. The challenges of performance, size and scalability will be discussed.

    1 Introduction. This discussion is divided into four parts. The first part discusses the development process used by this project. The development process is the methodology that is being used to serverize UNICOS. The project is actually proceeding along multiple, semi-independent paths at the same time. The development process will help explain the information in the second part, which is a discussion of the current status of the project. The third part discusses accomplished milestones.

    … have to be added in order to provide required functionality, such as support for distributed applications. As the work of adding features and porting continues there is a testing effort that ensures correct functionality of the product. Some of the initial porting can be and has been done in the simulator. However, issues such as MPP system organization, which nodes the servers will reside on and how the servers will interact can only be completed on the hardware. The issue of
  • Florida Course Code Directory Computer Science Course Information 2019-2020
    FLORIDA COURSE CODE DIRECTORY: COMPUTER SCIENCE COURSE INFORMATION 2019-2020.

    Section 1007.2616, Florida Statutes, was amended by the Florida Legislature to include the definition of computer science and a requirement that computer science courses be identified in the Course Code Directory published on the Department of Education's website.

    DEFINITION: The study of computers and algorithmic processes, including their principles, hardware and software designs, applications, and their impact on society, and includes computer coding and computer programming.

    SECONDARY COMPUTER SCIENCE COURSES: Middle and high schools in each district, including combination schools in which any of grades 6-12 are taught, must provide an opportunity for students to enroll in a computer science course. If a school district does not offer an identified computer science course the district must provide students access to the course through the Florida Virtual School (FLVS) or through other means.

    COURSE NUMBER / COURSE TITLE
    0200000  M/J Computer Science Discoveries
    0200010  M/J Computer Science Discoveries 1
    0200020  M/J Computer Science Discoveries 2
    0200305  Computer Science Discoveries
    0200315  Computer Science Principles
    0200320  AP Computer Science A
    0200325  AP Computer Science A Innovation
    0200335  AP Computer Science Principles
    0200435  PRE-AICE Computer Studies IGCSE Level
    0200480  AICE Computer Science 1 AS Level
    0200485  AICE Computer Science 2 A Level
    0200490  AICE Information Technology 1 AS Level
    0200495  AICE Information Technology 2 A Level
    0200800  IB Computer
  • Safety and Security Challenge
    SAFETY AND SECURITY CHALLENGE: TOP SUPERCOMPUTERS IN THE WORLD - FEATURING TWO OF DOE'S!

    Summary: The U.S. Department of Energy (DOE) plays a very special role in keeping you safe. DOE has two supercomputers in the top ten supercomputers in the whole world. Titan is the name of the supercomputer at the Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee. Sequoia is the name of the supercomputer at Lawrence Livermore National Laboratory (LLNL) in Livermore, California. How do supercomputers keep us safe and what makes them in the Top Ten in the world?

    In fields where scientists deal with issues from disaster relief to the electric grid, simulations provide real-time situational awareness to inform decisions. DOE supercomputers have helped the Federal Bureau of Investigation find criminals, and the Department of Defense assess terrorist threats. Currently, ORNL is building a computing infrastructure to help the Centers for Medicare and Medicaid Services combat fraud. An important focus lab-wide is managing the tsunamis of data generated by supercomputers and facilities like ORNL's Spallation Neutron Source. In terms of national security, ORNL plays an important role in national and global security due to its expertise in advanced materials, nuclear science, supercomputing and other scientific specialties. Discovery and innovation in these areas are essential for protecting US citizens and advancing national and global security priorities.

    [Photo caption: Titan Supercomputer at Oak Ridge National Laboratory]

    Background: ORNL is using computing to tackle national challenges such as safe nuclear energy systems and running simulations for lower costs for vehicle

    [Photo caption: Lawrence Livermore's Sequoia ranked No.]
  • Unlocking the Full Potential of the Cray XK7 Accelerator
    Unlocking the Full Potential of the Cray XK7 Accelerator. Mark D. Klein and John E. Stone. National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA; Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA.

    Abstract: The Cray XK7 includes NVIDIA GPUs for acceleration of computing workloads, but the standard XK7 system software inhibits the GPUs from accelerating OpenGL and related graphics-specific functions. We have changed the operating mode of the XK7 GPU firmware, developed a custom X11 stack, and worked with Cray to acquire an alternate driver package from NVIDIA in order to allow users to render and post-process their data directly on Blue Waters. Users are able to use NVIDIA's hardware OpenGL implementation which has many features not available in software rasterizers. By eliminating the transfer of data to external visualization clusters, time-to-solution for users has been improved tremendously. In one case, XK7 OpenGL rendering has cut turnaround time

    … [2], [3], [4], and for high fidelity movie renderings with the HVR volume rendering software. In one example, the total turnaround time for an HVR movie rendering of a trillion-cell inertial confinement fusion simulation [5] was reduced from an estimate of over a month for data transfer to and rendering on a conventional visualization cluster down to just one day when rendered locally using 128 XK7 nodes on Blue Waters. The fully-graphics-enabled GPU state is currently considered an unsupported mode of operation by Cray, and to our knowledge Blue Waters is presently the only Cray system currently running in this mode.