SC20 Final Program (V2)

Table of Contents

ACM Gordon Bell COVID Finalist
ACM Gordon Bell Finalist
ACM Student Research Competition: Graduate Posters
ACM Student Research Competition: Undergraduate Posters
Awards Presentation
Birds of a Feather
Booth Sessions
Doctoral Showcase
Early Career Program
Exhibitor Forum
Exhibits
Invited Talk
Job Posting
Keynote
More Than HPC Plenary
Panel
Paper
Research Posters
Scientific Visualization & Data Analytics Showcase
SCinet
State of the Practice Talk
Students@SC
Test of Time
Tutorial
Virtual Student Cluster Competition
Workshop

ACM Gordon Bell COVID Finalist (back to top)

Thursday, November 19th

10:00 am - 12:00 pm
Gordon Bell COVID-19 Prize Finalist Session 1

Enabling Rapid COVID-19 Small Molecule Drug Design Through Scalable Deep Learning of Generative Models

Sam Ade Jacobs, Tim Moon, Kevin McLoughlin, Derek Jones, David Hysom, Dong H. Ahn, John Gyllenhaal, Pythagoras Watson, Felice C. Lightstone, Jonathan E. Allen, Ian Karlin, Brian Van Essen (all Lawrence Livermore National Laboratory)

We improved the quality and reduced the time to produce machine-learned models for use in small molecule antiviral design. Our globally asynchronous multi-level parallel training approach strong-scales to all of Sierra with up to 97.7% efficiency. We trained a novel, character-based Wasserstein autoencoder that produces a higher-quality model trained on 1.613 billion compounds in 23 minutes, while the previous state of the art takes a day on 1 million compounds. Reducing training time from a day to minutes shifts the model-creation bottleneck from computer job turnaround time to human innovation time. Our implementation achieves 318 PFLOPS for 17.1% of half-precision peak. We will incorporate this model into our molecular design loop, enabling the generation of more diverse compounds: the search for novel candidate antiviral drugs improves, and the time to synthesize compounds to be tested in the lab is reduced.

High-Throughput Virtual Laboratory for Drug Discovery Using Massive Datasets

Jens Glaser (Oak Ridge National Laboratory), Josh V. Vermaas (Oak Ridge National Laboratory), David M. Rogers (Oak Ridge National Laboratory), Jeff Larkin (Nvidia Corporation), Scott LeGrand (Nvidia Corporation), Swen Boehm (Oak Ridge National Laboratory), Matthew B. Baker (Oak Ridge National Laboratory), Aaron Scheinberg (Jubilee Development), Andreas F. Tillack (Scripps Research), Mathialakan Thavappiragasam (Oak Ridge National Laboratory), Ada Sedova (Oak Ridge National Laboratory), Oscar Hernandez (Oak Ridge National Laboratory)

Time-to-solution for structure-based screening of massive chemical databases for COVID-19 drug discovery has been decreased by an order of magnitude, and a virtual laboratory has been deployed at scale on up to 27,612 GPUs on the Summit supercomputer, allowing an average molecular docking of 19,028 compounds per second. Over one billion compounds were docked to two SARS-CoV-2 protein structures with full optimization of ligand position and 20 poses per docking, each in under 24 hours. GPU acceleration and high-throughput optimizations of the docking program produced a 350x mean speedup over the CPU version (50x speedup per node). GPU acceleration of both feature calculation for machine-learning-based scoring and distributed database queries reduced processing of the 2.4 TB output by orders of magnitude. The resulting 58x speedup for the full pipeline reduces an initial 51-day runtime to 21 hours per protein for providing high-scoring compounds to experimental collaborators for validation assays.

AI-Driven Multiscale Simulations Illuminate Mechanisms of SARS-CoV-2 Spike Dynamics

Lorenzo Casalino (University of California, San Diego), Abigail Dommer (University of California, San Diego), Zied Gaieb (University of California, San Diego), Emilia P. Barros (University of California, San Diego), Terra Sztain (University of California, San Diego), Surl-Hee Ahn (University of California, San Diego), Anda Trifan (University of Illinois), Alexander Brace (Argonne National Laboratory (ANL)), Anthony Bogetti (University of Pittsburgh), Heng Ma (Argonne National Laboratory (ANL)), Hyungro Lee (Rutgers University), Matteo Turilli (Rutgers University), Syma Khalid (University of Southampton), Lillian Chong (University of Pittsburgh), Carlos Simmerling (Stony Brook University), David Hardy (University of Illinois), Julio Maia (University of Illinois), James Phillips (University of Illinois), Thorsten Kurth (Nvidia Corporation), Abraham Stern (Nvidia Corporation), Lei Huang (University of Texas), John McCalpin (University of Texas), Mahidhar Tatineni (San Diego Supercomputer Center), Tom Gibbs (Nvidia Corporation), John Stone (University of Illinois), Shantenu Jha (Brookhaven National Laboratory), Arvind Ramanathan (Argonne National Laboratory (ANL)), Rommie E. Amaro (University of California, San Diego)

We develop a generalizable AI-driven workflow that leverages heterogeneous HPC resources to explore the time-dependent dynamics of molecular systems. We use this workflow to investigate the mechanisms of infectivity of the SARS-CoV-2 spike protein, the main viral infection machinery. Our workflow enables more efficient investigation of spike dynamics in a variety of complex environments, including within a complete SARS-CoV-2 viral envelope simulation, which contains 305 million atoms and shows strong scaling on ORNL Summit using NAMD. We present several novel scientific discoveries, including the elucidation of the spike's full glycan shield, the role of spike glycans in modulating the infectivity of the virus, and the characterization of the flexible interactions between the spike and the human ACE2 receptor. We also demonstrate how AI can accelerate conformational sampling across different systems and pave the way for the future application of such methods to additional studies in SARS-CoV-2 and other molecular systems.

A Population Data-Driven Workflow for COVID-19 Modeling and Learning

Jonathan Ozik (Argonne National Laboratory (ANL)), Justin M. Wozniak (Argonne National Laboratory (ANL)), Nicholson Collier (Argonne National Laboratory (ANL)), Charles M. Macal (Argonne National Laboratory (ANL)), Mickael Binois (French Institute for Research in Computer Science and Automation (INRIA))

CityCOVID is a detailed agent-based model (ABM) that represents the behaviors and social interactions of 2.7 million residents of Chicago as they move between and colocate in 1.2 million distinct places, including households, schools, workplaces and hospitals, as determined by individual hourly activity schedules and dynamic behaviors such as isolating because of symptom onset. Disease progression dynamics incorporated within each agent track transitions between possible COVID-19 disease states, based on heterogeneous agent attributes, exposure through colocation, and effects of self-protective behaviors on viral transmissibility. Throughout the COVID-19 epidemic, CityCOVID model outputs have been provided to city, county and state stakeholders in response to evolving decision-making priorities, incorporating emerging information on SARS-CoV-2 epidemiology. Here we demonstrate our efforts in integrating our high-performance epidemiological simulation model with large-scale machine learning to develop a generalizable, flexible and performant analytical platform for planning and crisis response.

ACM Gordon Bell Finalist (back to top)

Wednesday, November 18th

3:00 pm - 4:30 pm
Gordon Bell Prize Finalist Session 1

Toward Realization of Numerical Towing-Tank Tests by Wall-Resolved Large Eddy Simulation Based on 32 Billion Grid Finite-Element Computation

Chisachi Kato (University of Tokyo), Yoshinobu Yamade (Mizuho Information and Research Institute Inc.), Katsuhiro Nagano (Mizuho Information and Research Institute Inc.), Kiyoshi Kumahata (RIKEN Center for Computational Science (R-CCS)), Kazuo Minami (RIKEN Center for Computational Science (R-CCS)), Tatsuo Nishikawa (Shipbuilding Research Centre of Japan)

To realize numerical towing-tank tests by substantially shortening the time to solution, a general-purpose finite-element flow solver, named FrontFlow/Blue (FFB), has been fully optimized so as to achieve the maximum possible sustained memory throughput in three of its four hot kernels. As a result, a single-node sustained performance of 179.0 GFLOPS, which corresponds to 5.3% of the peak performance, has been achieved on Fugaku, the next flagship computer of Japan. A weak-scaling benchmark test has confirmed that FFB runs with a parallel efficiency of over 85% up to 5,505,024 compute cores, and a sustained performance of 16.7 PFLOPS has been achieved. With these improvements, the time needed for large-eddy simulation using 32 billion grids has been reduced from almost two days to only 37 minutes, a factor of 71. This clearly indicates that a numerical towing tank could actually be built for ship hydrodynamics in a few years.

Pushing the Limit of Molecular Dynamics with Ab Initio Accuracy to 100 Million Atoms with Machine Learning

Weile Jia (University of California,
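As a quick sanity check, the headline numbers quoted in these abstracts are mutually consistent; the short Python script below reworks a few of them (the docking throughput on Summit, the 58x pipeline speedup, and the machine peaks implied by the 17.1% and 5.3% efficiency figures). It only manipulates figures stated above and is not code from any of the finalist teams.

```python
# Back-of-envelope checks of figures quoted in the Gordon Bell abstracts above.
# Illustrative only; uses numbers stated in the abstracts, not the teams' code.

# High-throughput docking on Summit: one billion compounds at 19,028 compounds/s.
compounds = 1_000_000_000
rate_per_s = 19_028
hours = compounds / rate_per_s / 3600
print(f"Docking 1B compounds at {rate_per_s}/s takes ~{hours:.1f} h (< 24 h, as reported)")

# Docking pipeline: a 58x speedup turns a 51-day runtime into roughly 21 hours.
print(f"51 days / 58 ~= {51 * 24 / 58:.1f} h per protein")

# Generative-model training on Sierra: 318 PFLOPS at 17.1% of half-precision peak
# implies a half-precision machine peak of roughly 318 / 0.171 PFLOPS.
print(f"Implied FP16 peak: {318 / 0.171 / 1000:.2f} EFLOPS")

# FFB on Fugaku: 179.0 GFLOPS per node at 5.3% of peak implies a per-node
# peak of about 3.4 TFLOPS.
print(f"Implied per-node peak: {179.0 / 0.053 / 1000:.2f} TFLOPS")
```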
Recommended publications
  • Materials Modelling and the Challenges of Petascale and Exascale
    Multiscale Materials Modelling on High Performance Computer Architectures: Materials modelling and the challenges of petascale and exascale. Andrew Emerson, Cineca Supercomputing Centre, Bologna, Italy (International Forum on Multiscale Materials Modelling, Bologna, 26/09/2013). The project MMM@HPC is funded by the 7th Framework Programme of the European Commission within the Research Infrastructures, grant agreement number RI-261594. Contents: Introduction to HPC; HPC and the MMM@HPC project; Petascale computing; The Road to Exascale; Observations. What is High Performance Computing (HPC)? "High-performance computing (HPC) is the use of parallel processing for running advanced application programs efficiently, reliably and quickly. The term applies especially to systems that function above a teraflop or 10^12 floating-point operations per second." (http://searchenterpriselinux.techtarget.com/definition/high-performance-computing) "A branch of computer science that concentrates on developing supercomputers and software to run on supercomputers. A main area of this discipline is developing parallel processing algorithms and software: programs that can be divided into little pieces so that each piece can be executed simultaneously by separate processors." (Webopedia) Advances due to HPC, e.g. molecular dynamics: early 1990s, lysozyme, 40k atoms; 2006, satellite tobacco mosaic virus (STMV), 1M atoms, 50 ns; 2008, ribosome, 3.2M atoms, 230 ns; 2011, chromatophore, 100M atoms (SC 2011). Hardware milestones: Cray-1 supercomputer (1976), 80 MHz vector processor, 250 MFlops; Cray X-MP (1982), 2 CPUs + vectors, 400 MFlops; "FERMI", Blue Gene/Q, 168,000 cores, 2.1 PFlops.
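The Webopedia definition quoted in this entry describes parallel software as "programs that can be divided into little pieces so that each piece can be executed simultaneously by separate processors". The minimal sketch below (my own illustration, not from the cited slides) shows that idea with Python's standard multiprocessing module.

```python
# Minimal illustration of dividing a program into "little pieces" that run
# simultaneously on separate processors (see the HPC definition quoted above).
from multiprocessing import Pool

def partial_sum(chunk):
    """One 'little piece' of the overall work."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    # Split the input into one chunk per worker process.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # pieces run in parallel
    assert total == sum(x * x for x in data)
    print(total)
```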
  • Raising the Bar for Using GPUs in Software Packet Processing
    Raising the Bar for Using GPUs in Software Packet Processing. Anuj Kalia, Dong Zhou, Michael Kaminsky*, and David G. Andersen (Carnegie Mellon University and *Intel Labs). https://www.usenix.org/conference/nsdi15/technical-sessions/presentation/kalia This paper is included in the Proceedings of the 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI '15), May 4-6, 2015, Oakland, CA, USA. ISBN 978-1-931971-218. Open Access to the Proceedings of NSDI '15 is sponsored by USENIX. Abstract: Numerous recent research efforts have explored the use of Graphics Processing Units (GPUs) as accelerators for software-based routing and packet handling applications, typically demonstrating throughput several times higher than using legacy code on the CPU alone. In this paper, we explore a new hypothesis about such designs: for many such applications, the benefits arise less from the GPU hardware itself as from the expression of the problem in a language such as CUDA or OpenCL [...] based implementations are far easier to experiment with) and in practice (software-based approaches are used for low-speed applications and in cases such as forwarding within virtual switches [13]). Our goal in this paper is to advance understanding of the advantages of GPU-assisted packet processors compared to CPU-only designs. In particular, noting that several recent efforts have claimed that GPU-based designs can be faster even for simple applications such as IPv4 forwarding [23, 43, 31, 50, 35, 30], we attempt to identify the reasons for that speedup.
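The excerpt singles out IPv4 forwarding as a "simple application" used to compare GPU-based and CPU-only packet processors. As a rough illustration of that workload, the sketch below performs a longest-prefix-match lookup in a tiny, made-up forwarding table; it is not the paper's code and omits the batching and memory-latency concerns the paper actually studies.

```python
# Toy longest-prefix-match (LPM) lookup, the core operation of IPv4 forwarding.
# Illustrative sketch only; real routers use trie/TCAM structures and batching.
import ipaddress

# (prefix -> next hop) forwarding table; the prefixes are hypothetical examples.
FIB = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth2",   # default route
}

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    best = None
    for prefix, nexthop in FIB.items():
        if addr in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
            best = (prefix, nexthop)
    return best[1]

print(lookup("10.1.2.3"))   # -> eth1 (most specific match)
print(lookup("192.0.2.1"))  # -> eth2 (default route)
```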
  • Recent Developments in Supercomputing
    John von Neumann Institute for Computing. Recent Developments in Supercomputing. Th. Lippert, published in NIC Symposium 2008, G. Münster, D. Wolf, M. Kremer (Editors), John von Neumann Institute for Computing, Jülich, NIC Series, Vol. 39, ISBN 978-3-9810843-5-1, pp. 1-8, 2008. © 2008 by John von Neumann Institute for Computing. Permission to make digital or hard copies of portions of this work for personal or classroom use is granted provided that the copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise requires prior specific permission by the publisher mentioned above. http://www.fz-juelich.de/nic-series/volume39 Recent Developments in Supercomputing. Thomas Lippert, Jülich Supercomputing Centre, Forschungszentrum Jülich, 52425 Jülich, Germany. E-mail: [email protected]. Status and recent developments in the field of supercomputing on the European and German level as well as at the Forschungszentrum Jülich are presented. Encouraged by the ESFRI committee, the European PRACE Initiative is going to create a world-leading European tier-0 supercomputer infrastructure. In Germany, the BMBF formed the Gauss Centre for Supercomputing, the largest national association for supercomputing in Europe. The Gauss Centre is the German partner in PRACE. With its new Blue Gene/P system, the Jülich supercomputing centre has realized its vision of a dual system complex and is heading for Petaflop/s already in 2009. In the framework of the JuRoPA project, in cooperation with German and European industrial partners, the JSC will build a next-generation general-purpose system with very good price-performance ratio and energy efficiency.
  • Computer Architecture: Parallel Processing Basics
    Computer Architecture: Parallel Processing Basics. Onur Mutlu & Seth Copen Goldstein, Carnegie Mellon University, 9/9/13. Today: What is parallel processing? Why? Kinds of parallel processing; multiprocessing and multithreading; measuring success: speedup, Amdahl's Law; bottlenecks to parallelism. Concurrent systems: Embedded-Physical (Claytronics), Distributed (Sensor Networks), Geographically Distributed (Power Grid, Internet), Cloud Computing (EC2, Tashi), Parallel. Comparison across Physical / Geographical / Cloud / Parallel systems: Geophysical location +++ / ++ / --- / ---; Relative location +++ / +++ / + / -; Faults ++++ / +++ / ++++ / --; Number of processors +++ / +++ / + / -; Network structure varies / varies / fixed / fixed; Network connectivity --- / --- / + / +. Concurrent system challenge: programming. The old joke: How long does it take to write a parallel program? One Graduate Student Year. Parallel programming again?? Increased demand (multicore); increased scale (cloud); improved compute/communicate; change in application focus; irregular, recursive data structures. Why parallel computers? Parallelism: doing multiple things at a time. Things: instructions,
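The slides list speedup and Amdahl's Law as the measures of success for parallel processing. A small worked example of Amdahl's Law (my own illustration, not taken from the lecture):

```python
# Amdahl's Law: speedup of a program in which a fraction p is parallelizable
# and runs on n processors, while the remaining (1 - p) stays serial.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, the serial 5% caps the speedup at 20x.
for n in (2, 8, 64, 1024):
    print(f"p=0.95, n={n:5d}: speedup = {amdahl_speedup(0.95, n):6.2f}")
print(f"limit as n -> infinity: {1 / (1 - 0.95):.1f}x")
```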
  • BDEC2 Poznan Japanese HPC Infrastructure Update
    BDEC2 Poznan: Japanese HPC Infrastructure Update. Presenter: Masaaki Kondo, RIKEN R-CCS / Univ. of Tokyo (on behalf of Satoshi Matsuoka, Director, RIKEN R-CCS). Post-K: The Game Changer (2020). 1. Heritage of the K computer, high performance in simulation via extensive co-design: high performance, up to 100x the performance of K in real applications; multitudes of scientific breakthroughs via Post-K application programs; simultaneous high performance and ease of programming. 2. New technology innovations of Post-K (global leadership not just in the machine and apps, but as cutting-edge IT): high performance, especially via high memory bandwidth, with a performance boost by "factors" compared with mainstream CPUs in many HPC and Society 5.0 apps via bandwidth and vector acceleration; very green, e.g. extreme power efficiency, with an ultra power-efficient design and various power-control knobs; Arm global ecosystem and SVE contribution, top CPU in the Arm ecosystem of 21 billion chips/year, SVE co-design and the world's first implementation by Fujitsu, a massive ecosystem from embedded to HPC; high performance on Society 5.0 apps including AI, with architectural features for high performance on apps based on big data, AI/ML, CAE/EDA, blockchain security, etc. Technology not limited to Post-K, but feeding into societal IT infrastructures, e.g. clouds. A64FX & Post-K (to be renamed): Fujitsu-RIKEN design, A64FX Arm v8.2 (SVE), 48/52-core CPU, HPC optimized: extremely high package memory bandwidth (1 TByte/s), on-die Tofu-D network bandwidth (~400 Gbps), high SVE FLOPS (~3 TFLOPS), various AI support (FP16, INT8, etc.), general-purpose CPU with Linux,
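The A64FX figures quoted above (roughly 1 TByte/s of memory bandwidth and ~3 TFLOPS of SVE peak) imply a machine balance that is easy to compute; the snippet below is my own arithmetic on those quoted numbers, not vendor data.

```python
# Machine balance implied by the A64FX figures quoted above (~1 TB/s, ~3 TFLOPS).
mem_bw_bytes_per_s = 1.0e12     # ~1 TByte/s HBM2 memory bandwidth
peak_flops = 3.0e12             # ~3 TFLOPS SVE peak

bytes_per_flop = mem_bw_bytes_per_s / peak_flops
flops_per_byte = peak_flops / mem_bw_bytes_per_s
print(f"~{bytes_per_flop:.2f} bytes/flop: a kernel needs an arithmetic "
      f"intensity above ~{flops_per_byte:.1f} flops/byte to be compute-bound")
```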
  • Supercomputer Fugaku
    Supercomputer Fugaku. Toshiyuki Shimizu, Feb. 18th, 2020. Copyright 2020 FUJITSU LIMITED. Outline: Fugaku project overview; co-design (approach, design results); performance & energy consumption evaluation (Green500, OSS apps); Fugaku priority issues; summary. Supercomputer "Fugaku", formerly known as Post-K; focus and approach: application performance via co-design with application developers and a Fujitsu-designed CPU core with high memory bandwidth utilizing HBM2; power efficiency via leading-edge Si technology, Fujitsu's proven low-power & high-performance logic design, and power-controlling knobs; usability via the Arm v8-A ISA with Scalable Vector Extension ("SVE") and Arm-standard Linux. Fugaku project schedule (2011-2022): feasibility study, architecture selection and apps sizing; basic design review; detailed design & implementation; manufacturing, installation and tuning; general operation; co-design with apps groups throughout. Fugaku co-design. Co-design goals: obtain the best performance, 100x application performance over the K computer, within a power budget of 30-40 MW; design applications, compilers, libraries, and hardware. Approach: estimate performance & power using application info, performance counts from the Fujitsu FX100, and a cycle-based simulator (computation time: brief & precise estimation; communication time: bandwidth and latency with some attributes for communication patterns; I/O time); then optimize apps/compilers etc. and resolve bottlenecks. Estimation of performance and power: precise performance estimation for primary kernels (make & run Fugaku objects on the Fugaku cycle-based simulator); brief performance estimation for other sections (replace performance counts of the FX100 with Fugaku parameters: number of instruction commits per cycle, wait cycles of barrier, inst.
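The co-design approach above estimates communication time from "bandwidth and latency ... with some attributes for communication patterns". A classic latency-bandwidth (alpha-beta) model is one simple way to express such an estimate; the sketch below is a generic illustration with placeholder parameters, not Fujitsu's estimation tooling or measured Tofu-D values.

```python
# Simple latency-bandwidth ("alpha-beta") model for communication time:
#   T(message) = alpha + message_size / beta
# Generic illustration of the kind of estimate described above; the numbers
# below are placeholders, not measured interconnect parameters.
ALPHA = 1.0e-6        # per-message latency in seconds (placeholder)
BETA = 40.0e9         # effective link bandwidth in bytes/s (placeholder)

def comm_time(message_bytes: float, n_messages: int = 1) -> float:
    return n_messages * (ALPHA + message_bytes / BETA)

for size in (8, 64 * 1024, 16 * 1024 * 1024):
    print(f"{size:>10d} B message: {comm_time(size) * 1e6:8.2f} us")
```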
  • DB Music Shop Must Arrive 2 Months Prior to DB Cover Date
    DownBeat, May 2010, Volume 77, Number 5. $4.99 / U.K. £3.50. DownBeat.com. Cover: Miguel Zenón // Ramsey Lewis & Kirk Whalum // Evan Parker // Summer Festival Guide. Masthead: President Kevin Maher; Publisher Frank Alkyer; Editor Ed Enright; Associate Editor Aaron Cohen; Art Director Ara Tirado; Production Associate Andy Williams; Bookkeeper Margaret Stevens; Circulation Manager Kelly Grosser. Advertising Sales: Record Companies & Schools, Jennifer Ruban-Gentile, 630-941-2030; Musical Instruments & East Coast Schools, Ritche Deraney, 201-445-6260; Classified Advertising Sales, Sue Mahal, 630-941-2030. Offices: 102 N. Haven Road, Elmhurst, IL 60126-2970, 630-941-2030, Fax: 630-941-3210, www.downbeat.com. Customer Service: 877-904-5299. Contributors: Senior Contributors: Michael Bourne, John McDonough, Howard Mandel; Austin: Michael Point; Boston: Fred Bouchard, Frank-John Hadley; Chicago: John Corbett, Alain Drouot, Michael Jackson, Peter Margasak, Bill Meyer, Mitch Myers, Paul Natkin, Howard Reich; Denver: Norman Provizer; Indiana: Mark Sheldon; Iowa: Will Smith; Los Angeles: Earl Gibson, Todd Jenkins, Kirk Silsbee, Chris Walker, Joe Woodard; Michigan: John Ephland; Minneapolis: Robin James; Nashville: Robert Doerschuk; New Orleans: Erika Goldring, David Kunian; New York: Alan Bergman, Herb Boyd, Bill Douthart, Ira Gitler, Eugene Gologursky, Norm Harris, D.D.
  • Distributed Computing Environment
    Distributed Computing Environment. Abstract: The high volume of networked computers, workstations and LANs has prompted users to move from simple end-user computing to a complex distributed computing environment. This transition is not just networking the computers, but also involves issues of scalability, security, etc. A Distributed Computing Environment, herein referred to as DCE, is essentially an integration of all the services necessary to develop, support and manage a distributed computing environment. This term paper discusses in detail the three important issues addressed by DCE: Remote Procedure Calls [IRPC], Distributed File Systems [IDFS][OSF91] and Security [Isec][OSF92]. 1 INTRODUCTION The present-day computing industry depends on the efficient usage of resources. So instead of duplicating the resources at every node of computing, a remote method of accessing the resources is more efficient and saves costs. This gave rise to the field of distributed computing, where not only physical resources, but also processing power, was distributed. Distributed computing was driven by the following factors: a) desire to share data and resources; b) minimize duplication of functionality; c) increase cost efficiency; d) increase reliability and availability of resources. The Open Software Foundation's DCE (OSF DCE) has become the de facto standard for DCE applications and has the backing of IBM, COMPAQ, HP and the like [OSFInter92]. 1.1 Why DCE? When an organization migrates from networked computing to distributed computing, a lot of factors have to be taken into consideration. For example, replication of files gives rise to consistency problems, clock synchronization becomes important, and security is a bigger consideration.
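Remote Procedure Calls are one of the three DCE services the term paper examines. The sketch below illustrates the general RPC idea with Python's standard xmlrpc modules: the client invokes a function as if it were local, and the call actually executes in the server process. It is a conceptual stand-in, not the OSF DCE RPC API.

```python
# Conceptual RPC illustration using Python's standard library (xmlrpc).
# This is NOT the OSF DCE RPC interface; it only shows the remote-procedure idea.
import threading
from xmlrpc.server import SimpleXMLRPCServer
import xmlrpc.client

def add(a, b):
    return a + b

# "Server" side: expose add() over RPC and serve requests in the background.
server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True, logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Client" side: the call looks local but runs on the server.
proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
print(proxy.add(2, 3))  # executed remotely, prints 5
server.shutdown()
```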
  • Download SC18 Final Program
    Program: November 11-16, 2018. Exhibits: November 12-15, 2018. The International Conference for High Performance Computing, Networking, Storage, and Analysis. Kay Bailey Hutchison Convention Center, Dallas, Texas, USA. Sponsored by: MICHAEL S. RAWLINGS, Mayor of Dallas. November 11, 2018. Supercomputing – SC18 Conference: As Mayor of Dallas, it is my pleasure to welcome SC18 – the ultimate international conference for high performance computing, networking, storage, and analysis – to the Kay Bailey Hutchison Convention Center! I applaud SC18 for bringing together high-performance computing (HPC) professionals and students from across the globe to share ideas, attend and participate in technical presentations, papers and workshops, all while inspiring the next generation of HPC professionals. Dallas is the No. 1 visitor destination in Texas and one of the top cities in the nation for meetings and conventions. In addition to having the resources to host major events and conventions, Dallas is a greatly diverse American city and a melting pot of cultures. This important convergence of uniqueness and differences is reflected throughout the sights and sounds of our city. Dallas' authentic arts, music, food, historic landmarks and urban lifestyle all contribute to the city's rich cultural makeup. Let the bold, vibrant spirit of Dallas, with a touch of Texas charm, guide the way to your BIG Dallas moments. Our city is filled with the unique experiences one can only have in Dallas – from cuisine by celebrity chefs, to the cultural landscape in one of the nation's largest arts districts, to first-class shopping – make the most of your time in our great city! From transportation to hotels, restaurants, retail and entertainment venues, Dallas is ready to roll out our red carpet to host SC18.
  • Nascent Exascale Supercomputers Offer Promise, Present Challenges (Core Concepts, Adam Mann, Science Writer)
    Core Concepts: Nascent exascale supercomputers offer promise, present challenges. Adam Mann, Science Writer. Sometime next year, managers at the US Department of Energy's (DOE) Argonne National Laboratory in Lemont, IL, will power up a calculating machine the size of 10 tennis courts and vault the country into a new age of computing. The $500-million mainframe, called Aurora, could become the world's first "exascale" supercomputer, running an astounding 10^18, or 1 quintillion, operations per second. Aurora is expected to have more than twice the peak performance of the current supercomputer record holder, a machine named Fugaku at the RIKEN Center for Computational Science in Kobe, Japan. Fugaku and its calculation kin serve a vital function in modern scientific advancement, performing simulations crucial for discoveries in a wide range of fields. But the transition to exascale will not be easy. [...] Laboratory in NM. "We have to change our computing paradigms, how we write our programs, and how we arrange computation and data management." That's because supercomputers are complex beasts, consisting of cabinets containing hundreds of thousands of processors. For these processors to operate as a single entity, a supercomputer needs to pass data back and forth between its various parts, running huge numbers of computations at the same time, all while minimizing power consumption. Writing programs for such parallel computing is not easy, and theorists will need to leverage new tools such as machine learning and artificial intelligence to make scientific breakthroughs. Given these challenges, researchers have been planning for exascale computing for more
  • Mixed Precision Block Fused Multiply-Add
    Mixed Precision Block Fused Multiply-Add: Error Analysis and Application to GPU Tensor Cores. Pierre Blanchard, Nicholas Higham, Florent Lopez, Théo Mary, Srikara Pranesh. SIAM Journal on Scientific Computing, Society for Industrial and Applied Mathematics, 2020, 42 (3), pp. C124-C141. DOI: 10.1137/19M1289546. HAL Id: hal-02491076, https://hal.archives-ouvertes.fr/hal-02491076v2, submitted on 28 May 2020. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Abstract: Computing units that carry out a fused multiply-add (FMA) operation with matrix arguments, referred to as tensor units by some vendors, have great potential for use in scientific computing. However, these units are inherently mixed precision and existing rounding error analyses do not support them. We consider a mixed precision block FMA that generalizes both the usual scalar FMA and existing tensor units.
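The abstract describes tensor units as performing a fused multiply-add with matrix arguments in mixed precision: low-precision inputs with higher-precision accumulation. The NumPy sketch below mimics that pattern (fp16 inputs, fp32 accumulation) purely to illustrate the operation being analysed; it reproduces neither the paper's error analysis nor real tensor-core rounding behaviour.

```python
# Mixed-precision block FMA, D = C + A @ B, mimicking a tensor-core-style unit:
# A and B are stored in fp16, and the products are accumulated in fp32.
# Illustration only; real tensor cores have their own internal rounding rules.
import numpy as np

rng = np.random.default_rng(0)
n = 4  # block size (small square tiles, as on typical tensor units)

A = rng.standard_normal((n, n)).astype(np.float16)
B = rng.standard_normal((n, n)).astype(np.float16)
C = rng.standard_normal((n, n)).astype(np.float32)

# Block FMA with fp32 accumulation of fp16 inputs.
D_mixed = C + A.astype(np.float32) @ B.astype(np.float32)

# For comparison: everything in fp16, and an fp64 reference.
D_fp16 = (C.astype(np.float16) + A @ B).astype(np.float32)
D_ref = C.astype(np.float64) + A.astype(np.float64) @ B.astype(np.float64)

print("max error, fp32 accumulation:", np.max(np.abs(D_mixed - D_ref)))
print("max error, pure fp16        :", np.max(np.abs(D_fp16 - D_ref)))
```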
  • Plan-Based Job Scheduling for Supercomputers with Shared Burst Buffers (arXiv:2109.00082v1 [cs.DC], 31 Aug 2021)
    Plan-based Job Scheduling for Supercomputers with Shared Burst Buffers. Jan Kopanski and Krzysztof Rzadca, Institute of Informatics, University of Warsaw, Stefana Banacha 2, 02-097 Warsaw, Poland. [email protected] [email protected]. Preprint of the paper accepted at the 27th International European Conference on Parallel and Distributed Computing (Euro-Par 2021), Lisbon, Portugal, 2021, DOI: 10.1007/978-3-030-85665-6_8. Abstract: The ever-increasing gap between compute and I/O performance in HPC platforms, together with the development of novel NVMe storage devices (NVRAM), led to the emergence of the burst buffer concept, an intermediate persistent storage layer logically positioned between random-access main memory and a parallel file system. Despite the development of real-world architectures as well as research concepts, resource and job management systems, such as Slurm, provide only marginal support for scheduling jobs with burst buffer requirements, in particular ignoring burst buffers when backfilling. We investigate the impact of burst buffer reservations on the overall efficiency of online job scheduling for common algorithms: First-Come-First-Served (FCFS) and Shortest-Job-First (SJF) EASY-backfilling. We evaluate the algorithms in a detailed simulation with I/O side effects. Our results indicate that the lack of burst buffer reservations in backfilling may significantly deteriorate scheduling. We also show that these algorithms can be easily extended to support burst buffers. Finally, we propose a burst-buffer-aware plan-based scheduling algorithm with simulated annealing optimisation, which improves the mean waiting time by over 20% and mean bounded slowdown by 27% compared to the burst-buffer-aware SJF-EASY-backfilling.
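The abstract compares First-Come-First-Served and Shortest-Job-First orderings (with EASY-backfilling) using mean waiting time and bounded slowdown. The toy single-node simulation below illustrates just those two orderings and metrics on made-up jobs; it is my own sketch and leaves out backfilling, burst buffers and everything else the paper actually models.

```python
# Toy single-node scheduling: compare FCFS vs SJF by mean waiting time and
# mean bounded slowdown. My own sketch; no backfilling or burst buffers here.
BSLD_THRESHOLD = 10.0  # lower bound on runtime in the bounded-slowdown metric

# (submit_time, runtime) for a few hypothetical jobs
JOBS = [(0, 100), (5, 10), (6, 10), (8, 200), (10, 5)]

def simulate(jobs, pick):
    time, waits, bslds, pending = 0, [], [], list(jobs)
    while pending:
        ready = [j for j in pending if j[0] <= time]
        if not ready:                       # machine idle: jump to next arrival
            time = min(j[0] for j in pending)
            continue
        job = pick(ready)
        pending.remove(job)
        submit, runtime = job
        wait = time - submit
        waits.append(wait)
        bslds.append(max((wait + runtime) / max(runtime, BSLD_THRESHOLD), 1.0))
        time += runtime                     # run the chosen job to completion
    return sum(waits) / len(waits), sum(bslds) / len(bslds)

for name, pick in [("FCFS", lambda r: min(r, key=lambda j: j[0])),
                   ("SJF ", lambda r: min(r, key=lambda j: j[1]))]:
    wait, bsld = simulate(JOBS, pick)
    print(f"{name}: mean wait = {wait:6.1f}, mean bounded slowdown = {bsld:5.2f}")
```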