At the Frontiers of Extreme Computing

Total Pages: 16

File Type: PDF, Size: 1020 KB

SUPERCOMPUTERS: AT THE FRONTIERS OF EXTREME COMPUTING. NOVEMBER 2011. PUBLISHED IN PARTNERSHIP WITH

Research and Innovation with HPC

At the interface of computer science and mathematics, Inria researchers have spent 40 years establishing the scientific bases of a new field of knowledge: computational science. In interaction with other scientific disciplines, computational science offers new concepts, languages, methods and subjects for study that open new perspectives in the understanding of complex phenomena.

High Performance Computing is a strategic topic for Inria: about thirty Inria research teams are involved. Inria has thus established large-scale strategic partnerships with Bull for the design of future HPC architectures and with EDF R&D focused on high-performance simulation for energy applications.

At the international level, Inria and the University of Illinois at Urbana-Champaign (United States) created a joint laboratory for research in supercomputing, the Joint Laboratory for Petascale Computing (JLPC), in 2009. The work of this laboratory focuses on the development of algorithms and software for computers at the petaflop scale and beyond. The laboratory's researchers carry out their work as part of the Blue Waters project. It is also noteworthy that several former Inria spin-off companies have developed their business on this market, such as Kerlabs, Caps Enterprise, Activeon or Sysfera.

Finally, in order to boost technology transfer from public research to industry, which is part of Inria's core mission, the institute has launched an "SME go HPC" programme together with GENCI, OSEO and four French industry clusters (Aerospace Valley, Axelera, Minalogic, Systematic). The objective of the programme is to bring high-level expertise to SMEs willing to move to simulation and HPC as a means to strengthen their competitiveness. SMEs wanting to make use of high-performance computing or simulation to develop their products and services (design, modelling, system, test, processing and visualisation of data) can apply on the website devoted to this HPC-SME Initiative.

www.inria.fr
www.initiative-hpc-pme.org

Inria is the only French national public research organization fully dedicated to digital sciences, and it hosts more than 1,000 young researchers each year.

EDITORIAL: OUR STAKE IN THE FUTURE

By Philippe Vannier, Chairman and CEO of Bull (photo: François Daburon).

High-performance computing, or HPC, has gradually become a part of our daily lives, even if we are not always aware of it. It is in our medicines, our investments, in the films we go to see at the cinema and the equipment of our favourite athletes, the cars we drive and the petrol that they run on. It makes our world a safer place, where our resources are used more wisely, and, thanks to researchers, a world we can more easily understand. Yet these giant steps forward, notably breaking the petaflops barrier, or one million billion operations per second, will soon seem modest indeed, as even greater technological upheavals lie ahead. Cloud computing is revolutionising and broadening access to scientific computing. The exaflops, 1,000 times more powerful than a petaflops, will give a new dimension to digital simulations. Today the great regions of the world, with the United States and China in the lead, have taken significant steps to ensure control of future technologies. Up until now, Europe has remained on the sidelines.
We need to act quickly if we want to hold on to this know-how, which is essential for our independence, our research and our industries, and preserve our jobs.

The TERATEC Technopole

Created on a CEA initiative to develop and promote high-performance simulation and computing, the TERATEC technopole is located in Bruyères-le-Châtel, in the southern part of Île-de-France, and includes all the elements of the HPC and simulation value chain, organised around three entities:

The CEA Very Large Computing Centre (TGCC). An infrastructure dedicated to supercomputers, equipped in particular with the CCRT machines and the European PRACE machine. It is also a place for exchanges and meetings, with a conference area including a 200-seat auditorium.

The TERATEC Campus. Within the TERATEC technopole and facing the CEA Very Large Computing Centre, the TERATEC Campus, with more than 13,000 m², brings together:
• industrial companies (systems, software and services), plus a business centre and an incubator;
• industrial research laboratories: Exascale Computing Research Lab (Intel/CEA/GENCI/UVSQ), Extreme Computing Lab (Bull/CEA)...;
• a European HPC training institute;
• platform services accessible to all industrial companies and research organizations.
The objective of the TERATEC Campus is to provide professionals in the field of high-performance simulation and computing with a dynamic and user-friendly environment to serve as a crossroads for innovation in three major areas: systems performance and architecture, software development, and services.

The TERATEC Association. The TERATEC Association brings together more than 80 partners from industry and research that have in common the advanced use and development of systems, software or services dedicated to high-performance simulation and computing. TERATEC federates and leads the HPC community to promote and develop numerical design and simulation, and it facilitates exchanges and collaborations between participants. Each year, TERATEC organizes the TERATEC Forum, the major event in this domain in France and in Europe (next edition planned for June 26 and 27, 2012; more at www.teratec.eu). If you are interested in joining the TERATEC Campus, contact TERATEC: [email protected] or +33 (0)1 69 26 61 76.

CONTENTS

NEW HORIZONS
An ongoing challenge for supercomputers. Since the end of nuclear testing, the CEA is taking up the challenge of ensuring the reliability and security of nuclear weapons through simulations alone.
High-performance computing for all! Genci intends to provide all scientists access to high-performance computing.
Inria is leading the way in HPC. Digital simulation on supercomputers is driving France in the race to Exascale.
Tera 100: a leader in efficiency. Tera 100 is 7 times more energy-efficient than its predecessor, Tera 10.
Tri-Gate 3D transistors in the race to Exascale. The development of exaflopic computers will depend on major technological breakthroughs.

MAJOR CHALLENGES
Modelling molecules for more effective treatments. Simulation should orient research towards new drugs.
Using supercomputers to improve tsunami warning systems. The effects of submarine earthquakes on coastlines could be predicted in just 15 minutes!
Future nuclear reactors already benefit from HPC. National security also relies on three-dimensional modelling.
Watching materials grow, one atom at a time. Simulating growth at the atomic level will lead to mastery of nanoelectronics.
Calculating nuclear dissuasion. Modelling and simulations are the key tools in nuclear design.
Understanding how a star is born. Analysing what happens when galaxies collide and how stars are born.
The physics of shocks on an atomic scale. The mechanics of materials must be understood at the atomic level.
Martensitic deformations seen through the processor prism. Metal alloys can spring back to their initial shape after a major transformation.
Using graphics processors to visualise light. Or the eternal question of how laser beams behave…

THE FUTURE: EXASCALE COMPUTING
The next challenge: controlling energy consumption. Improving the energy efficiency of memories and processors is a real challenge for tomorrow's machines…
Correcting errors is a top priority. In the run-up to Exascale, simulation should help researchers confirm calculations, even in the event of failures.

Masthead. Supplement 2 of "La Recherche" cannot be sold separately from supplement 1 (LR N° 457). "La Recherche" is published by Sophia Publications, a subsidiary of Financière Tallandier. Sophia Publications, 74, avenue du Maine, 75014 Paris; tel.: +33 (0)1 44 10 10 10; editorial office email: [email protected]. To contact a member of the editorial team directly by phone, dial +33 (0)1 44 10 followed by the four digits after his or her name. CEO and publication manager: Philippe Clerget. Management advisor: Jean-Michel Ghidaglia. Editorial director: Aline Richard. Editor-in-chief: Luc Allemand. Deputy editor-in-chief for supplement 2: Thomas Guillemain. Editorial assistant for supplement 2: Jean-Marc Denis. Artwork and layout: A noir, +33 (0)1 48 06 22 22. Production: Christophe Perrusson (1378). Sales, advertising and development: Caroline Nourry (1396). Customer relations: Laurent Petitbon (1212). Administrative and finance director: Dounia Ammor. Sales and promotion: Évelyne Miont (1380). Headings, subheadings, presentation texts and captions are written by the editorial office. The law of March 11, 1957 prohibits copying or reproduction intended for collective use; any representation or reproduction in full or in part made without the consent of the author, or of his assigns or assignees, is unlawful (article L.122-4 of the French Intellectual Property Code); any duplication must be approved by the French copyright agency (CFC, 20, rue des Grands-Augustins, 75006 Paris, France; tel.: +33 (0)1 44 07 47 70; fax: +33 (0)1 46 34 67 19). The editor reserves the right to refuse any insert that would be deemed contrary to the moral or material interests of the publication. Joint commission of Press Publications and Agencies: 0909 K 85863. ISSN 0029-5671. Printed in Italy by G. Canale & C., Via Liguria 24, 10071 Borgaro, Torino. Copyright deposit © 2011 Sophia Publications.

NEW HORIZONS

1996: FRANCE DECIDES TO DEFINITIVELY STOP ALL NUCLEAR TESTING. THIS MEANS A NEW CHALLENGE FOR THE CEA: GUARANTEEING, BY 2011, THE RELIABILITY AND SECURITY OF NUCLEAR WEAPONS EXCLUSIVELY VIA SIMULATIONS. THE FOLLOWING IS A RECAP OF THIS FIFTEEN-YEAR INDUSTRIAL AND RESEARCH ADVENTURE, WITH JEAN GONNORD FROM THE CEA. (Photo: CEA/CADAM.)
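A note on units before the list of related documents: the editorial above counts in petaflops and exaflops, and the scale is easy to misjudge. Here is a minimal sketch in Python of what those rates mean in wall-clock time (the workload size is an arbitrary illustration, not a figure from the supplement):

```python
# A petaflops is 10**15 floating-point operations per second ("one million
# billion"); an exaflops is 1,000 times more, i.e. 10**18 ops per second.
PETAFLOPS = 1e15
EXAFLOPS = 1e18

workload_ops = 1e21  # hypothetical simulation cost, chosen only for scale

for name, rate in [("petaflops machine", PETAFLOPS),
                   ("exaflops machine", EXAFLOPS)]:
    seconds = workload_ops / rate
    print(f"{name}: {seconds:,.0f} s (~{seconds / 86400:.2f} days)")

# petaflops machine: 1,000,000 s (~11.57 days)
# exaflops machine: 1,000 s (~0.01 days)
```

The same job that occupies a petascale system for nearly two weeks finishes in under twenty minutes at exascale, which is the "new dimension" the editorial refers to.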
Recommended publications
  • Materials Modelling and the Challenges of Petascale and Exascale
Multiscale Materials Modelling on High Performance Computer Architectures. Materials modelling and the challenges of petascale and exascale. Andrew Emerson, Cineca Supercomputing Centre, Bologna, Italy. The MMM@HPC project is funded by the 7th Framework Programme of the European Commission within the Research Infrastructures programme, grant agreement number RI-261594. Contents: introduction to HPC; HPC and the MMM@HPC project; petascale computing; the road to exascale; observations. What is High Performance Computing (HPC)? One definition: the use of parallel processing for running advanced application programs efficiently, reliably and quickly, a term applying especially to systems that function above a teraflop, or 10^12 floating-point operations per second (http://searchenterpriselinux.techtarget.com/definition/high-performance-computing). Another: a branch of computer science that concentrates on developing supercomputers and software to run on supercomputers, a main area of this discipline being parallel processing algorithms and software, i.e. programs that can be divided into little pieces so that each piece can be executed simultaneously by separate processors (Webopedia). Advances due to HPC, e.g. in molecular dynamics: early 1990s, lysozyme, 40k atoms; 2006, satellite tobacco mosaic virus (STMV), 1M atoms, 50 ns; 2008, ribosome, 3.2M atoms, 230 ns; 2011, chromatophore, 100M atoms (SC 2011). Hardware milestones: Cray-1 supercomputer (1976), 80 MHz vector processor, 250 Mflops; Cray X-MP (1982), 2 CPUs with vector units, 400 Mflops; "FERMI", BlueGene/Q, 168,000 cores, 2.1 Pflops.
    [Show full text]
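The Webopedia definition quoted in this excerpt ("programs that can be divided into little pieces so that each piece can be executed simultaneously by separate processors") is easy to demonstrate. A minimal sketch using only Python's standard library follows; the problem and chunk sizes are arbitrary illustrations:

```python
from multiprocessing import Pool

def work(piece):
    # One "little piece" of the overall job: a partial sum of squares.
    return sum(x * x for x in piece)

if __name__ == "__main__":
    # Divide the job into four pieces...
    pieces = [range(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    # ...and execute each piece simultaneously on a separate process.
    with Pool(processes=4) as pool:
        partials = pool.map(work, pieces)
    print(sum(partials))  # same answer as the serial computation
```

Production HPC codes use MPI across thousands of nodes rather than one machine's processes, but the decomposition idea is the same.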
  • Recent Developments in Supercomputing
John von Neumann Institute for Computing. Recent Developments in Supercomputing. Th. Lippert, published in NIC Symposium 2008, G. Münster, D. Wolf, M. Kremer (Editors), John von Neumann Institute for Computing, Jülich, NIC Series, Vol. 39, ISBN 978-3-9810843-5-1, pp. 1-8, 2008. © 2008 by John von Neumann Institute for Computing. Permission to make digital or hard copies of portions of this work for personal or classroom use is granted provided that the copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise requires prior specific permission by the publisher mentioned above. http://www.fz-juelich.de/nic-series/volume39. Recent Developments in Supercomputing. Thomas Lippert, Jülich Supercomputing Centre, Forschungszentrum Jülich, 52425 Jülich, Germany. E-mail: [email protected]. Status and recent developments in the field of supercomputing on the European and German level as well as at the Forschungszentrum Jülich are presented. Encouraged by the ESFRI committee, the European PRACE Initiative is going to create a world-leading European tier-0 supercomputer infrastructure. In Germany, the BMBF formed the Gauss Centre for Supercomputing, the largest national association for supercomputing in Europe. The Gauss Centre is the German partner in PRACE. With its new Blue Gene/P system, the Jülich supercomputing centre has realized its vision of a dual system complex and is heading for Petaflop/s already in 2009. In the framework of the JuRoPA project, in cooperation with German and European industrial partners, the JSC will build a next-generation general-purpose system with very good price-performance ratio and energy efficiency.
    [Show full text]
  • Nascent Exascale Supercomputers Offer Promise, Present Challenges (Core Concepts by Adam Mann, Science Writer)
CORE CONCEPTS: Nascent exascale supercomputers offer promise, present challenges. Adam Mann, Science Writer. Sometime next year, managers at the US Department of Energy's (DOE) Argonne National Laboratory in Lemont, IL, will power up a calculating machine the size of 10 tennis courts and vault the country into a new age of computing. The $500-million mainframe, called Aurora, could become the world's first "exascale" supercomputer, running an astounding 10^18, or 1 quintillion, operations per second. Aurora is expected to have more than twice the peak performance of the current supercomputer record holder, a machine named Fugaku at the RIKEN Center for Computational Science in Kobe, Japan. Fugaku and its calculating kin serve a vital function in modern scientific advancement, performing simulations crucial for discoveries in a wide range of fields. But the transition to exascale will not be easy. "We have to change our computing paradigms, how we write our programs, and how we arrange computation and data management," says one scientist at a national laboratory in NM. That's because supercomputers are complex beasts, consisting of cabinets containing hundreds of thousands of processors. For these processors to operate as a single entity, a supercomputer needs to pass data back and forth between its various parts, running huge numbers of computations at the same time, all while minimizing power consumption. Writing programs for such parallel computing is not easy, and theorists will need to leverage new tools such as machine learning and artificial intelligence to make scientific breakthroughs. Given these challenges, researchers have been planning for exascale computing for more than
    [Show full text]
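The excerpt's point about a supercomputer needing to "pass data back and forth between its various parts" is the essence of message passing. A minimal sketch with mpi4py follows (it assumes an MPI library and the mpi4py package are installed; the partial-sum workload is purely illustrative):

```python
# Many processes compute at once and combine results to act as one machine.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID
size = comm.Get_size()   # total number of processes

# Each process computes a partial result over its own slice of the work...
partial = sum(range(rank * 1000, (rank + 1) * 1000))

# ...and the partial results are passed back and combined across processes.
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} processes computed a combined total of {total}")

# Run with, e.g.: mpiexec -n 4 python reduce_demo.py
```

At exascale the same pattern spans hundreds of thousands of processors, which is why communication cost and power, not raw arithmetic, dominate the design problem the article describes.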
  • Plan-based Job Scheduling for Supercomputers with Shared Burst Buffers (arXiv:2109.00082v1 [cs.DC], 31 Aug 2021)
Plan-based Job Scheduling for Supercomputers with Shared Burst Buffers. Jan Kopanski and Krzysztof Rzadca, Institute of Informatics, University of Warsaw, Stefana Banacha 2, 02-097 Warsaw, Poland ([email protected], [email protected]). Preprint of the paper accepted at the 27th International European Conference on Parallel and Distributed Computing (Euro-Par 2021), Lisbon, Portugal, 2021, DOI: 10.1007/978-3-030-85665-6_8. Abstract: The ever-increasing gap between compute and I/O performance in HPC platforms, together with the development of novel NVMe storage devices (NVRAM), led to the emergence of the burst buffer concept: an intermediate persistent storage layer logically positioned between random-access main memory and a parallel file system. Despite the development of real-world architectures as well as research concepts, resource and job management systems, such as Slurm, provide only marginal support for scheduling jobs with burst buffer requirements, in particular ignoring burst buffers when backfilling. We investigate the impact of burst buffer reservations on the overall efficiency of online job scheduling for common algorithms: First-Come-First-Served (FCFS) and Shortest-Job-First (SJF) EASY-backfilling. We evaluate the algorithms in a detailed simulation with I/O side effects. Our results indicate that the lack of burst buffer reservations in backfilling may significantly deteriorate scheduling. We also show that these algorithms can be easily extended to support burst buffers. Finally, we propose a burst-buffer-aware plan-based scheduling algorithm with simulated annealing optimisation, which improves the mean waiting time by over 20% and mean bounded slowdown by 27% compared to the burst-buffer-aware SJF-EASY-backfilling.
    [Show full text]
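For readers unfamiliar with the EASY-backfilling baseline this abstract mentions, here is a toy single-step sketch (my own simplification for illustration, not the authors' code; it ignores burst buffers entirely, which is precisely the gap the paper addresses):

```python
import heapq

def schedule_step(queue, running, free, now):
    """One pass of SJF EASY-backfilling: start queued jobs, then backfill.

    queue:   list of (name, nodes_needed, est_runtime), mutated in place
    running: heap of (finish_time, nodes_in_use)
    free:    number of idle nodes; returns (started_names, new_free)
    """
    queue.sort(key=lambda j: j[2])            # SJF: shortest runtime first
    started = []
    while queue and queue[0][1] <= free:      # queue head fits: start it now
        name, nodes, runtime = queue.pop(0)
        heapq.heappush(running, (now + runtime, nodes))
        free -= nodes
        started.append(name)
    if not queue:
        return started, free
    # Head does not fit: reserve the earliest time enough nodes free up
    # (assumes the head job fits on the whole machine).
    head_nodes, avail, reservation = queue[0][1], free, None
    for finish, nodes in sorted(running):
        avail += nodes
        if avail >= head_nodes:
            reservation = finish
            break
    # EASY rule (simplified): backfill any waiting job that fits in the idle
    # nodes now AND is predicted to finish before the head's reservation.
    for job in list(queue[1:]):
        name, nodes, runtime = job
        if nodes <= free and reservation and now + runtime <= reservation:
            queue.remove(job)
            heapq.heappush(running, (now + runtime, nodes))
            free -= nodes
            started.append(name)
    return started, free
```

For example, with 4 idle nodes, a head job needing 8 nodes gets a reservation while a short 2-node job slips in ahead without delaying it. The paper's observation is that once jobs also reserve burst-buffer capacity, this node-only reasoning breaks down.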
  • The TOP500 List and Progress in High-Performance Computing
COVER FEATURE: GRAND CHALLENGES IN SCIENTIFIC COMPUTING. The TOP500 List and Progress in High-Performance Computing. Erich Strohmaier, Lawrence Berkeley National Laboratory; Hans W. Meuer, University of Mannheim; Jack Dongarra, University of Tennessee; Horst D. Simon, Lawrence Berkeley National Laboratory. For more than two decades, the TOP500 list has enjoyed incredible success as a metric for supercomputing performance and as a source of data for identifying technological trends. The project's editors reflect on its usefulness and limitations for guiding large-scale scientific computing into the exascale era. The TOP500 list (www.top500.org) has served as the defining yardstick for supercomputing performance since 1993. Published twice a year, it compiles the world's largest installations and some of their main characteristics. Systems are ranked according to their performance on the Linpack benchmark, which solves a dense system of linear equations. Over time, the data collected for the list has enabled the early identification and quantification of many important technological and architectural trends related to high-performance computing (HPC). Here, we briefly describe the project's origins, the principles guiding data collection, and what has made the list so successful during the two-decades-long transition. TOP500 ORIGINS: In the mid-1980s, coauthor Hans Meuer started a small and focused annual conference that has since evolved into the prestigious International Supercomputing Conference (www.isc-hpc.com). During the conference's opening session, Meuer presented statistics about the numbers, locations, and manufacturers of supercomputers worldwide, collected from vendors and colleagues in academia and industry. Initially, it was obvious that the supercomputer label should be reserved for vector processing systems from companies such as Cray, CDC, Fujitsu, NEC, and Hitachi that each claimed to have the fastest system for scientific computation by some selective measure.
    [Show full text]
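Since this excerpt defines the ranking metric (solving a dense system of linear equations), here is a minimal sketch in Python with NumPy of measuring a Linpack-style rate; the matrix size is a toy value of my choosing, and the operation count 2n³/3 + 2n² is the convention used for HPL-style reporting:

```python
import time
import numpy as np

n = 2000                                  # toy size; real HPL runs are vastly larger
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)                 # LU factorisation + triangular solves
elapsed = time.perf_counter() - t0

flops = (2 / 3) * n**3 + 2 * n**2         # conventional dense-solve flop count
print(f"residual {np.linalg.norm(A @ x - b):.2e}, "
      f"{flops / elapsed / 1e9:.2f} Gflop/s")
```

A laptop manages a few to tens of Gflop/s this way; the machines on the list deliver the same computation pattern millions of times faster, which is what the ranking quantifies.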
  • PETASCALE COMPUTING: Algorithms and Applications, Edited by David A. Bader
PETASCALE COMPUTING: Algorithms and Applications. Chapman & Hall/CRC Computational Science Series. Series editor: Horst Simon, Associate Laboratory Director, Computing Sciences, Lawrence Berkeley National Laboratory, Berkeley, California, U.S.A. Aims and scope: This series aims to capture new developments and applications in the field of computational science through the publication of a broad range of textbooks, reference works, and handbooks. Books in this series will provide introductory as well as advanced material on mathematical, statistical, and computational methods and techniques, and will present researchers with the latest theories and experimentation. The scope of the series includes, but is not limited to, titles in the areas of scientific computing, parallel and distributed computing, high performance computing, grid computing, cluster computing, heterogeneous computing, quantum computing, and their applications in scientific disciplines such as astrophysics, aeronautics, biology, chemistry, climate modeling, combustion, cosmology, earthquake prediction, imaging, materials, neuroscience, oil exploration, and weather forecasting. Published titles: PETASCALE COMPUTING: Algorithms and Applications, edited by David A. Bader, Georgia Institute of Technology, Atlanta, U.S.A. Chapman & Hall/CRC, Taylor & Francis Group, 6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742. © 2008 by Taylor & Francis Group, LLC. No claim to original U.S. Government works. Printed in the United States of America on acid-free paper. International Standard Book Number-13: 978-1-58488-909-0 (Hardcover). This book contains information obtained from authentic and highly regarded sources.
    [Show full text]
  • SEVENTH FRAMEWORK PROGRAMME, Research Infrastructures: PRACE-2IP (PRACE Second Implementation Project), D5.1 Preliminary Guidance on Procurements and Infrastructure
SEVENTH FRAMEWORK PROGRAMME, Research Infrastructures. INFRA-2011-2.3.5: Second Implementation Phase of the European High Performance Computing (HPC) service PRACE. PRACE-2IP, PRACE Second Implementation Project, Grant Agreement Number RI-283493. D5.1: Preliminary Guidance on Procurements and Infrastructure. Final version 1.0. Author: Guillermo Aguirre, BSC. Date: 21.02.2013. Project and deliverable information sheet: project ref. RI-283493; project title: PRACE Second Implementation Project; project web site: http://www.prace-project.eu; deliverable ID: D5.1; deliverable nature: report; dissemination level: PU; contractual date of delivery: 28/02/2013; actual date of delivery: 28/02/2013; EC project officer: Leonardo Flores Añover. Document control sheet: title: Preliminary Guidance on Procurements and Infrastructure; document ID: D5.1; version: 1.0; status: final; available at http://www.prace-project.eu; software tool: Microsoft Word 2007; file: D5.1.docx. Written by: Guillermo Aguirre, BSC. Contributors: François Robin, CEA; Jean-Philippe Nominé, CEA; Ioannis Liabotis, GRNET; Norbert Meyer, PSNC; Radek Januszewski, PSNC; Andreas Johansson, SNIC-LIU; Eric Boyer, CINES; George Karagiannopoulos, GRNET; Marco Sbrighi, CINECA; Vladimir Slavnic, IPB; Gert Svensson, SNIC-KTH. Reviewed by: Peter Stefan, NIIF; Florian Berberich, PMO & FZJ. Approved by: MB/TB. Document status sheet: version 0.1, 16/01/2013, draft, first outline; version 0.2, 22/01/2013, draft, added contributions.
    [Show full text]
  • IBM US Nuke-lab Beast 'Sequoia' is Top of the Flops (Petaflops, that is) | insideHPC.com
insideHPC: IBM US Nuke-lab Beast 'Sequoia' is Top of the Flops (Petaflops, that is). 06.18.2012, by Timothy Prickett Morgan. For the second time in the past two years, a new supercomputer has taken the top ranking in the Top 500 list of supercomputers, and it does not use a hybrid CPU-GPU architecture. But the question everyone will be asking at the International Super Computing conference in Hamburg, Germany today is whether this is the last hurrah for such monolithic parallel machines and whether the move toward hybrid machines, where GPUs or other kinds of coprocessors do most of the work, is inevitable. No one can predict the future, of course, even if they happen to be Lawrence Livermore National Laboratory (LLNL) and even if they happen to have just fired up IBM's "Sequoia" BlueGene/Q beast, which has been put through the Linpack benchmark paces, delivering 16.32 petaflops of sustained performance running across the 1.57 million PowerPC cores inside the box. Sequoia has a peak theoretical performance of 20.1 petaflops, so 81.1 per cent of the possible clocks in the box that could do work running Linpack did so when the benchmark test was done. LLNL was where the original BlueGene/L super was commercialized, so that particular Department of Energy nuke lab knows how to tune the massively parallel Power machine better than anyone on the planet, meaning the efficiency is not a surprise.
    [Show full text]
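The efficiency figure quoted in this article is simply sustained Linpack performance divided by theoretical peak. A one-line check in Python (numbers from the article; the quoted 81.1% follows from the unrounded peak of about 20.13 petaflops rather than the rounded 20.1):

```python
rmax, rpeak = 16.32, 20.13  # Sequoia: sustained Linpack vs. peak, Pflop/s
print(f"Linpack efficiency: {rmax / rpeak:.1%}")  # -> Linpack efficiency: 81.1%
```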
  • Exascale Computing Study: Technology Challenges in Achieving Exascale Systems
ExaScale Computing Study: Technology Challenges in Achieving Exascale Systems. Peter Kogge, Editor & Study Lead; Keren Bergman, Shekhar Borkar, Dan Campbell, William Carlson, William Dally, Monty Denneau, Paul Franzon, William Harrod, Kerry Hill, Jon Hiller, Sherman Karp, Stephen Keckler, Dean Klein, Robert Lucas, Mark Richards, Al Scarpelli, Steven Scott, Allan Snavely, Thomas Sterling, R. Stanley Williams, Katherine Yelick. September 28, 2008. This work was sponsored by DARPA IPTO in the ExaScale Computing Study with Dr. William Harrod as Program Manager; AFRL contract number FA8650-07-C-7724. This report is published in the interest of scientific and technical information exchange and its publication does not constitute the Government's approval or disapproval of its ideas or findings. NOTICE: Using Government drawings, specifications, or other data included in this document for any purpose other than Government procurement does not in any way obligate the U.S. Government. The fact that the Government formulated or supplied the drawings, specifications, or other data does not license the holder or any other person or corporation, or convey any rights or permission to manufacture, use, or sell any patented invention that may relate to them. APPROVED FOR PUBLIC RELEASE, DISTRIBUTION UNLIMITED. DISCLAIMER: The following disclaimer was signed by all members of the Exascale Study Group (listed above): I agree that the material in this document reflects the collective views, ideas, opinions and findings of the study participants only, and not those of any of the universities, corporations, or other institutions with which they are affiliated. Furthermore, the material in this document does not reflect the official views, ideas, opinions and/or findings of DARPA, the Department of Defense, or of the United States government.
    [Show full text]
  • June 2012 | TOP500 Supercomputing Sites
June 2012 | TOP500 Supercomputing Sites. MANNHEIM, Germany; BERKELEY, Calif.; and KNOXVILLE, Tenn.: For the first time since November 2009, a United States supercomputer sits atop the TOP500 list of the world's top supercomputers. Named Sequoia, the IBM BlueGene/Q system installed at the Department of Energy's Lawrence Livermore National Laboratory achieved an impressive 16.32 petaflop/s on the Linpack benchmark using 1,572,864 cores. Sequoia is also one of the most energy-efficient systems on the list, which will be released Monday, June 18, at the 2012 International Supercomputing Conference in Hamburg, Germany. This will mark the 39th edition of the list, which is compiled twice each year. On the latest list, Fujitsu's "K Computer", installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan, is now the No. 2 system with 10.51 Pflop/s on the Linpack benchmark using 705,024 SPARC64 processing cores. The K Computer held the No. 1 spot on the previous two lists. The new Mira supercomputer, an IBM BlueGene/Q system at Argonne National Laboratory in Illinois, debuted at No. 3, with 8.15 petaflop/s on the Linpack benchmark using 786,432 cores. The other U.S.
    [Show full text]
  • SC20-Final-Program-v2.pdf
Table of Contents: ACM Gordon Bell COVID Finalist; ACM Gordon Bell Finalist; ACM Student Research Competition: Graduate Posters; ACM Student Research Competition: Undergraduate Posters; Awards Presentation; Birds of a Feather; Booth Sessions; Doctoral Showcase; Early Career Program; Exhibitor Forum; Exhibits; Invited Talk; Job Posting; Keynote; More Than HPC Plenary; Panel; Paper; Research Posters; Scientific Visualization & Data Analytics Showcase; SCinet; State of the Practice Talk; Students@SC; Test of Time; Tutorial; Virtual Student Cluster Competition; Workshop. ACM Gordon Bell COVID Finalist. Thursday, November 19th, 10:00 am - 12:00 pm. Gordon Bell COVID-19 Prize Finalist Session 1. Enabling Rapid COVID-19 Small Molecule Drug Design Through Scalable Deep Learning of Generative Models. Sam Ade Jacobs, Tim Moon, Kevin McLoughlin, Derek Jones, David Hysom, Dong H. Ahn, John Gyllenhaal, Pythagoras Watson, Felice C. Lightsone, Jonathan E. Allen, Ian Karlin, Brian Van Essen (all Lawrence Livermore National Laboratory). We improved the quality and reduced the time to produce machine-learned models for use in small molecule antiviral design. Our globally asynchronous multi-level parallel training approach strong scales to all of Sierra with up to 97.7% efficiency. We trained a novel, character-based Wasserstein autoencoder that produces a higher quality model trained on 1.613 billion compounds in 23 minutes while the previous state-of-the-art takes a day on 1 million compounds.
    [Show full text]
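The abstract's "strong scales ... with up to 97.7% efficiency" has a standard definition: speedup on a fixed total problem divided by the increase in resources. A minimal sketch in Python (my own helper, not the paper's code; the timings and node counts are hypothetical, chosen only to reproduce a 97.7% figure):

```python
def strong_scaling_efficiency(t_base, n_base, t_scaled, n_scaled):
    # Fixed problem size: efficiency = measured speedup / resource ratio.
    speedup = t_base / t_scaled
    return speedup / (n_scaled / n_base)

# Hypothetical runs: 64 nodes take 1000 s; 4096 nodes take 16 s.
print(f"{strong_scaling_efficiency(1000.0, 64, 16.0, 4096):.1%}")  # -> 97.7%
```

Near-100% strong scaling at full-machine size is unusual because communication overhead normally grows with node count, which is why the paper highlights it.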
  • OLCF AR 2016-17 FINAL 9-7-17.pdf
Oak Ridge Leadership Computing Facility Annual Report 2016–2017. Outreach manager: Katie Bethea. Writers: Eric Gedenk, Jonathan Hines, Katie Jones, and Rachel Harken. Designer: Jason Smith. Editor: Wendy Hames. Photography: Jason Richards and Carlos Jones. Stock images: iStockphoto™. Oak Ridge Leadership Computing Facility, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831-6161. Phone: 865-241-6536. Email: [email protected]. Website: https://www.olcf.ornl.gov. Facebook: https://www.facebook.com/oakridgeleadershipcomputingfacility. Twitter: @OLCFGOV. The research detailed in this publication made use of the Oak Ridge Leadership Computing Facility, a US Department of Energy Office of Science User Facility located at DOE's Oak Ridge National Laboratory. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov. Contents. Letter: In a Record 25th Year, We Celebrate the Past and Look to the Future. Science: Streamlining Accelerated Computing for Industry; A Seismic Mapping Milestone; The Shape of Melting in Two Dimensions; A Supercomputing First for Predicting Magnetism in Real Nanoparticles; Researchers Flip Script for Lithium-Ion Electrolytes to Simulate Better Batteries; A Real CAM-Do Attitude. Features: Big Data Emphasis and New Partnerships Highlight the Path to Summit; OLCF Celebrates 25 Years of HPC Leadership. People & Programs: Groups within the OLCF; OLCF User Group and Executive Board; INCITE, ALCC, DD. Systems & Support: Resource Overview; User Experience; Education, Outreach, and Training; 'TitanWeek' Recognizes Contributions of Nation's Premier Supercomputer; Selected Publications; Acronyms. In a Record 25th Year, We Celebrate the Past and Look to the Future: … installed at the turn of the new millennium, to the founding of the Center for Computational Sciences at the US Department of Energy's Oak Ridge National Laboratory.
    [Show full text]