20190714 Hipeac ACACES Keynote


Images courtesy of the PRACE Scientific Steering Committee, "The Scientific Case for Computing in Europe 2018-2026".

[Figure: energy efficiency of flagship supercomputers, GFlops/Watt (log scale) versus year, 1995-2017. Systems plotted: [1] ASCI Red (Intel Teraflops), [2] ASCI Red (memory upgrade), [3] ASCI Blue Pacific (IBM RS/6000), [4] ASCI White, [5] Cray XT3, [6] IBM Roadrunner, [7] IBM Sequoia (BlueGene/Q), [8] Tianhe-1A, [9] Nebulae, [10] PRIMEHPC FX10, [11] Eurora, [12] Fujitsu PRIMEHPC FX100 (SPARC64 XIfx), [13] Sunway TaihuLight, [14] IBM Summit.]

Four eras of computing:
• Mainframe era (circa 1953 - circa 1972)
• First rise of HW acceleration: vector computers (1974-1993)
• Rise of massive homogeneous parallelism (1994-2007)
• The renaissance of acceleration units (2008 - ...?)

Ultra-low-power sensor node (courtesy of Luca Benini, ETH Zurich):
• Sense: MEMS microphone, ULP imager, EMG/ECG/EIT front ends, 100 µW ÷ 2 mW
• Analyze: µController (e.g. Cortex-M) with accelerators and L2 memory; 1 ÷ 25 MOPS on the controller and 1 ÷ 1000 MOPS with accelerators, at 1 ÷ 10 mW; idle ~1 µW, active ~10 mW
• Transmit: short-range high-BW and long-range low-BW IOs for low-rate (periodic) data, SW updates, and commands
• Bottom line: we need a supercomputer chip on the sensor node (a back-of-envelope energy check appears after the EPI goals list below).

National exascale roadmaps, 2016-2023:
• US (4.2 B$ budget, 2016-2023): Summit, 200 PFlop/s, POWER9 + Volta; Sierra, 125 PFlop/s, POWER9 + Volta; Aurora 21 (ANL system?), 1 EFlop/s, Intel + Cray; Frontier, >1.5 EFlop/s, AMD + Cray; El Capitan, >1 EFlop/s.
• China (700 M$ budget, 2016-2020): Tianhe-2 (2013), 33 PFlop/s, Intel processors; Sunway TaihuLight, 125 PFlop/s, Sunway processors; Shuguang/Sugon, 1 EFlop/s, Sunway processors; Tianhe-3, 1 EFlop/s, ARM processors.
• Japan (1 Bn$ budget): K computer (2011), 10 PFlop/s, Fujitsu; ABCI, 37 PFlop/s, 0.55 EFlop/s for deep learning, Intel + NVIDIA; Post-K computer, 1 EFlop/s, Fujitsu + ARM processors.
• Europe (1.4 Bn$ budget): pre-exascale systems, >200 PFlop/s; exascale systems, >1 EFlop/s.

Chip design and manufacturing examples: IBM POWER9, NVIDIA Volta GV100, Sunway SW26010, Intel Xeon E5. (Courtesy of Denis Dutoit; images courtesy of Reuters.com and the European Processor Initiative.)

EPI and EuroHPC timeline, 2019-2024+: EPI SGA1 (Arm + RISC-V), then SGA2 (Arm + RISC-V), then EPI to Exa (Arm + RISC-V); other RIA investment activities (extreme-scale actions, CoE, CoC, GEANT, SME); pre-exascale procurement and deployment, followed by exascale procurement and deployment.

MareNostrum4, total peak performance 12.7 Pflops: general-purpose cluster 11.15 Pflops (1.07.2017); CTE1-P9+Volta 1.57 Pflops (1.03.2018).

EuroHPC plans: 3 "pilot" pre-exascale systems and 5 peta-scale systems.

MareNostrum generations (each introducing new technologies):
• MareNostrum 1 (2004): 42.3 Tflops, 1st in Europe / 4th in the world
• MareNostrum 2 (2006): 94.2 Tflops, 1st in Europe / 5th in the world
• MareNostrum 3 (2012): 1.1 Pflops, 12th in Europe / 36th in the world
• MareNostrum 4 (2017): 11.1 Pflops, 2nd in Europe / 13th in the world
• MareNostrum 5: pre-exascale system, circa 220 million euro budget

European Processor Initiative goals:
• Reach pre-exascale level with a general-purpose CPU core in the first EPI GPP chip
• Develop acceleration technologies for better DP GFLOPS/Watt performance
• Include the MPPA for real-time application acceleration
• Adopt an Arm general-purpose CPU core with SVE vector acceleration in the first EPI chip
• Supply sufficient memory bandwidth (Byte/FLOP) to support GPP applications
• In SGA1, focus on programming models that include acceleration
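As promised above, a back-of-envelope check of the energy per operation the sensor node must hit, a sketch using the slide's round numbers (the 1000 MOPS / 10 mW operating point is the slide's; the rest is arithmetic):

```python
# Back-of-envelope energy budget for the always-on sensor node above.
# Inputs are the slide's order-of-magnitude figures (assumptions).
throughput_mops = 1000          # required compute: ~1000 MOPS with accelerators
active_power_w  = 10e-3         # active power budget: ~10 mW

ops_per_second  = throughput_mops * 1e6
energy_per_op_j = active_power_w / ops_per_second
print(f"energy per operation: {energy_per_op_j * 1e12:.0f} pJ/op")  # -> 10 pJ/op
```

Ten picojoules per operation corresponds to 100 GOPS/W, roughly an order of magnitude beyond the best GFlops/Watt figures in the chart above (with the caveat that sensor-node ops are simpler than DP flops). That is the sense in which the node needs a "supercomputer chip".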
EPI streams:
• S1 - Common Stream: codesign, architecture, system software, and key technologies for the common platform
• S2 - GPP Processor: design and implement the processor chip(s) and PoC system
• S3 - Acceleration: foster acceleration technologies and create building blocks
• S4 - Automotive: address automotive market needs and create a pilot eHPC system
• S5 - Administration: manage and support activities

[Figure: EPI chip roadmap, 2018-2025. General-purpose chips Rhea (GPP rev1) and Chronos (GPP rev2, rev3), an accelerator-only test chip, and an accelerator chip; SGA 1&2 cover the early generations, SGA 3 the later ones.]

[Figure: EPI GPP package. Arm cores, MPPA, eFPGA, and EPAC accelerator tiles connected by a NoC, with D2D links to adjacent chiplets, PCIe Gen5 and high-speed serial (HSL) links, plus HBM and DDR memories; the EPAC combines a VPU, STX, and VRP with bridges to the GPP.]

[Figure: EPAC accelerator tile. A scalar core feeding long-vector lanes (VRF, vector LSU) and specialized STX/NTX units, with L1$/SPM, shared L2$, DMA, SerDes, a high-speed accelerator NoC, and a bridge to the GPP NoC.]

First-order models behind the design (a numeric sketch of these two relations appears at the end of this summary):

  T_EXEC = N · CPI · T_CK
  P = C_eff · V² · (1 / T_CK)

Co-design spans the whole stack: applications, runtime, architecture, and circuits.

Moving from today's designs to the exascale future:
• Flat paradigm → hierarchy
• Homogeneity → heterogeneity
• Fixed pattern → decoupling
• Legacy → pipelining, locality (BW), long vectors
• Established vendor → abstraction, malleability, low voltage, chiplet technology

Can we get there at the right time? EU actors are today followers; it takes lots of money, and the target is difficult to reach.

EPI's bets: instruction extensions, long vectors, circuit-aware architecture, runtime-aware architecture.

[Image: Sagrada Familia, 1975.]

Special thanks to:
• Mateo Valero
• Jesus Labarta
• Adrian Cristal Kestelman
• Osman Unsal
• Peter Hsu
• Luca Benini ([email protected])
• Ying Chih Yang ([email protected])
• Philippe Notton
• Denis Dutoit
• Eugene Griffiths
• Fabrizio Gagliardi

Thank you for attending.
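As promised above, a minimal numeric sketch of the two first-order models; every constant below is an illustrative assumption, not an EPI design figure:

```python
# Toy evaluation of the slide's first-order models:
#   T_exec = N * CPI * T_ck        (execution time)
#   P_dyn  = C_eff * V^2 / T_ck    (dynamic power)
# All constants are illustrative assumptions.
N, CPI = 1.0e9, 1.0        # instructions, cycles per instruction
C_eff  = 1.0e-9            # effective switched capacitance (F)

for scale in (1.0, 0.8, 0.6):          # scale f and V together (DVFS)
    f = 2.0e9 * scale                  # clock frequency (Hz); T_ck = 1/f
    V = 0.9 * scale                    # supply voltage (V)
    T_exec = N * CPI / f               # N * CPI * T_ck
    P_dyn  = C_eff * V**2 * f          # C_eff * V^2 / T_ck
    print(f"scale={scale:.1f}: T_exec={T_exec:.3f} s, P_dyn={P_dyn:.2f} W")
```

Because P_dyn falls with the cube of the scale factor while T_exec grows only linearly, many slow, low-voltage lanes beat one fast scalar core on GFLOPS/Watt; this is the arithmetic behind the long-vector, accelerator-rich design above.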
Recommended publications
  • 2017 HPC Annual Report Team Would Like to Acknowledge the Invaluable Assistance Provided by John Noe
Sandia National Laboratories, 2017 High Performance Computing Annual Report. The 2017 High Performance Computing Annual Report is dedicated to John Noe and Dino Pavlakos.

Building a foundational framework in high performance computing

Editor: Yasmin Dennig. Contributing writers: Megan Davidson, Mattie Hensley. Contributing editor: Laura Sowko. Design: Stacey Long.

Sandia National Laboratories has a long history of significant contributions to the high performance computing community and industry. Our innovative computer architectures allowed the United States to become the first to break the teraflop barrier—propelling us to the international spotlight. Our advanced simulation and modeling capabilities have been integral in high consequence US operations such as Operation Burnt Frost. Strong partnerships with industry leaders, such as Cray, Inc. and Goodyear, have enabled them to leverage our high performance computing capabilities to gain a tremendous competitive edge in the marketplace.

As part of our continuing commitment to provide modern computing infrastructure and systems in support of Sandia's missions, we made a major investment in expanding Building 725 to serve as the new home of high performance computer (HPC) systems at Sandia. Work is expected to be completed in 2018 and will result in a modern facility of approximately 15,000 square feet of computer center space. The facility will be ready to house the newest National Nuclear Security Administration/Advanced Simulation and Computing (NNSA/ASC) prototype platform being acquired by Sandia, with delivery in late 2019 or early 2020. This new system will enable continuing advances by Sandia science and engineering staff in the areas of operating system R&D, operation cost effectiveness (power and innovative cooling technologies), user environment, and application code performance.
  • Safety and Security Challenge
SAFETY AND SECURITY CHALLENGE: TOP SUPERCOMPUTERS IN THE WORLD, FEATURING TWO OF DOE'S!!

Summary: The U.S. Department of Energy (DOE) plays a very special role in keeping you safe. DOE has two supercomputers in the top ten supercomputers in the whole world. Titan is the name of the supercomputer at the Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee. Sequoia is the name of the supercomputer at Lawrence Livermore National Laboratory (LLNL) in Livermore, California. How do supercomputers keep us safe and what makes them in the Top Ten in the world?

In fields where scientists deal with issues from disaster relief to the electric grid, simulations provide real-time situational awareness to inform decisions. DOE supercomputers have helped the Federal Bureau of Investigation find criminals, and the Department of Defense assess terrorist threats. Currently, ORNL is building a computing infrastructure to help the Centers for Medicare and Medicaid Services combat fraud. An important focus lab-wide is managing the tsunamis of data generated by supercomputers and facilities like ORNL's Spallation Neutron Source. In terms of national security, ORNL plays an important role in national and global security due to its expertise in advanced materials, nuclear science, supercomputing and other scientific specialties. Discovery and innovation in these areas are essential for protecting US citizens and advancing national and global security priorities.

[Image: Titan Supercomputer at Oak Ridge National Laboratory]

Background: ORNL is using computing to tackle national challenges such as safe nuclear energy systems and running simulations for lower costs for vehicle

[Image: Lawrence Livermore's Sequoia ranked No.
  • NNSA — Weapons Activities
Corporate Context for National Nuclear Security Administration (NNSA) Programs. This section on Corporate Context, included for the first time in the Department's budget, is provided to facilitate the integration of the FY 2003 budget and performance measures. The Department's Strategic Plan published in September 2000 is no longer relevant, since it does not reflect the priorities laid out in President Bush's Management Agenda, the 2001 National Energy Policy, OMB's R&D project investment criteria, or the new policies that will be developed to address an ever-evolving and challenging terrorism threat. The Department has initiated the development of a new Strategic Plan due for publication in September 2002; however, that process is just beginning. To maintain continuity of our approach that links program strategic performance goals and annual targets to higher-level Departmental goals and Strategic Objectives, the Department has developed a revised set of Strategic Objectives in the structure of the September 2000 Strategic Plan.

For more than 50 years, America's national security has relied on the deterrent provided by nuclear weapons. Designed, built, and tested by the Department of Energy (DOE) and its predecessor agencies, these weapons helped win the Cold War, and they remain a key component of the Nation's security posture. The Department's National Nuclear Security Administration (NNSA) now faces a new and complex set of challenges to its national nuclear security missions in countering the threats of the 21st century. One of the most critical challenges is being met by the Stockpile Stewardship program, which is maintaining the effectiveness of our nuclear deterrent in the absence of underground nuclear testing.
  • The Artisanal Nuke, 2014
THE ARTISANAL NUKE, by Mary C. Dixon. USAF Center for Unconventional Weapons Studies, 125 Chennault Circle, Maxwell Air Force Base, Alabama 36112-6427. July 2014.

Disclaimer: The opinions, conclusions, and recommendations expressed or implied in this publication are those of the author and do not necessarily reflect the views of the Air University, Air Force, or Department of Defense.

Table of Contents: Disclaimer; Table of Contents; About the Author; Acknowledgements; Abstract; 1. Introduction; 2. Background; 3. Stockpile Stewardship; 4. Opposition & Problems; 5. Milestones & Accomplishments
  • The Case for the Comprehensive Nuclear Test Ban Treaty
An Arms Control Association Briefing Book: Now More Than Ever, The Case for the Comprehensive Nuclear Test Ban Treaty. February 2010. Tom Z. Collina with Daryl G. Kimball.

About the Authors: Tom Z. Collina is Research Director at the Arms Control Association. He has over 20 years of professional experience in international security issues, previously serving as Director of the Global Security Program at the Union of Concerned Scientists and Executive Director of the Institute for Science and International Security. He was actively involved in national efforts to end U.S. nuclear testing in 1992 and international negotiations to conclude the CTBT in 1996.

Daryl G. Kimball is Executive Director of the Arms Control Association. Previously he served as Executive Director of the Coalition to Reduce Nuclear Dangers, a consortium of 17 of the largest U.S. non-governmental organizations working together to strengthen national and international security by reducing the threats posed by nuclear weapons. He also worked as Director of Security Programs for Physicians for Social Responsibility, where he helped spearhead non-governmental efforts to win congressional approval for the 1992 nuclear test moratorium legislation, U.S. support for a "zero-yield" test ban treaty, and the U.N.'s 1996 endorsement of the CTBT.

Acknowledgements: The authors wish to thank our colleagues Pierce Corden, David Hafemeister, Katherine Magraw, and Benn Tannenbaum for sharing their expertise and reviewing draft text.
  • Technical Issues in Keeping the Nuclear Stockpile Safe, Secure, and Reliable
    Technical Issues in Keeping the Nuclear Stockpile Safe, Secure, and Reliable Marvin L. Adams Texas A&M University Sidney D. Drell Stanford University ABSTRACT The United States has maintained a safe, secure, and reliable nuclear stockpile for 16 years without relying on underground nuclear explosive tests. We argue that a key ingredient of this success so far has been the expertise of the scientists and engineers at the national labs with the responsibility for the nuclear arsenal and infrastructure. Furthermore for the foreseeable future this cadre will remain the critical asset in sustaining the nuclear enterprise on which the nation will rely, independent of the size of the stockpile or the details of its purpose, and to adapt to new requirements generated by new findings and changes in policies. Expert personnel are the foundation of a lasting and responsive deterrent. Thus far, the United States has maintained a safe, secure, and reliable nuclear stockpile by adhering closely to the designs of existing “legacy” weapons. It remains to be determined whether this is the best course to continue, as opposed to introducing new designs (as envisaged in the original RRW program), re-using previously designed and tested components, or maintaining an evolving combination of new, re-use, and legacy designs. We argue the need for a stewardship program that explores a broad spectrum of options, includes strong peer review in its assessment of options, and provides the necessary flexibility to adjust to different force postures depending on evolving strategic developments. U.S. decisions and actions about nuclear weapons can be expected to affect the nuclear policy choices of other nations – non-nuclear as well as nuclear – on whose cooperation we rely in efforts to reduce the global nuclear danger.
  • W80-4 LRSO Missile Warhead
Description: As part of the effort to keep the nuclear arsenal safe, secure, and effective, the U.S. Air Force is developing a nuclear-capable Long-Range Standoff (LRSO) cruise missile. The LRSO is designed to replace the aging AGM-86 Air-Launched Cruise Missile, operational since 1982. The U.S. is extending the lifetime of its warhead, the W80-1, through the W80-4 Life Extension Program (LEP). The W80-1 will be refurbished and adapted to the new LRSO missile as the W80-4. As the lead design agency for its nuclear explosive package, Lawrence Livermore, working jointly with Sandia National Laboratories, is developing options for the W80-4 that …

Accomplishments: In 2015, the W80-4 LEP entered into Phase 6.2—feasibility study and down-select—to establish program planning, mature technical requirements, and propose a full suite of design options. In October 2017, the program entered Phase 6.2A—design definition and cost study—to establish detailed cost estimates for all options and a baseline from which the core cost estimate and detailed schedule are generated, with appropriate risk mitigation. The W80-4 team is making significant progress: An effective team that includes the National Nuclear Security Administration and the U.S. Air Force is in place; coordination is robust. An extensive range of full-system tests, together with hundreds of small-scale tests, are in progress to provide confidence in the performance of W80-4 LEP design options that include additively manufactured components. LLNL leads DOE's effort to develop and qualify the insensitive high explosive formulated using newly produced constituents. Manufacturing of production-scale quantities of the new explosives is underway and on schedule.
  • A Comparison of the Current Top Supercomputers
A Comparison of the Current Top Supercomputers. Russell Martin, Xavier Gallardo.

Top500.org:
• Started in 1993 to maintain statistics on the top 500 performing supercomputers in the world
• Started by Hans Meuer, updated twice annually
• Uses LINPACK as the main benchmark

November 2012 list: 1. Titan, Cray, USA; 2. Sequoia, IBM, USA; 3. K Computer, Fujitsu, Japan; 4. Mira, IBM, USA; 5. JUQUEEN, IBM, Germany; 6. SuperMUC, IBM, Germany

Performance trends: http://top500.org/statistics/perfdevel/
• 84.8% use 6+ processor cores
• 76% Intel, 12% AMD Opteron, 10.6% IBM Power
• Most common interconnects: InfiniBand, Gigabit Ethernet
• 251 in USA, 123 in Asia, 105 in Europe

Titan:
• Current #1 computer, at Oak Ridge National Laboratory, Oak Ridge, Tennessee
• Operational Oct 29, 2012; scientific research
• AMD Opteron / NVIDIA Tesla; 18688 nodes, 560640 cores
• Cray Gemini interconnect
• Theoretical 27 PFLOPS

Sequoia:
• Ranked #2, located at Lawrence Livermore National Laboratory in Livermore, CA
• Designed for "Uncertainty Quantification Studies"; fully classified work starting April 17, 2013
• Cores: 1572864; nodes: 98304; RAM: 1572864 GB
• Interconnect: custom 5D torus
• Max performance: 16324.8 Tflops; theoretical max: 20132.7 Tflops
• Power consumption: 7890 kW
• IBM BlueGene/Q architecture: started in 1999 for protein folding; custom SoC layout with 16 PPC cores per chip, each 1.6 GHz at 0.8 V; built for OpenMP and POSIX programs; automatic SIMD exploitation

K Computer:
• #3 on Top 500, #1 in June 2011
• RIKEN Advanced Institute for Computational
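Two figures of merit follow directly from the Sequoia numbers quoted above; a quick derivation sketch using only the excerpt's values:

```python
# Derived metrics for Sequoia, using the figures quoted in the excerpt.
rmax_tflops  = 16324.8   # sustained LINPACK (Tflops)
rpeak_tflops = 20132.7   # theoretical peak (Tflops)
power_kw     = 7890.0    # reported power draw

linpack_efficiency = rmax_tflops / rpeak_tflops
gflops_per_watt    = (rmax_tflops * 1e3) / (power_kw * 1e3)

print(f"LINPACK efficiency: {linpack_efficiency:.1%}")        # ~81.1%
print(f"energy efficiency:  {gflops_per_watt:.2f} GFlops/W")  # ~2.07
```

At roughly 2 GFlops/W, BlueGene/Q-class systems sat at the top of the contemporary Green500 energy-efficiency lists.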
  • Getting up to Speed: The Future of Supercomputing
GETTING UP TO SPEED: THE FUTURE OF SUPERCOMPUTING. Susan L. Graham, Marc Snir, and Cynthia A. Patterson, Editors. Committee on the Future of Supercomputing, Computer Science and Telecommunications Board, Division on Engineering and Physical Sciences. THE NATIONAL ACADEMIES PRESS, Washington, D.C., www.nap.edu

THE NATIONAL ACADEMIES PRESS, 500 Fifth Street, N.W., Washington, DC 20001

NOTICE: The project that is the subject of this report was approved by the Governing Board of the National Research Council, whose members are drawn from the councils of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine. The members of the committee responsible for the report were chosen for their special competences and with regard for appropriate balance. Support for this project was provided by the Department of Energy under Sponsor Award No. DE-AT01-03NA00106. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the organizations that provided support for the project.

International Standard Book Number 0-309-09502-6 (Book); International Standard Book Number 0-309-54679-6 (PDF); Library of Congress Catalog Card Number 2004118086.

Cover designed by Jennifer Bishop. Cover images (clockwise from top right, front to back): 1. Exploding star. Scientific Discovery through Advanced Computing (SciDAC) Center for Supernova Research, U.S. Department of Energy, Office of Science. 2. Hurricane Frances, September 5, 2004, taken by GOES-12 satellite, 1 km visible imagery. U.S. National Oceanographic and Atmospheric Administration. 3. Large-eddy simulation of a Rayleigh-Taylor instability run on the Lawrence Livermore National Laboratory MCR Linux cluster in July 2003.
  • KULL: LLNL's ASCI Inertial Confinement Fusion Simulation Code
KULL: LLNL'S ASCI INERTIAL CONFINEMENT FUSION SIMULATION CODE

James A. Rathkopf, Douglas S. Miller, John M. Owen, Linda M. Stuart, Michael R. Zika, Peter G. Eltgroth, Niel K. Madsen, Kathleen P. McCandless, Paul F. Nowak, Michael K. Nemanic, Nicholas A. Gentile, and Noel D. Keen. Lawrence Livermore National Laboratory, P.O. Box 808, L-18, Livermore, California 94551-0808. [email protected]

Todd S. Palmer, Department of Nuclear Engineering, Oregon State University, Corvallis, Oregon 97331. [email protected]

ABSTRACT: KULL is a three dimensional, time dependent radiation hydrodynamics simulation code under development at Lawrence Livermore National Laboratory. A part of the U.S. Department of Energy's Accelerated Strategic Computing Initiative (ASCI), KULL's purpose is to simulate the physical processes in Inertial Confinement Fusion (ICF) targets. The National Ignition Facility, where ICF experiments will be conducted, and ASCI are part of the experimental and computational components of DOE's Stockpile Stewardship Program. This paper provides an overview of ASCI and describes KULL, its hydrodynamic simulation capability and its three methods of simulating radiative transfer. Particular emphasis is given to the parallelization techniques essential to obtain the performance required of the Stockpile Stewardship Program and to exploit the massively parallel processor machines that ASCI is procuring.

1. INTRODUCTION: With the end of underground nuclear testing, the United States must rely solely on non-nuclear experiments and numerical simulations, together with archival underground nuclear test data, to certify the continued safety, performance, and reliability of the nation's nuclear stockpile.
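The parallelization the abstract emphasizes boils down, on machines of that class, to domain decomposition with ghost-cell (halo) exchange. Below is a generic sketch of that pattern: hypothetical 1D explicit diffusion via mpi4py, an illustration of the technique only, not KULL's actual scheme (the grid size, hot-spot placement, and file name are invented for the example):

```python
# Generic domain decomposition + halo exchange, the core pattern behind
# parallel radiation-diffusion solvers on MPP machines.
# Illustrative only -- NOT KULL's actual scheme.
# Run with, e.g.:  mpiexec -n 4 python halo_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 100                        # cells owned by this rank (assumed)
u = np.zeros(n_local + 2)            # +2 ghost cells at the ends
if rank == 0:
    u[n_local // 2] = 1.0            # initial hot spot

# Neighbors in the 1D decomposition; PROC_NULL turns boundary
# exchanges into no-ops (zero Dirichlet ghosts).
left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

alpha = 0.25                         # diffusion number, stable (<= 0.5)
for _ in range(1000):
    # swap ghost cells with both neighbors
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # explicit diffusion update on owned cells only
    u[1:-1] += alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2])

total = comm.reduce(u[1:-1].sum(), op=MPI.SUM, root=0)
if rank == 0:
    print(f"global sum after diffusion: {total:.4f}")
```

Each rank touches only its own slab plus two ghost cells, so the same loop runs unchanged on one node or thousands; that locality is what made codes like KULL viable on ASCI-scale machines.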
  • Newsline Year in Review 2020
NEWSLINE: YEAR IN REVIEW 2020. Lawrence Livermore National Laboratory.

IT WAS AN EXCEPTIONAL YEAR, DESPITE RESTRICTIONS

Though 2020 was dominated by events surrounding the COVID pandemic — whether it was adapting to social distancing and the need to telecommute, safeguarding employees as they returned to conduct mission-essential work or engaging in COVID-related research — the Laboratory managed an exceptional year in all facets of S&T and operations. The Lab delivered on all missions despite the pandemic and its restrictions.

The year was marked by myriad advances in high performance computing as the Lab moved closer to making El Capitan and exascale computing a reality. Lab researchers completed assembly and qualification of 16 prototype high-voltage, solid state pulsed-power drivers to meet the delivery schedule for the Scorpius radiography project.

LLNL scientists released "Getting to Neutral: Options for Negative Carbon Emissions in California," identifying a robust suite of technologies to help California clear the last hurdle and become carbon neutral — and ultimately carbon negative — by 2045.

Lab engineers and biologists developed a "brain-on-a-chip" device capable of recording the neural activity of living brain cell cultures in three dimensions, a significant advancement in the realistic modeling of the human brain outside of the body. Scientists paired 3D-printed, living human brain vasculature with advanced computational flow simulations to better understand tumor cell attachment to blood vessels and developed the first living 3D-printed aneurysm to improve surgical procedures and personalize treatments.
  • IBM US Nuke-lab Beast 'Sequoia' is Top of the Flops (Petaflops, that is) | insideHPC.com
IBM US Nuke-lab Beast 'Sequoia' is Top of the Flops (Petaflops, that is). 06.18.2012. By Timothy Prickett Morgan.

For the second time in the past two years, a new supercomputer has taken the top ranking in the Top 500 list of supercomputers – and it does not use a hybrid CPU-GPU architecture. But the question everyone will be asking at the International Super Computing conference in Hamburg, Germany today is whether this is the last hurrah for such monolithic parallel machines and whether the move toward hybrid machines where GPUs or other kinds of coprocessors do most of the work is inevitable. No one can predict the future, of course, even if they happen to be Lawrence Livermore National Laboratory (LLNL) and even if they happen to have just fired up IBM's "Sequoia" BlueGene/Q beast, which has been put through the Linpack benchmark paces, delivering 16.32 petaflops of sustained performance running across the 1.57 million PowerPC cores inside the box. Sequoia has a peak theoretical performance of 20.1 petaflops, so 81.1 per cent of the possible clocks in the box that could do work running Linpack did so when the benchmark test was done. LLNL was where the original BlueGene/L super was commercialized, so that particular Department of Energy nuke lab knows how to tune the massively parallel Power machine better than anyone on the planet, meaning the efficiency is not a surprise.
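The article's peak figure is consistent with its core count; a one-line cross-check, assuming BlueGene/Q's published per-core capability (1.6 GHz, 8 flops per cycle via the 4-wide FMA unit; the flops-per-cycle figure is an assumption from the architecture's specs, not from the article):

```python
# Cross-check the article's peak number for Sequoia (BlueGene/Q).
cores       = 1_572_864    # "1.57 million PowerPC cores"
clock_hz    = 1.6e9        # BlueGene/Q core clock
flops_cycle = 8            # 4-wide FMA per core (assumed from BG/Q specs)

peak_pflops = cores * clock_hz * flops_cycle / 1e15
print(f"peak: {peak_pflops:.1f} PFlops")   # -> 20.1 PFlops, matching the article
```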