Getting the Performance Into High Performance Computing


Slides 1-4: Title Slides

Getting the Performance Out Of High Performance Computing /
Getting the Performance Into High Performance Computing
Jack Dongarra
Innovative Computing Lab, University of Tennessee,
and Computer Science and Math Division, Oak Ridge National Lab
http://www.cs.utk.edu/~dongarra/

Slide 5: Technology Trends: Microprocessor Capacity
- Gordon Moore (co-founder of Intel) predicted in 1965 that the transistor density of semiconductor chips would double roughly every 18 months: 2X transistors per chip every 1.5 years, now called "Moore's Law".
- Microprocessors have become smaller, denser, and more powerful.
- Not just processors: storage, internet bandwidth, etc.

Slide 6: Moore's Law
[Chart: peak performance from 1950 to 2010, climbing from about 1 KFlop/s to 1 PFlop/s through the scalar (EDSAC 1, UNIVAC 1, IBM 7090, CDC 6600, IBM 360/195, CDC 7600), vector/superscalar (Cray 1, Cray X-MP, Cray 2), and parallel (TMC CM-2, Cray T3D, TMC CM-5, ASCI Red, ASCI White Pacific, Earth Simulator) eras.]

Slide 7: Next Generation: IBM Blue Gene/L and ASCI Purple
- Blue Gene/L announced 11/19/02: one of two machines for LLNL, 360 TFlop/s, 130,000 processors, Linux, FY 2005.
- Plus ASCI Purple: IBM Power 5 based, 12K processors, 100 TFlop/s.

Slide 8: To Be Provocative... a Citation in the Press, March 10th, 2008 (a hypothetical future report)
"DOE Supercomputers Sit Idle. WASHINGTON, Mar. 10, 2008: GAO reports that after almost 5 years of effort and several hundreds of M$ spent at the DOE labs, the high performance computers recently purchased did not meet users' expectations and are sitting idle... Alan Laub, head of the DOE efforts, reports that the computer equ..."
How could this happen?
- The complexity of programming these machines was underestimated.
- Users were unprepared for the lack of reliability of the hardware and software.
- Little effort was spent on the medium- and long-term research needed to solve problems that were foreseen 5 years ago in the areas of applications, algorithms, middleware, programming models, and computer architectures, ...

Slide 9: Software Technology & Performance
- There is a tendency to focus on the hardware; software is required to bridge an ever-widening gap.
- The gap between potential and delivered performance is very steep:
  - Performance is obtained only if the data and controls are set up just right.
  - Otherwise, dramatic performance degradation: a very unstable situation.
  - It will become more unstable as systems change and become more complex.
- The challenge for applications, libraries, and tools is formidable at the Tflop/s level, even greater at Pflop/s; some might say insurmountable.

Slide 10: Linpack (100x100) Analysis: The Machine on My Desk 12 Years Ago and Today
- Compaq 386/SX20 with FPA: 0.16 Mflop/s.
- Pentium 4, 2.8 GHz: 1317 Mflop/s.
- 12 years gives a factor of ~8231: doubling in less than 12 months, for 12 years. Moore's Law alone gives a factor of 256, so how do we get a factor > 8000?
- A complex set of interactions between application, algorithms, programming language, compiler, machine instructions, and hardware:
  - Clock speed increase = 128x
  - External bus width & caching (16 vs. 64 bits) = 4x
  - Floating point (4/8-bit multi-cycle vs. 64 bits in one clock) = 8x
  - Compiler technology = 2x
- Many layers of translation sit between the application and the hardware, changing with each generation.
- However, the peak of that Pentium 4 is 5.6 Gflop/s and we are getting 1.32 Gflop/s: still a factor of 4.25 off of peak.
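The factor-of-8000 argument and the peak-versus-delivered gap on the slide above are plain arithmetic, so they can be checked with a few lines of code. The C sketch below is not from the original deck; it simply multiplies out the contributing factors quoted on the slide and compares them with the measured improvement and with the Pentium 4 peak.

```c
/* Sketch: reproduce the arithmetic from the "Linpack (100x100) Analysis" slide. */
#include <stdio.h>

int main(void)
{
    double old_mflops = 0.16;    /* Compaq 386/SX20 with FPA              */
    double new_mflops = 1317.0;  /* Pentium 4, 2.8 GHz                    */

    double measured  = new_mflops / old_mflops;       /* ~8231x           */
    double clock     = 128.0;    /* clock-speed increase                   */
    double bus       = 4.0;      /* external bus width & caching           */
    double fp        = 8.0;      /* multi-cycle FP -> 64-bit, 1 per clock  */
    double compiler  = 2.0;      /* compiler technology                    */
    double explained = clock * bus * fp * compiler;   /* 8192x            */

    /* theoretical peak: 1 FPU * 2 flops/cycle * 2800 Mcycles/s */
    double peak_mflops = 1.0 * 2.0 * 2800.0;

    printf("measured improvement : %.0fx\n", measured);
    printf("explained by factors : %.0fx\n", explained);
    printf("theoretical peak     : %.0f Mflop/s\n", peak_mflops);
    printf("peak / delivered     : %.2f\n", peak_mflops / new_mflops);
    return 0;
}
```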
Slide 11: Where Does Much of Our Lost Performance Go? (Why Should I Care About the Memory Hierarchy?)
[Chart: the processor-DRAM memory gap (latency), 1980-2004. Processor performance improves at ~60%/year (2X every 1.5 years, "Moore's Law"), DRAM at ~9%/year (2X every 10 years), so the processor-memory performance gap grows at ~50% per year.]

Slide 12: Optimizing Computation and Memory Use
- Computational optimization:
  - Theoretical peak = (# FPUs) x (flops/cycle) x (clock rate).
  - Pentium 4: (1 FPU) x (2 flops/cycle) x (2.8 GHz) = 5600 MFlop/s.
- Operations like y = a*x + y need 3 operands (24 bytes) for every 2 flops, so 5600 Mflop/s requires 8400 MWord/s of bandwidth from memory.
- Memory optimization:
  - Theoretical peak = (bus width) x (bus speed).
  - Pentium 4: (32 bits) x (533 MHz) = 2132 MB/s = 266 MWord/s.
- That is a factor of ~30 short of what is required to drive the processor from memory at peak performance.

Slide 13: Memory Hierarchy
- By taking advantage of the principle of locality:
  - Present the user with as much memory as is available in the cheapest technology.
  - Provide access at the speed offered by the fastest technology.
[Diagram: registers (100s of bytes, ~1 ns) and on-chip cache (SRAM, KBs, ~10s of ns) on the processor; level 2 and 3 cache and main memory (DRAM, MBs, ~100s of ns); distributed memory (~0.1 ms) and remote cluster memory (~10 ms); secondary storage (disk, GBs, ~10s of ms); tertiary storage (disk/tape, TBs, ~10s of seconds).]

Slide 14: A Tool to Help Understand What's Going On in the Processor
- A complex system with many filters.
- Need to identify bottlenecks, prioritize optimization, and focus on the important aspects.

Slide 15: Tools for Performance Analysis, Modeling, and Optimization
- PAPI
- DynInst
- SvPablo
- ROSE: a compiler framework for recognition of high-level abstractions and specification of transformations.

Slide 16: Example: PERC on a Climate Model
- Interaction with the SciDAC climate development effort.
- Profiling: identifying performance bottlenecks and prioritizing enhancements.
- Evaluation of the code over a 9-month period.
- Produced a 400% improvement via decreased overhead and increased scalability.

Slide 17: Signatures: Key Factors in Applications and Systems That Affect Performance
- Application signatures: a characterization of the operations the application needs to perform and a description of its demands on resources, used to predict application behavior and performance. They include operation counts, memory reference patterns, data dependencies, I/O characteristics, synchronization points, thread-level parallelism, instruction-level parallelism, and the ratio of memory references to floating-point operations.
- Hardware signatures: the performance capabilities of the machine, including latencies and bandwidths of the memory hierarchy (local to a node and to remote nodes), instruction issue rates, cache size, and TLB size.
- Algorithm signatures and software signatures.
- Execution signature: combine application and machine signatures to provide accurate performance models.
[Diagram: a parallel or distributed application feeds a performance monitor; live performance data is observed and compared against a model signature, generating a signature observation and feeding a degree of similarity back into the application signature.]
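Hardware-counter libraries such as PAPI (named on the tools slide above) are the usual way to measure several of these signature quantities for a real kernel. The C sketch below is illustrative only and is not from the deck: it wraps the y = a*x + y operation from the "Optimizing Computation and Memory Use" slide with PAPI's low-level API and reads two preset events. Whether PAPI_FP_OPS and PAPI_L2_DCM are available, and how accurately they count, depends on the platform, so treat the event choice as an assumption; the code typically links with -lpapi.

```c
/* Sketch: count flops and L2 data-cache misses for y = a*x + y with PAPI.
 * PAPI_FP_OPS and PAPI_L2_DCM are standard preset events, but platform
 * support varies; error handling is kept minimal for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <papi.h>

#define N 1000000

int main(void)
{
    static double x[N], y[N];
    double a = 2.5;
    int eventset = PAPI_NULL;
    long long counts[2];

    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) exit(1);
    PAPI_create_eventset(&eventset);
    PAPI_add_event(eventset, PAPI_FP_OPS);   /* floating-point operations */
    PAPI_add_event(eventset, PAPI_L2_DCM);   /* level-2 data cache misses */

    for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    PAPI_start(eventset);
    for (int i = 0; i < N; i++)              /* 2 flops, 3 operands each  */
        y[i] = a * x[i] + y[i];
    PAPI_stop(eventset, counts);

    printf("flops measured : %lld (expected ~%d)\n", counts[0], 2 * N);
    printf("L2 data misses : %lld\n", counts[1]);
    return 0;
}
```

The same 3-operands-per-2-flops ratio is what makes the slide's bandwidth arithmetic so unfavorable: at 5600 Mflop/s the kernel would need 8400 MWord/s, roughly 30 times what the 266 MWord/s bus can deliver.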
Slide 18: Algorithms vs. Applications
[Table mapping ten algorithm classes against nine application areas: lattice gauge theory (QCD), quantum chemistry, weather simulation, computational fluid dynamics, adjustment of geodetic networks, inverse problems, structural mechanics, electronic device simulation, and circuit simulation. Each area is marked with the algorithms it relies on: sparse linear system solvers (6 of the 9 areas), nonlinear algebraic system solvers (4), sparse eigenvalue problems (3), FFT (3), rapid elliptic problem solvers (3), linear least squares (2), multigrid schemes (2), stiff ODE solvers (2), Monte Carlo schemes (2), and integral transformations (1).]
From: "Supercomputing Tradeoffs and the Cedar System," E. Davidson, D. Kuck, D. Lawrie, and A. Sameh, in High-Speed Computing: Scientific Applications and Algorithm Design, ed. R. Wilhelmson, University of Illinois Press, 1986.

Slide 19: Update to Sameh's Table? The Application Performance Matrix
- http://www.krellinst.org/matrix/
- Next step: look at application signatures, algorithm choices, software profile, and architecture (machine).
- Data mine to extract information.
- Need signatures for A3S.

Slide 20: Performance Tuning
- Motivation: the performance of many applications is dominated by a few kernels.
- The conventional approach is hand-tuning by the user or vendor:
  - Very time-consuming and tedious work.
  - Even with intimate knowledge of the architecture and compiler, performance is hard to predict.
  - A growing list of kernels to tune, and the work must be redone for every architecture and compiler.
  - Compiler technology often lags the architecture.
- It is not just a compiler problem:
  - The best algorithm may depend on the input, so some tuning must happen at run time.
  - Not all algorithms are semantically or mathematically equivalent.

Slide 21: Automatic Performance Tuning to Hide Complexity
- Approach: for each kernel,
  1. Identify and generate a space of algorithms.
  2. Search for the fastest one, by running them.
- What is a space of algorithms? Depending on the kernel and input, candidates may vary in instruction mix and order, memory access patterns, data structures, and mathematical formulation.
- When do we search? Once per kernel and architecture, at compile time, at run time, or all of the above.

Slide 22: Some Automatic Tuning Projects
- ATLAS (www.netlib.org/atlas) (Dongarra, Whaley): used in Matlab and many SciDAC and ASCI projects.
- PHiPAC (www.icsi.berkeley.edu/~bilmes/phipac) (Bilmes, Asanovic, Vuduc, Demmel).
- Sparsity (www.cs.berkeley.edu/~yelick/sparsity) (Yelick, Im).
- Self Adapting Linear Algebra Software (SALAS) (Dongarra, Eijkhout, Gropp, Keyes).
- FFTs and signal processing:
  - FFTW (www.fftw.org): won the 1999 Wilkinson Prize for Numerical Software.
  - SPIRAL (www.ece.cmu.edu/~spiral): extensions to other transforms and DSPs.
  - UHFFT: extensions to higher dimensions and parallelism.
[Chart: BLAS performance (0 to 3500 MFlop/s) for the vendor BLAS, ATLAS BLAS, and reference Fortran 77 BLAS on a dozen architectures: DEC ev56-533, DEC ev6-500, AMD Athlon-600, HP9000/735/135, IBM PPC604-112, IBM Power2-160, IBM Power3-200, Intel P-III 933 MHz, SGI R10000ip28-200, SGI R12000ip30-270, Sun UltraSparc2-200, and Intel P-4 2.53 GHz with SSE2.]

Slide 23: Futures for High Performance Scientific Computing
- Numerical software will be adaptive, exploratory, and intelligent.
- Determinism in numerical computing will be gone.
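The "generate a space of algorithms and search for the fastest by running them" approach described on the automatic-tuning slides can be illustrated with a toy example. The C sketch below is not ATLAS and is not from the deck; it varies only one dimension of the search space, the blocking factor of a tiled matrix multiply, times each candidate, and keeps the fastest. Real autotuners such as ATLAS, PHiPAC, and FFTW search far richer spaces (instruction mix, data layout, mathematical formulation) and cache the winner per architecture.

```c
/* Sketch: empirical search over one tuning parameter (the blocking factor
 * of a tiled matrix multiply); each candidate is timed and the fastest kept. */
#include <stdio.h>
#include <time.h>

#define N 512   /* matrix dimension; every candidate block size divides it */

static double A[N][N], B[N][N], C[N][N];

static void blocked_mm(int nb)
{
    for (int ii = 0; ii < N; ii += nb)
        for (int jj = 0; jj < N; jj += nb)
            for (int kk = 0; kk < N; kk += nb)
                for (int i = ii; i < ii + nb; i++)
                    for (int j = jj; j < jj + nb; j++) {
                        double s = C[i][j];
                        for (int k = kk; k < kk + nb; k++)
                            s += A[i][k] * B[k][j];
                        C[i][j] = s;
                    }
}

int main(void)
{
    int candidates[] = {8, 16, 32, 64, 128};
    int ncand = sizeof candidates / sizeof candidates[0];
    int best_nb = 0;
    double best_t = 1e30;

    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) { A[i][j] = 1.0; B[i][j] = 2.0; }

    for (int c = 0; c < ncand; c++) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) C[i][j] = 0.0;

        clock_t t0 = clock();
        blocked_mm(candidates[c]);
        double t = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("block size %3d : %.3f s\n", candidates[c], t);
        if (t < best_t) { best_t = t; best_nb = candidates[c]; }
    }
    printf("fastest block size on this machine: %d\n", best_nb);
    return 0;
}
```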