Improving Programmability, Portability and Performance in Heterogeneous Accelerator-Based Systems

R. Govindarajan
High Performance Computing Lab., SERC & CSA, IISc
[email protected]
Dec. 2015

Overview
• Introduction
• Challenges in Accelerator-Based Systems
• Addressing Programming Challenges
  – Managing parallelism across accelerator cores
  – Managing data transfer between CPU and GPU
  – Managing synergistic execution across CPU and GPU cores
• Implementation and Evaluation
• Conclusions

Moore's Law
• The number of transistors doubles every 18 months
• Historically, this has translated into a doubling of performance every 18 months
(Source: Univ. of Wisconsin)

Moore's Law: Processor Architecture Roadmap
• Superscalar → EPIC → ILP processors

Multicores: The "Right Turn"
• Instead of pushing a single core from 1 GHz to 3 GHz to 6 GHz, the performance curve turns toward 2-, 4- and 16-core designs at moderate (~3 GHz) clock rates
(Source: LinusTechTips)

Moore's Law: Processor Architecture Roadmap (contd.)
• Superscalar → EPIC → ILP processors → Accelerators

Accelerators
• Accelerator architecture example: Fermi S2050 (Source: icrontic.com)

Comparing CPU and GPU
• CPU: 8 cores @ 3 GHz, ~380 GFLOPS
• GPU: 2880 CUDA cores @ 1.67 GHz, ~1500 GFLOPS
• The CPU socket (cores C0–C7, shared last-level cache, integrated memory controller, QPI) and the GPU each have their own memory, connected over PCI-e

Accelerators: Hype or Reality?
Top 500, Nov 2015 list (www.top500.org):

Rank | Site                                        | Manufacturer | Computer                                                     | Country     | Cores     | Rmax [Pflops] | Power [MW]
1    | National SuperComputer Center in Tianjin    | NUDT         | Tianhe-2, Xeon E5-2692 + Xeon Phi 31S1P                      | China       | 3,120,000 | 33.86         | 17.80
2    | Oak Ridge National Labs                     | Cray         | Titan, Cray XK7, Opteron 6274 (2.2 GHz) + NVIDIA Kepler K20  | USA         | 560,640   | 17.59         | 8.20
3    | Lawrence Livermore Labs                     | IBM          | Sequoia, BlueGene/Q                                          | USA         | 1,572,864 | 17.17         | 7.89
4    | RIKEN Advanced Inst. for Computational Sci. | Fujitsu      | K Computer, SPARC64 VIIIfx 2.0 GHz, Tofu interconnect        | Japan       | 705,024   | 10.51         | 12.66
5    | DOE/SC/ANL                                  | IBM          | BlueGene/Q, Power BQC 16C 1.6 GHz                            | USA         | 786,432   | 8.58          | 3.94
6    | DOE/LANL/SNL                                | Cray         | Xeon E5-2698 v3 (2.3 GHz) + Aries interconnect               | USA         | 301,056   | 8.10          | –
7    | Swiss National Computing Centre             | Cray         | Xeon E5-2670 (2.6 GHz) + NVIDIA Kepler K20x                  | Switzerland | 115,984   | 6.27          | 2.35
8    | HLRS, Stuttgart                             | Cray         | Xeon E5-2680 v3 (2.5 GHz) + Aries interconnect               | Germany     | 196,608   | 5.64          | –

Emergence of Accelerators
• 104 accelerator-based systems in the Top500 list of Nov 2015
(Source: Nicole Hemsoth, HPC Wire, June 2014)

Accelerator Programming: Good News
• Emergence of programming languages for GPUs/accelerators: CUDA, OpenCL, OpenACC
  – Efficient, but low programmability (the CUDA sketch below illustrates the boilerplate involved)
• Growing collection of code bases
  – CUDA Zone
  – Packages supporting GPUs from software vendors
• Impressive performance
• What about programmability and portability while retaining performance?

Handling the Multicore Challenge
• Shared- and distributed-memory programming models: OpenMP, MPI
  – Scaling with the number of cores
• New parallel languages (partitioned global address space languages): X10, UPC, Chapel, ...
  – Higher programmability, but performance and applicability to accelerators are issues
(Source: Top500.org)
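To make the "efficient, but low programmability" point concrete, here is a minimal CUDA sketch (not from the talk; all names are illustrative) of what even a trivial vector addition requires: explicit device allocation, host-device copies, a launch configuration and explicit cleanup.

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Element-wise addition: each thread handles one element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *hA = (float *)malloc(bytes), *hB = (float *)malloc(bytes), *hC = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Explicit device allocation and host-to-device transfers: boilerplate
    // the programmer must manage in CUDA/OpenCL-level programming.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // The launch configuration (grid/block shape) is also the programmer's job.
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(dA, dB, dC, n);

    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hC[0]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}

Higher-level approaches, such as OpenACC directives or the compiler/runtime techniques described later in this talk, aim to generate exactly this kind of code automatically.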
Accelerator Programming: Boon or Bane?
• Challenges in programming accelerators:
  – Managing parallelism across various cores
    • Task, data, and thread-level parallelism
  – Efficient partitioning of work across different devices
  – Transfer of data between CPU and accelerator
  – Managing CPU-accelerator memory bandwidth efficiently
  – Efficient use of different types of memory (device memory, shared memory, constant and texture memory, ...)
  – Synchronization across multiple cores/devices

What Parallelism(s) to Exploit?

Our Approach
• Front ends: array languages (MATLAB), OpenMP, MPI, streaming languages, CUDA/OpenCL, parallel languages
• A compiler/runtime system maps these onto synergistic execution on multiple heterogeneous cores: multicores, SSE, GPUs, Xeon Phi, CellBE

Programming Heterogeneous Systems: Our Contributions
1. Managing parallelism across accelerator cores
   – Identifying and exposing implicit parallelism (MATLAB, StreamIt)
   – Software pipelining in StreamIt
2. Managing data transfer between CPU and GPU
   – Compiler scheme (MATLAB)
   – Hybrid scheme (X10-CUDA)
3. Managing synergistic execution across CPU and GPU cores
   – Profile-based partitioning (StreamIt, MATLAB)
   – Runtime mechanism for OpenCL (FluidiCL)

1. Managing Parallelism across Cores: the StreamIt Language
• A program is a hierarchical composition of three basic constructs:
  – Pipeline
  – SplitJoin
  – Feedback loop
• A program represents an infinite stream of data and the computation on it (e.g., a pipeline of filters A → B → C → D); a CUDA sketch of this filter-pipeline view follows below
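The following CUDA sketch is not StreamIt compiler output; it only illustrates, with made-up filter names and computations, how the stream-graph view maps onto a GPU: each filter becomes a kernel, each stream element maps to a thread (data parallelism), and consecutive kernels form the pipeline A → B over batches of the stream.

#include <cuda_runtime.h>

// Hypothetical filter A: scales every element of the current batch.
__global__ void filterA(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 2.0f * in[i];
}

// Hypothetical filter B: consumes A's output and adds an offset.
__global__ void filterB(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] + 1.0f;
}

int main() {
    const int batch = 1 << 16;   // one batch of the (conceptually infinite) stream
    float *dIn, *dMid, *dOut;
    cudaMalloc(&dIn,  batch * sizeof(float));
    cudaMalloc(&dMid, batch * sizeof(float));
    cudaMalloc(&dOut, batch * sizeof(float));
    cudaMemset(dIn, 0, batch * sizeof(float));   // a real program would fill this from the stream source

    int threads = 256, blocks = (batch + threads - 1) / threads;
    for (int b = 0; b < 4; ++b) {
        // Within a kernel: data parallelism across stream elements.
        // Across kernels: the pipeline structure of the stream graph.
        filterA<<<blocks, threads>>>(dIn, dMid, batch);
        filterB<<<blocks, threads>>>(dMid, dOut, batch);
    }
    cudaDeviceSynchronize();

    cudaFree(dIn); cudaFree(dMid); cudaFree(dOut);
    return 0;
}

The software-pipelining work described next goes further: it overlaps different filters, different batches, and the CPU-GPU DMA transfers in a single steady-state schedule.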
1. Managing Parallelism across Cores (contd.)
• Multithreading
  – The stream of data becomes multiple thread instances – data parallelism
  – Software pipelining of filters or tasks – pipelined parallelism
• Task partitioning between GPU and CPU cores, work scheduling and processor assignment
  – Profile-based approach for task mapping
  – Takes communication bandwidth restrictions into account
  – Task parallelism

Mapping StreamIt on GPUs [CGO 2009]
• The stream graph (e.g., A → {B, C} → D) exposes task, data and pipeline parallelism
• Resource constraints and dependency constraints must be satisfied
• An optimal schedule is obtained using an integer linear program, or a near-optimal heuristic schedule is used for efficiency
• The result is a software-pipelined execution of filter instances (A1, A2, B1, B2, ...) across cores and time

Mapping StreamIt on GPUs [LCTES 2009]
• The schedule also accounts for data transfer: filter instances are software-pipelined across the GPU, the DMA channel and the CPU

MEGHA: MATLAB Execution on GPUs
• Frontend: code simplification, SSA construction, type inference
• Kernel identification, mapping and scheduling, data transfer insertion
• Code generation and backend
• Running example (IR after simplification), with arrays A[N][N], B[N][N], C[N][N]:
  1: tempVar0 = (B + C);
  2: tempVar1 = (A + C);
  3: A_1 = tempVar0 + tempVar1;
  4: C_1 = A_1 * C;

Task Partitioning and Mapping in MATLAB Execution [PLDI 2011]
• Type and shape inferencing
• Scalar reduction to reduce storage and kernel-call overheads
• Data flow graph construction
• Kernel composition: combining IR statements into "common loops" under memory and register constraints, using clustering methods
  – Kernel 1 fuses statements 1–3; Kernel 2 computes statement 4 (C_1 = A_1 * C)
• Generated loop nest for Kernel 1 (loop order is chosen per device: i-outer/j-inner on the CPU for cache locality, j-outer/i-inner on the GPU for coalescing):
  for i = 1:N
    for j = 1:N
      tempVar0_s = B(i, j) + C(i, j);
      tempVar1_s = A(i, j) + C(i, j);
      A_1(i, j) = tempVar0_s + tempVar1_s;
    endfor
  endfor

2. Data Transfer Insertion
• Example with basic blocks b1, b2, b3:
  b1 (CPU):  1: n = 100;   2: A = rand(n, n);
  → A is next used on the GPU, so TransferToGPU(A) is inserted
  b2 (GPU):  3: for i = 1:k   4: A = A * A;   5: end
  → A is then needed on the CPU, so TransferToCPU(A) is inserted
  b3 (CPU):  6: print(A);
• Data flow analysis determines the locations of variables at the start of each basic block; no transfer is inserted where none is required
• Edge splitting is used to insert the necessary transfers
(A CUDA-flavoured sketch of the fused kernel together with such inserted transfers follows below.)

Hybrid Compiler-Runtime Approach for Memory Consistency [PACT 2012]
• Object-level memory consistency is maintained at the GPU; page-level memory consistency at the CPU
• Around each access (e.g., "Write A; Read B" on the CPU, a kernel K(A) that writes A on the GPU, then "Read A; Read B" on the CPU), a compiler-inserted check decides at run time whether the object or page is already up to date; A or B is transferred to the GPU or back to the CPU only when the check fails
• This hybrid compiler-runtime consistency scheme avoids redundant data transfers
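The sketch below is not MEGHA's generated code; it is a hand-written CUDA illustration (with assumed names) of the two ideas just described: statements 1–3 fused into one "common loop" kernel with the temporaries reduced to per-thread scalars, and transfers inserted only where the data-flow analysis says a value changes location.

#include <cuda_runtime.h>
#include <cstdlib>

// "Kernel 1": statements 1-3 fused into a single common loop.
// tempVar0 and tempVar1 are reduced to per-thread scalars (scalar reduction).
__global__ void kernel1(const float *A, const float *B, const float *C,
                        float *A_1, int N) {
    int i = blockIdx.y * blockDim.y + threadIdx.y;   // row index
    int j = blockIdx.x * blockDim.x + threadIdx.x;   // column index (fastest varying, for coalescing)
    if (i < N && j < N) {
        float tempVar0_s = B[i * N + j] + C[i * N + j];
        float tempVar1_s = A[i * N + j] + C[i * N + j];
        A_1[i * N + j] = tempVar0_s + tempVar1_s;
    }
}

int main() {
    const int N = 1024;
    const size_t bytes = (size_t)N * N * sizeof(float);
    float *hA  = (float *)calloc((size_t)N * N, sizeof(float));
    float *hB  = (float *)calloc((size_t)N * N, sizeof(float));
    float *hC  = (float *)calloc((size_t)N * N, sizeof(float));
    float *hA1 = (float *)calloc((size_t)N * N, sizeof(float));

    float *dA, *dB, *dC, *dA1;
    cudaMalloc(&dA, bytes);  cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);  cudaMalloc(&dA1, bytes);

    // "TransferToGPU": A, B, C are produced on the CPU and first used on the GPU.
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dC, hC, bytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    kernel1<<<grid, block>>>(dA, dB, dC, dA1, N);

    // "TransferToCPU": A_1 is next read on the CPU; nothing else needs to move.
    cudaMemcpy(hA1, dA1, bytes, cudaMemcpyDeviceToHost);

    cudaFree(dA); cudaFree(dB); cudaFree(dC); cudaFree(dA1);
    free(hA); free(hB); free(hC); free(hA1);
    return 0;
}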
3. Synergistic Execution on Heterogeneous Devices: Compile-Time Approaches
• Profile-based scheduling of tasks on CPU and GPU cores
• Compiler analysis of CPU- and GPU-friendly codes
• Compiler analysis for data partition and transfer
• The result is a software-pipelined execution of tasks across the CPU and GPU

3. Synergistic Execution Using Runtime Methods
• Different kernels execute better (faster) on different devices (CPU, GPU, Xeon Phi, ...)
• OpenCL allows different kernels to be executed on different devices
• But it requires programmers to specify the device for each kernel execution
• Can a single OpenCL kernel transparently utilize all devices?
• FluidiCL: a runtime system for cooperative and transparent execution of OpenCL programs on heterogeneous devices

FluidiCL Runtime: Kernel Execution [CGO 2014]
• The kernel launch is modified with a wrapper call
• Data is transferred to both devices
• Two additional buffers are kept on the GPU: one for data coming from the CPU, one to hold an original (twin) copy
• The entire grid is launched on the GPU; in addition, a few work-groups, starting from the end of the grid, are launched on the CPU

FluidiCL: CPU + GPU + Xeon Phi
• Consider the CPU + Xeon Phi as a single merged device, and the GPU as another device
• Use FluidiCL's two-device solution between the merged device and the GPU
• Use FluidiCL's two-device solution again for work distribution between the CPU and the Xeon Phi within the merged device
(A sketch of the work-splitting idea in CUDA terms follows below.)
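FluidiCL itself is an OpenCL runtime; the sketch below only illustrates the cooperative-execution idea in CUDA terms, with assumed names: the work is expressed over a 1-D index range, the GPU starts from the front of the grid while host code takes a chunk of work-groups from the end, and the results are merged afterwards.

#include <cuda_runtime.h>
#include <vector>
#include <cstdio>

// The "kernel": square each element of its work item.
__global__ void squareKernel(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * in[i];
}

// The same work item executed on the CPU (what FluidiCL would run as
// work-groups on the OpenCL CPU device).
static void squareOnCpu(const float *in, float *out, int i) {
    out[i] = in[i] * in[i];
}

int main() {
    const int n = 1 << 20, threads = 256;
    const int blocks = (n + threads - 1) / threads;
    const int cpuBlocks = blocks / 8;                  // a few work-groups, taken from the end
    const int cpuStart = (blocks - cpuBlocks) * threads;

    std::vector<float> hIn(n, 3.0f), hOut(n, 0.0f);
    float *dIn, *dOut;
    cudaMalloc(&dIn, n * sizeof(float));
    cudaMalloc(&dOut, n * sizeof(float));
    cudaMemcpy(dIn, hIn.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // GPU works on the whole grid from the front (the launch is asynchronous) ...
    squareKernel<<<blocks, threads>>>(dIn, dOut, n);

    // ... while the CPU processes the tail work-groups concurrently.
    for (int i = cpuStart; i < n; ++i)
        squareOnCpu(hIn.data(), hOut.data(), i);

    // Merge: copy back only the GPU results for the front portion; the CPU's
    // results for the tail are kept (FluidiCL resolves such overlaps in its runtime).
    cudaDeviceSynchronize();
    cudaMemcpy(hOut.data(), dOut, cpuStart * sizeof(float), cudaMemcpyDeviceToHost);

    printf("out[0] = %f, out[n-1] = %f\n", hOut[0], hOut[n - 1]);
    cudaFree(dIn); cudaFree(dOut);
    return 0;
}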