Accelerating Science with the NERSC Burst Buffer Early User Program
Recommended publications
- An Operational Perspective on a Hybrid and Heterogeneous Cray XC50 System
Sadaf Alam, Nicola Bianchi, Nicholas Cardo, Matteo Chesi, Miguel Gila, Stefano Gorini, Mark Klein, Colin McMurtrie, Marco Passerini, Carmelo Ponti, Fabio Verzelloni. CSCS – Swiss National Supercomputing Centre, Lugano, Switzerland. Email: {sadaf.alam, nicola.bianchi, nicholas.cardo, matteo.chesi, miguel.gila, stefano.gorini, mark.klein, colin.mcmurtrie, marco.passerini, carmelo.ponti, fabio.verzelloni}@cscs.ch
Abstract—The Swiss National Supercomputing Centre (CSCS) upgraded its flagship system called Piz Daint in Q4 2016 in order to support a wider range of services. The upgraded system is a heterogeneous Cray XC50 and XC40 system with Nvidia GPU accelerated (Pascal) devices as well as multi-core nodes with diverse memory configurations. Despite the state-of-the-art hardware and the design complexity, the system was built in a matter of weeks and was returned to fully operational service for CSCS user communities in less than two months, while at the same time providing significant improvements in energy efficiency. This paper focuses on the innovative features of the Piz Daint system that not only resulted in an adaptive, scalable and stable platform but also offers a very high level of operational robustness for a complex ecosystem.
From the paper: The added depth provides the necessary space for full-sized PCI-e daughter cards to be used in the compute nodes. The use of a standard PCI-e interface was done to provide additional choice and allow the systems to evolve over time [1]. Figure 1 clearly shows the visible increase in length of 37 cm between an XC40 compute module (front) and an XC50 compute module (rear).
- A Scheduling Policy to Improve 10% of Communication Time in Parallel FFT
Samar A. Aseeri (ECRC, KAUST, Thuwal, Saudi Arabia), Anando Gopal Chatterjee (CC, IIT Kanpur, Kanpur, India), Mahendra K. Verma (Dept. of Physics, IIT Kanpur, Kanpur, India), David E. Keyes (ECRC, KAUST, Thuwal, Saudi Arabia).
Abstract—The fast Fourier transform (FFT) has applications in almost every frequency-related study, e.g. in image and signal processing, and radio astronomy. It is also used to solve partial differential equations used in fluid flows, density functional theory, many-body theory, and others. Three-dimensional FFT has large time complexity O(N^3 log2 N^3). Hence, parallel algorithms are made to compute such FFTs. Popular libraries perform slab division or pencil decomposition of the N^3 data. None of the existing libraries have achieved perfect inverse scaling of time with cores (T^-1 ≈ n) because FFT requires all-to-all communication and clusters hitherto do not have physical all-to-all connections. With advances in switches and topologies, we now have the Dragonfly topology, which physically connects various units in an all-to-all fashion.
A. FFT Algorithm: The forward Fourier transform is given by the following equation:
\hat{f}(\mathbf{k}) = \sum_{k_x,k_y,k_z}^{N^3} f(x,y,z)\, e^{-i k_x x}\, e^{-i k_y y}\, e^{-i k_z z} \quad (1)
The beauty of this sum is that k_x, k_y, and k_z can be summed independently of each other. These sums are done at different [...]
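The separability noted above is exactly what slab- and pencil-decomposed parallel FFT libraries exploit: the 3D transform is computed as three sweeps of 1D FFTs, one per axis, with all-to-all transposes between sweeps so that each process holds complete lines along the axis being transformed. A minimal single-node NumPy sketch of that separability (illustrative only, not code from the paper):

```python
# Sketch: a 3D DFT equals three passes of 1D FFTs, one along each axis.
# Parallel libraries insert an all-to-all transpose between the passes.
import numpy as np

N = 32
rng = np.random.default_rng(seed=0)
f = rng.standard_normal((N, N, N)) + 1j * rng.standard_normal((N, N, N))

f_hat_direct = np.fft.fftn(f)                    # one-shot 3D transform

f_hat_staged = np.fft.fft(f, axis=2)             # 1D FFTs along z
f_hat_staged = np.fft.fft(f_hat_staged, axis=1)  # ... then along y
f_hat_staged = np.fft.fft(f_hat_staged, axis=0)  # ... then along x

assert np.allclose(f_hat_direct, f_hat_staged)
print("dimension-by-dimension FFT matches the direct 3D FFT")
```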
- Annual Report
Fellow Shareholders,
Headlined by strong growth in revenue and profitability, we had one of our best years ever in 2015, executing across each of our major focus areas and positioning our company for continued growth into the future. We achieved another year of record revenue, growing by nearly 30 percent compared to 2014. In fact, our revenue in 2015 was more than three times higher than just four years prior, driven by growth in both our addressable market and market share.
Over the last five years, we have transformed from a company solely focused on the high end of the supercomputing market, where we are now the clear market leader, to a company with multiple product lines serving multiple markets. We provide our customers with powerful computing, storage and analytics solutions that give them the tools to advance their businesses in ways never before possible.
During 2015, we installed supercomputing and storage solutions at a number of customers around the world. In the U.S., we completed the first phase of the massive new “Trinity” system at Los Alamos National Laboratory. This Cray XC40 supercomputer with Sonexion storage serves as the National Nuclear Security Administration’s flagship supercomputer, supporting all three of the NNSA’s national laboratories. We installed the first petaflop supercomputer in India at the Indian Institute of Science. In Europe, we installed numerous XC and storage solutions, including a significant expansion of the existing XC40 supercomputer at the University of Stuttgart in Germany. This new system nearly doubles their computing capacity and is currently the fastest supercomputer in Germany.
- Technical Guidelines for Applicants to PRACE 17th Call (Tier-0)
The contributing sites and the corresponding computer systems for this call are:

| System | Architecture | Site (Country) | Core hours (node hours) | Minimum request |
|---|---|---|---|---|
| Joliot Curie - SKL | Bull Sequana X1000 | GENCI@CEA (FR) | 134 million (2.8 million) | 15 million core hours |
| Joliot Curie - KNL | Bull Sequana X1000 | GENCI@CEA (FR) | 72 million (1.1 million) | 15 million core hours |
| Hazel Hen | Cray XC40 System | GCS@HLRS (DE) | 70 million (2.9 million) | 35 million core hours |
| JUWELS | Multicore cluster | GCS@JSC (DE) | 70 million (1.5 million) | 35 million core hours |
| Marconi-Broadwell | Lenovo System | CINECA (IT) | 36 million (1 million) | 15 million core hours |
| Marconi-KNL | Lenovo System | CINECA (IT) | 612 million (9 million) | 30 million core hours |
| MareNostrum | Lenovo System | BSC (ES) | 240 million (5 million) | 15 million core hours |
| Piz Daint | Cray XC50 System | CSCS (CH) | 510 million (7.5 million) | 68 million core hours; use of GPUs |
| SuperMUC / SuperMUC-NG | Lenovo NextScale / Lenovo ThinkSystem | GCS@LRZ (DE) | 105 million (3.8 million) | 35 million core hours |

The site selection is done together with the specification of the requested computing time by the two sections at the beginning of the online form. The applicant can choose one or several machines as execution system, as long as proper benchmarks and resource request justification are provided on each of the requested systems. The parameters are listed in tables: the first column describes the field in the web online form to be filled in by the applicant, and the remaining columns specify the range limits for each system.
- Cray XC40 Power Monitoring and Control for Knights Landing
Steven J. Martin, David Rush (Cray Inc., Chippewa Falls, WI, USA) {stevem,rushd}@cray.com; Matthew Kappel, Michael Sandstedt, Joshua Williams (Cray Inc., St. Paul, MN, USA) {mkappel,msandste,jw}@cray.com
Abstract—This paper details the Cray XC40 power monitoring and control capabilities for Intel Knights Landing (KNL) based systems. The Cray XC40 hardware blade design for Intel KNL processors is the first in the XC family to incorporate enhancements directly related to power monitoring feedback driven by customers and the HPC community. This paper focuses on power monitoring and control directly related to Cray blades with Intel KNL processors and the interfaces available to users, system administrators, and workload managers to access power management features. Keywords: power monitoring; power capping; RAPL; energy efficiency; power measurement; Cray XC40; Intel Knights Landing.
I. Introduction: Cray has provided advanced power monitoring and control [...] This paper is organized as follows. Section II details blade-level Hardware Supervisory System (HSS) architecture changes introduced to enable improved telemetry-gathering capabilities on Cray XC40 blades with Intel KNL processors. Section III provides information on updates to interfaces available for power monitoring and control that are new for Cray XC40 systems and the software released to support blades with Intel KNL processors. Section IV shows monitoring and control examples.
II. Enhanced HSS Blade-Level Monitoring: The XC40 KNL blade incorporates a number of enhancements over previous XC blade types that enable the collection of power telemetry in greater detail and with improved accuracy.
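For context on what the user-facing side of such interfaces can look like: Cray XC compute nodes expose node-level power and energy counters through a small sysfs directory that jobs can poll. The sketch below is a hedged illustration, assuming a /sys/cray/pm_counters directory with power (watts) and energy (joules) files whose contents are a value followed by a unit; exact file names and formats depend on the installed software release.

```python
# Hedged sketch: sample node-level power/energy from the Cray pm_counters
# sysfs interface. Paths and file formats are assumptions; adapt to the
# software release actually installed on the system.
import time
from pathlib import Path

PM_DIR = Path("/sys/cray/pm_counters")

def read_counter(name: str) -> int:
    # Counter files typically hold a value followed by a unit, e.g. "245 W".
    return int((PM_DIR / name).read_text().split()[0])

e0, t0 = read_counter("energy"), time.time()   # joules, seconds
time.sleep(5.0)
e1, t1 = read_counter("energy"), time.time()

print(f"instantaneous node power: {read_counter('power')} W")
print(f"average power over the interval: {(e1 - e0) / (t1 - t0):.1f} W")
```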
- Leveraging Burst Buffer Coordination to Prevent I/O Interference
Anthony Kougkas, Matthieu Dorier, Rob Latham, Rob Ross, and Xian-He Sun. Illinois Institute of Technology, Department of Computer Science, Chicago, IL; Argonne National Laboratory, Mathematics and Computer Science Division, Lemont, IL.
Abstract—Concurrent accesses to the shared storage resources in current HPC machines lead to severe performance degradation caused by I/O contention. In this study, we identify some key challenges to efficiently handling interleaved data accesses, and we propose a system-wide solution to optimize global performance. We implemented and tested several I/O scheduling policies, including prioritizing specific applications by leveraging burst buffers to defer the conflicting accesses from another application and/or directing the requests to different storage servers inside the parallel file system infrastructure. The results show that we mitigate the negative effects of interference and optimize the performance up to 2x depending on the selected I/O policy.
From the paper: [...] use cases is to increase the total I/O bandwidth available to the applications and optimize the input/output operations per second (i.e., IOPS). In addition to serving as a pure storage option, the notion of a burst buffer can make storage solutions smarter and more active. In this paper, we address the problem of cross-application I/O interference by coordinating burst buffer access to prevent such I/O degradation. We propose several ways to mitigate the effects of interference by preventing applications from accessing the same file system resources at the same time. We build [...]
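To make the coordination idea concrete, here is a small illustrative sketch (not the authors' implementation) of one of the policies described above: a site-wide arbiter lets one application at a time drain its burst buffer to a given storage server, while competing flushes simply stay buffered until that server is free. The class and function names are invented for the example.

```python
# Illustrative sketch of burst-buffer coordination: requests to drain data to
# a parallel-file-system server are serialized per server; deferred drains
# stay in the application's burst buffer until the server becomes free.
import heapq
from collections import defaultdict

class BurstBufferArbiter:
    def __init__(self):
        self.busy = set()                  # servers currently being drained to
        self.waiting = defaultdict(list)   # server -> heap of (priority, app)

    def request_drain(self, app, server, priority):
        if server not in self.busy:
            self.busy.add(server)
            return "drain-now"             # application may flush immediately
        heapq.heappush(self.waiting[server], (priority, app))
        return "deferred"                  # data remains in the burst buffer

    def drain_complete(self, server):
        if self.waiting[server]:
            _, next_app = heapq.heappop(self.waiting[server])
            return next_app                # this application flushes next
        self.busy.discard(server)
        return None

arbiter = BurstBufferArbiter()
print(arbiter.request_drain("app-A", "server-0", priority=0))  # drain-now
print(arbiter.request_drain("app-B", "server-0", priority=1))  # deferred
print(arbiter.drain_complete("server-0"))                      # app-B
```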
- Exploring Novel Burst Buffer Management on Extreme-Scale HPC Systems (Teng Wang)
Florida State University Libraries, Electronic Theses, Treatises and Dissertations, The Graduate School, 2017.
A dissertation submitted to the Department of Computer Science, Florida State University College of Arts and Sciences, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2017. Copyright © 2017 Teng Wang. All rights reserved. Teng Wang defended this dissertation on March 3, 2017. The members of the supervisory committee were: Weikuan Yu (Professor Directing Dissertation), Gordon Erlebacher (University Representative), David Whalley, Andy Wang, and Sarp Oral (Committee Members). The Graduate School has verified and approved the above-named committee members, and certifies that the dissertation has been approved in accordance with university requirements.
Acknowledgments (excerpt): First and foremost, I would like to express my special thanks to my advisor Dr. Weikuan Yu for his ceaseless encouragement and continuous research guidance. I came to study in the U.S. with little research background. At the beginning, I had a hard time following the classes and understanding the research basics. While everything seemed unfathomable to me, Dr. Yu kept encouraging me to position myself better, and spent plenty of time talking with me about how to quickly adapt to the course and research environment at the university. I cannot imagine a better advisor in those moments we talked with each other.
- HPC in Europe
Organisation of public HPC resources.
Context:
• Focus on publicly-funded HPC resources provided primarily to enable scientific research and development at European universities and other publicly-funded research institutes.
• These resources are also intended to benefit industrial/commercial users by: facilitating access to HPC; providing HPC training; sponsoring academic-industrial collaborative projects to exchange expertise and accelerate efficient commercial exploitation of HPC.
• Private-sector HPC resources owned and managed internally by companies, e.g. in aerospace design & manufacturing, oil & gas exploration, fintech (financial technology), etc., are not considered.
European HPC Infrastructure:
• Structured provision of European HPC facilities: Tier-0, European Centres (> petaflop machines); Tier-1, National Centres; Tier-2, Regional/University Centres.
• Tiers planned as part of an EU Research Infrastructure Roadmap, coordinated through PRACE (http://prace-ri.eu).
PRACE (Partnership foR Advanced Computing in Europe):
• International non-profit association (HQ office in Brussels).
• Established in 2010, following the ESFRI (European Strategy Forum on Research Infrastructures) roadmap, to create a persistent pan-European Research Infrastructure (RI) of world-class supercomputers.
• Mission: enable high-impact scientific discovery and engineering research and development across all disciplines to enhance European competitiveness for the benefit of society.
• Aims: provide access to leading-edge computing and data management resources and services for large-scale scientific and engineering applications at the highest performance level; provide pan-European HPC education and training; strengthen the European users of HPC in industry.
A Brief History of PRACE; PRACE Phases & Objectives:
• Preparation and implementation of the PRACE RI was supported by a series of projects funded by the EU's FP7 and Horizon 2020 funding programmes, with 530 M€ of funding for the period 2010-2015.
- On the Role of Burst Buffers in Leadership-Class Storage Systems
Ning Liu (Rensselaer Polytechnic Institute and Argonne National Laboratory), Jason Cope, Philip Carns, Robert Ross (Argonne National Laboratory, Argonne, IL, USA), Christopher Carothers (Rensselaer Polytechnic Institute, Troy, NY, USA), Gary Grider (Los Alamos National Laboratory, Los Alamos, NM, USA), Adam Crume, Carlos Maltzahn (University of California at Santa Cruz, Santa Cruz, CA, USA).
Abstract—The largest-scale high-performance computing (HPC) systems are stretching parallel file systems to their limits in terms of aggregate bandwidth and numbers of clients. To further sustain the scalability of these file systems, researchers and HPC storage architects are exploring various storage system designs. One proposed storage system design integrates a tier of solid-state burst buffers into the storage system to absorb application I/O requests. In this paper, we simulate and explore this storage system design for use by large-scale HPC systems. First, we examine application I/O patterns on an existing large-scale HPC system to identify common burst patterns. [...]
From the paper: [...] problem for storage system designers. System designers must find a solution that allows applications to minimize time spent performing I/O (i.e., achieving a high perceived I/O rate) and ensure data durability in the event of failures. However, approaching this in the traditional manner, by providing a high-bandwidth external storage system, will likely result in a storage system that is underutilized much of the time. This underutilization has already been observed on current [...]
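The design question the excerpt raises, sizing an external file system for bursty checkpoint traffic versus absorbing bursts in a faster intermediate tier, can be illustrated with a deliberately simple back-of-the-envelope model. The sketch below is not the paper's simulation; the burst sizes and bandwidths are made-up assumptions.

```python
# Toy model: time an application spends blocked on checkpoint I/O, with and
# without a burst-buffer tier. Drains to the parallel file system (PFS) are
# assumed to overlap the next compute phase. All numbers are illustrative.
def blocked_io_seconds(burst_gb, n_bursts, compute_s, bb_bw_gbs, pfs_bw_gbs):
    direct = n_bursts * burst_gb / pfs_bw_gbs   # write every burst straight to the PFS
    drain_s = burst_gb / pfs_bw_gbs             # time to retire one burst to the PFS
    assert drain_s <= compute_s, "toy model assumes the backlog drains between bursts"
    buffered = n_bursts * burst_gb / bb_bw_gbs  # app only waits for the SSD absorb
    return direct, buffered

direct, buffered = blocked_io_seconds(burst_gb=512, n_bursts=10,
                                      compute_s=3600, bb_bw_gbs=400.0, pfs_bw_gbs=50.0)
print(f"direct to PFS   : {direct:6.1f} s blocked on I/O")
print(f"via burst buffer: {buffered:6.1f} s blocked on I/O")
```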
Piz Daint”, One of the Most Powerful Supercomputers
The Swiss High-Performance Computing and Networking Thomas C. Schulthess | T. Schulthess !1 What is considered success in HPC? | T. Schulthess !2 High-Performance Computing Initiative (HPCN) in Switzerland High-risk & high-impact projects (www.hp2c.ch) Application driven co-design Phase III of pre-exascale supercomputing ecosystem Three pronged approach of the HPCN Initiative 2017 1. New, flexible, and efficient building 2. Efficient supercomputers 2016 Monte Rosa 3. Efficient applications Pascal based hybrid Cray XT5 2015 14’762 cores Upgrade to Phase II Cray XE6 K20X based hybrid 2014 Upgrade 47,200 cores Hex-core upgrade Phase I 22’128 cores 2013 Development & Aries network & multi-core procurement of 2012 petaflop/s scale supercomputer(s) 2011 2010 New 2009 building Begin construction complete of new building | T. Schulthess !3 FACT SHEET “Piz Daint”, one of the most powerful supercomputers A hardware upgrade in the final quarter of 2016 Thanks to the new hardware, researchers can run their simula- saw “Piz Daint”, Europe’s most powerful super- tions more realistically and more efficiently. In the future, big computer, more than triple its computing perfor- science experiments such as the Large Hadron Collider at CERN mance. ETH Zurich invested around CHF 40 million will also see their data analysis support provided by “Piz Daint”. in the upgrade, so that simulations, data analysis and visualisation can be performed more effi- ETH Zurich has invested CHF 40 million in the upgrade of “Piz ciently than ever before. Daint” – from a Cray XC30 to a Cray XC40/XC50. The upgrade in- volved replacing two types of compute nodes as well as the in- With a peak performance of seven petaflops, “Piz Daint” has tegration of a novel technology from Cray known as DataWarp. -
- Integration of Burst Buffer in High-Level Parallel I/O Library for Exa-Scale Computing Era
2018 IEEE/ACM 3rd International Workshop on Parallel Data Storage & Data Intensive Scalable Computing Systems (PDSW-DISCS).
Kaiyuan Hou, Reda Al-Bahrani, Esteban Rangel, Ankit Agrawal, Alok Choudhary, Wei-keng Liao (Northwestern University, Evanston, IL, USA); Robert Latham, Robert Ross (Argonne National Laboratory, Lemont, IL, USA).
Abstract—While the computing power of supercomputers continues to improve at an astonishing rate, companion I/O systems are struggling to keep up in performance. To mitigate the performance gap, several supercomputing systems have been configured to incorporate burst buffers into their I/O stack; the exact role of which, however, still remains unclear. In this paper, we examine the features of burst buffers and study their impact on application I/O performance. Our goal is to demonstrate that burst buffers can be utilized by parallel I/O libraries to [...]
From the paper: [...] have been proposed [2,3,4,5]. The most common strategy is the use of Solid State Drives (SSDs), a new type of storage device that is faster than the hard disk. Modern HPC systems are now starting to incorporate SSDs as a burst buffer that is accessible to users. While burst buffers are now becoming more common on supercomputers, the use of this new technology has not been fully explored.
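The integration pattern the abstract argues for can be sketched generically as "log writes on the burst buffer, flush to the parallel file system at close". The snippet below is an illustrative mock-up only, not PnetCDF's actual burst-buffer driver; BufferedWriter, BB_DIR and PFS_DIR are hypothetical names and paths.

```python
# Generic log-then-flush sketch: writes are absorbed into a node-local log on
# the burst buffer, then replayed to the parallel file system (PFS) at close.
import os
import pickle

BB_DIR = "/local/ssd/job1234"       # node-local burst buffer (assumed path)
PFS_DIR = "/lustre/project/run42"   # parallel file system (assumed path)

class BufferedWriter:
    """Absorb variable writes into a local log; replay them to the PFS on close."""
    def __init__(self, name):
        self.log_path = os.path.join(BB_DIR, name + ".log")
        self.dst_path = os.path.join(PFS_DIR, name)
        self.log = open(self.log_path, "ab")

    def write(self, var, offset, data):
        # Append (metadata, payload) records; no PFS traffic on the critical path.
        pickle.dump((var, offset, data), self.log)

    def close(self):
        self.log.close()
        out = {}
        with open(self.log_path, "rb") as log:   # flush phase: replay the log
            try:
                while True:
                    var, offset, data = pickle.load(log)
                    out.setdefault(var, {})[offset] = data
            except EOFError:
                pass
        with open(self.dst_path, "wb") as dst:   # single write to the PFS
            pickle.dump(out, dst)
        os.remove(self.log_path)

w = BufferedWriter("temperature.dat")
w.write("temperature", 0, [21.5, 21.7, 21.9])
w.close()   # drains the buffered records to the PFS
```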
- Analysis of the Characteristics and Development Trends of the Next-Generation of Supercomputers in Foreign Countries
This special study was carried out for RIKEN by IDC. Authors: Earl C. Joseph, Ph.D., Robert Sorensen, Steve Conway, Kevin Monroe.
IDC Opinion: Leadership-class supercomputers have contributed enormously to advances in fundamental and applied science, national security, and the quality of life. Advances made possible by this class of supercomputers have been instrumental in better predicting severe weather and earthquakes that can devastate lives and property, in designing new materials used in products, in making new energy sources pragmatic, in developing and testing methodologies to handle "big data," and in many more beneficial uses. The broad range of leadership-class supercomputers examined during this study makes it clear that there are a number of national programs planned and already in place not only to build pre-exascale systems to meet many of today's most aggressive research agendas but also to develop the hardware and software necessary to produce sustained exascale systems in the 2020 timeframe and beyond. Although our studies indicate that there is no single technology strategy that will emerge as the ideal, it is satisfying to note that the wide range of innovative and forward-leaning efforts going on around the world almost certainly ensures that the push toward more capable leadership-class supercomputers will be successful. IDC analysts recognize, however, that for almost every HPC development project examined here, the current effort within each organization is only their latest step in a long history of HPC progress and use.