Aurora: The Ultimate Accelerated Computing Solution

Total Pages: 16

File Type: PDF, Size: 1020 KB

Achieve top results with uncompromised acceleration. The Aurora® HiVe HPC systems are optimized to accelerate workloads, offering performance, energy efficiency, density and flexibility like never before. They allow the best adaptation to applications, accelerating them with the configuration that minimizes time to solution.

Why Aurora HiVe?

• Optimized for accelerated workloads. Designed to fit the application needs, the HiVe supports multiple accelerated configurations that push the workload speed-up to the top.
• Energy efficiency. An optimized architecture and direct hot liquid cooling maximize Flops/Watt and minimize the data center PUE.
• Modularity / flexibility. A choice of configurable modules allows the best fit to business needs.
• Bridge to the future. Intel and ARM-64 based nodes.
• Superb RAS. Based on Eurotech embedded/ruggedized technology.
• Liquid cooling. Entirely cooled with the Aurora Direct Hot Water Cooling.
• Silence. No fans needed.

Features

• Highest density: >1 PFlops per rack
• Extreme efficiency: >5 GFlops/Watt
• Best flexibility: a choice of modules for different configurations
• Low-power CPU: Intel and ARM-64 based nodes
• Acceleration: multiple accelerators per CPU
• Scalability: scale over hundreds of accelerators

HiVe: High Velocity HPC Systems

Aurora HiVe is a line of complete and robust HPC systems, built on an innovative supercomputing architecture that allows acceleration, optimization and flexibility. Entirely hot water cooled, highly dense and compact, and provided with a software stack and monitoring tools, the HiVe series delivers a quality, reliable HPC solution.

Applications

Typical applications running on HiVe include: Oil & Gas (seismic), Life Science (Molecular Dynamics, Quantum Chemistry, NGS), Medical Imaging, Rendering, Deep Learning, Computational Finance, Data Analytics, CAE, Physics (LQCD).

The HiVe architecture

The system building block is the Aurora HiVe module, an innovative form-factor, hot water cooled enclosure supporting different configurations of the components. Modules provide computation and control functionality (Intel or ARM processor), acceleration (Intel Phi or Nvidia GPU) and additional functionality of storage and visualization, using sub-modules in the same form factor. The modules are logical nodes of a large system. The nodes are hosted in the Aurora HiVe rack in any combination, up to 128 per cabinet. The High Speed Interconnect allows scaling to any size of system, with a density of over 1 PFlop/s DP per m2.

AURORA SOFTWARE STACK

• Programming tools and libraries: Intel Cluster Studio, MPSS, NVIDIA CUDA, GNU Compiler Collection*
• Communication libraries: Intel MPI, Open MPI, MVAPICH2
• Mathematical libraries: FFTW*
• Debuggers: TotalView, Intel debuggers, GNU GDB
• Resource managers and schedulers: Altair PBS Professional, TORQUE/MAUI, SLURM*
• Cluster management: Bright Cluster Manager, xCAT*
• File systems: Lustre, BeeGFS, NFS
• Monitoring and security: Nagios, Ganglia, Aurora monitoring, Eurotech ESS
• Operating system: CentOS*, Scientific Linux*, Red Hat, SUSE, Ubuntu*
• Drivers: Aurora drivers, accelerator drivers, OFED

*supported on ARM
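The stack above combines a resource manager (PBS Professional, TORQUE/MAUI or SLURM), an MPI library (Intel MPI, Open MPI or MVAPICH2) and the accelerator drivers. As a minimal sketch only, and assuming a Python interpreter with the mpi4py package is available on top of one of the listed MPI libraries (mpi4py itself is not part of the stack listed above), a quick check that every node in an allocation sees its accelerators could look like this:

    # Minimal sketch, not from the Eurotech documentation. Assumes Python with
    # mpi4py installed over one of the MPI libraries listed in the stack
    # (Open MPI, Intel MPI or MVAPICH2) and the NVIDIA accelerator drivers.
    import os
    import socket
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    host = socket.gethostname()

    # Count NVIDIA devices through the driver's /proc interface; the directory
    # exists only when the NVIDIA kernel driver is loaded, so fall back to 0.
    gpu_dir = "/proc/driver/nvidia/gpus"
    ngpu = len(os.listdir(gpu_dir)) if os.path.isdir(gpu_dir) else 0

    # Every rank reports its host name and GPU count; rank 0 prints the summary.
    report = comm.gather((rank, host, ngpu), root=0)
    if rank == 0:
        for r, h, n in sorted(report):
            print(f"rank {r:3d} on {h}: {n} accelerator(s) visible")

With the SLURM entry from the stack this would typically be launched with srun across the allocated nodes; with PBS Professional or TORQUE/MAUI, via mpirun inside the job script.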
Save time, energy and space

Acceleration. HiVe can be delivered with 4 GPUs or 4 coprocessors per single CPU node, a configuration that has proven suitable and capable of accelerating a wide range of applications. HiVe also supports different combinations of PCIe modules that can add acceleration, storage, video processing and other capabilities to the system.

Energy efficiency. HiVe uses low-power Intel E3-12xx v3 processors or Applied Micro X-Gene ARM-64 processors combined with GPUs or coprocessors, with all components hot water cooled to maximize efficiency at system and data center level.

Density. Thanks to direct water cooling, the innovative architecture and the barebones assembly, Aurora HiVe boasts an extraordinary computational density of more than 1 PFlop/s per rack.

EXAMPLES OF CONFIGURATION

• Processor: 1 x Intel Xeon E3-12xx v3; Accelerators: 4 x NVIDIA Tesla; Interconnects: 1 x IB FDR (2 ports)
• Processor: 1 x Intel Xeon E3-12xx v3; Accelerators: 4 x Intel Phi; Interconnects: 1 x IB FDR (2 ports)
• Processor: 1 x Applied Micro 64-bit ARM; Accelerators: 4 x NVIDIA Tesla K40; Interconnects: 1 x IB FDR (2 ports)
• Processor: Intel Xeon E3-12xx v3 or 64-bit ARM; Sub-modules: PCIe cards of different functionality (NVMe, accelerators, storage, video cards...)

New water cooling technology: silent, lighter and more compact

Aurora HiVe is entirely water cooled with the improved 2nd generation of the acknowledged Aurora Direct Hot Water Cooling. This new water cooling technology, lighter and more compact, allows higher packaging density and higher effectiveness in heat extraction, maximizing efficiency and minimizing infrastructure costs. Thanks to water cooling, the systems contain no fans or other moving parts and are completely silent.

Specifications

SYSTEM
• Energy efficiency: > 5 GFlops/Watt
• Peak performance: up to 750 TFlop/s DP per rack (Nvidia K40) - 880 TFlop/s per rack with GPU Boost; up to 1 PFlop/s DP per rack (Nvidia K80) - 1.5 PFlop/s per rack with GPU Boost; up to 4.5 PFlop/s SP per rack (Nvidia K80) with GPU Boost
• Architecture: up to 128 nodes per rack
• Cooling: Aurora Direct Liquid Cooling
• Reliable, Available, Serviceable (RAS): soldered memory, no fans, no hot spots; monitoring of system and cooling loop; hot-swap nodes; Eurotech ESS safety software
• Power (peak): 166 kW fully loaded
• Rack dimensions (H x W x D): 2200 x 880 x 1300 mm

NODE
• Peak performance: up to 5.9 TFlop/s DP (Nvidia K40) - 6.8 TFlop/s with GPU Boost; up to 7.7 TFlop/s DP (Nvidia K80) - 11.8 TFlop/s with GPU Boost; up to 23 TFlop/s SP (Nvidia K80) - 35 TFlop/s with GPU Boost
• Processor: Intel Xeon E3-12xx v3 (TDP up to 84 W) or Applied Micro ARM 64-bit processor
• Coprocessors and accelerators: NVIDIA® Tesla® K40, K80, M60; Intel® Xeon Phi™ 7120X; AMD® FirePro™; NVIDIA GeForce GTX 980M; NVMe storage cards
• Memory: soldered high-reliability memory, 32 GB DDR3 (8 GB per processor core)
• Storage: 1 x 1 TB half-slim 1.8" SATA SSD
• Interconnect and networks: 2 x FDR InfiniBand = 112 Gbit/s; 2 x 1 GigE (1 x 10 GigE on the ARM version)
• I/O front panel: 2 x USB, 1 x VGA, 2 x FDR InfiniBand
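The rack-level figures above follow directly from the node-level ones; the short illustrative calculation below (not part of the datasheet) shows how the "up to 1 PFlop/s per rack" and "> 5 GFlops/Watt" claims line up with 128 nodes per rack and the 166 kW peak power.

    # Illustrative cross-check of the datasheet figures quoted above.
    nodes_per_rack = 128
    node_peak_dp = 7.7e12         # flop/s per node, Nvidia K80, no GPU Boost
    node_peak_dp_boost = 11.8e12  # flop/s per node with GPU Boost
    rack_power_peak = 166e3       # W, rack fully loaded

    rack_peak_dp = nodes_per_rack * node_peak_dp
    rack_peak_dp_boost = nodes_per_rack * node_peak_dp_boost
    print(f"rack peak DP:            {rack_peak_dp / 1e15:.2f} PFlop/s")        # ~0.99
    print(f"rack peak DP with boost: {rack_peak_dp_boost / 1e15:.2f} PFlop/s")  # ~1.51

    # Peak performance per watt at full load, consistent with > 5 GFlops/Watt.
    print(f"efficiency: {rack_peak_dp / rack_power_peak / 1e9:.1f} GFlops/Watt")  # ~5.9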
Information in this document is provided in connection with Eurotech products. Except as provided in Eurotech's terms and conditions of sale for such products, Eurotech assumes no liability whatsoever, and Eurotech disclaims any express or implied warranty relating to sale and/or use of Eurotech products, including liability or warranties relating to fitness for a particular purpose, merchantability, or infringement of any patent, copyright, or other intellectual property right. Specifications and features subject to change without notice. All trademarks and tradenames are the property of their respective owners. Copyright © 2014 EUROTECH. All rights reserved.

For queries, quotations and development kit orders, please contact sales at [email protected] or visit www.eurotech.com/aurora.
Recommended publications
  • Petaflops for the People
    Thousands of researchers have used facilities of the Advanced Scientific Computing Research (ASCR) program and its Department of Energy (DOE) computing predecessors over the past four decades. Their studies of hurricanes, earthquakes, green-energy technologies and many other basic and applied science problems have, in turn, benefited millions of people. They owe it mainly to the capacity provided by the National Energy Research Scientific Computing Center (NERSC), the Oak Ridge Leadership Computing Facility (OLCF) and the Argonne Leadership Computing Facility (ALCF). These ASCR installations have helped train the advanced scientific workforce of the future. Postdoctoral scientists, graduate students and early-career researchers have worked there, learning to configure the world's most sophisticated supercomputers for their own various and wide-ranging projects. Cutting-edge supercomputing, once the purview of a small group of experts, has trickled down to the benefit of thousands of investigators in the broader scientific community. Today, NERSC, at Lawrence Berkeley National Laboratory...
    PETAFLOPS SPOTLIGHT (NERSC): EXTREME-WEATHER NUMBER-CRUNCHING. Certain problems lend themselves to solution by computers. Take hurricanes, for instance: they're too big, too dangerous and perhaps too expensive to understand fully without a supercomputer. Using decades of global climate data in a grid composed of 25-kilometer squares, researchers in Berkeley Lab's Computational Research Division captured the formation of hurricanes and typhoons and the extreme waves that they generate. Those same models, when run at resolutions of about 100 kilometers, missed the tropical cyclones and resulting waves, up to 30 meters high. Their findings, published in Geophysical Research Letters, demonstrated the importance of running climate models at higher resolution.
  • Ushering in a New Era: Argonne National Laboratory & Aurora
    Ushering in a New Era: Argonne National Laboratory's Aurora System (April 2015). ANL selects Intel for the world's biggest supercomputer: the two-system CORAL award extends IA leadership in extreme-scale HPC. Timeline: Cori, NERSC‡, April '14, >30 PF; Trinity, NNSA†, July '14, >40 PF; Aurora plus Theta, Argonne National Laboratory, April '15, >180 PF and >8.5 PF respectively, a >$200M award. (‡ Cray* XC* Series at the National Energy Research Scientific Computing Center (NERSC). † Cray XC Series at the National Nuclear Security Administration (NNSA).)
    The Most Advanced Supercomputer Ever Built: an Intel-led collaboration with ANL and Cray (Intel as prime contractor, Cray as subcontractor) to accelerate discovery and innovation. >180 PFLOPS (option to increase up to 450 PF); >50,000 nodes; 13 MW; 2018 delivery; 18X higher performance† and >6X more energy efficient†. †Comparison of theoretical peak double-precision FLOPS and power consumption to ANL's largest current system, MIRA (10 PF and 4.8 MW).
    Aurora | Science from Day One: extreme performance for a broad range of compute- and data-centric workloads. Focus areas: Transportation (aerodynamics), Biological Science (biofuels / disease control), Renewable Energy (wind turbine design / placement), Materials Science (batteries / solar panels), Computer Science (new programming models, e.g. Co-array Fortran). Training: Argonne Training Program on Extreme-Scale Computing. Public access: US industry and international.
    Aurora | Built on a Powerful Foundation: breakthrough technologies that deliver massive benefits. Compute: 3rd-generation Intel® Xeon Phi™ (processor code name: Knights Hill) with >17X FLOPS per node†, >12X memory bandwidth†, >30 PB/s aggregate in-package memory bandwidth and integrated Intel® Omni-Path Architecture. Interconnect: 2nd-generation Intel® Omni-Path Architecture, >20X faster†, with >500 TB/s bi-section bandwidth and >2.5 PB/s aggregate node link bandwidth. File system: Intel® Lustre* software, >3X faster†, with >1 TB/s file system throughput and >5X capacity† (>150 PB file system capacity). Source: Argonne National Laboratory and Intel.
  • Architectural Trade-Offs in a Latency Tolerant Gallium Arsenide Microprocessor
    Architectural Trade-offs in a Latency Tolerant Gallium Arsenide Microprocessor, by Michael D. Upton. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Electrical Engineering) in The University of Michigan, 1996. Doctoral Committee: Associate Professor Richard B. Brown, Co-Chairperson; Professor Trevor N. Mudge, Co-Chairperson; Associate Professor Myron Campbell; Professor Edward S. Davidson; Professor Yale N. Patt. © Michael D. Upton 1996. All Rights Reserved.
    DEDICATION: To Kelly, without whose support this work may not have been started, would not have been enjoyed, and could not have been completed. Thank you for your continual support and encouragement.
    ACKNOWLEDGEMENTS: Many people, both at Michigan and elsewhere, were instrumental in the completion of this work. I would like to thank my co-chairs, Richard Brown and Trevor Mudge, first for attracting me to Michigan, and then for allowing our group the freedom to explore many different ideas in architecture and circuit design. Their guidance and motivation combined to make this a truly memorable experience. I am also grateful to each of my other dissertation committee members: Ed Davidson, Yale Patt, and Myron Campbell. The support and encouragement of the other faculty on the project, Karem Sakallah and Ron Lomax, is also gratefully acknowledged. My friends and former colleagues Mark Rossman, Steve Sugiyama, Ray Farbarik, Tom Rossman and Kendall Russell were always willing to lend their assistance. Richard Oettel continually reminded me of the valuable support of friends and family, and the importance of having fun in your work. Our corporate sponsors: Cascade Design Automation, Chronologic, Cadence, and Metasoft, provided software and support that made this work possible.
  • TOP500 Supercomputer Sites
    TOP500 News, 04/27/18: UK Commits a Billion Pounds to AI Development. The British government and the private sector are investing close to £1 billion to boost the country's artificial intelligence sector. The investment, which was announced on Thursday, is part of a wide-ranging strategy to make the UK a global leader in AI and big data. Under the investment, known as the "AI Sector Deal," government, industry, and academia will contribute £603 million in new funding, adding to the £342 million already allocated in existing budgets. That brings the grand total to £945 million, or about $1.3 billion at the current exchange rate. The UK government is also looking to increase R&D spending across all disciplines by 2.4 percent, while also raising the R&D tax credit from 11 to 12 percent. This is part of a broader commitment to raise government spending in this area from around £9.5 billion in 2016 to £12.5 billion in 2021. The UK government policy paper that describes the sector deal meanders quite a bit, describing a lot of programs and initiatives that intersect with the AI investments, but are otherwise free-standing. (Source: http://top500.org/blog/category/feature-article/feeds/rss)
  • Partner Directory Wind River Partner Program
    PARTNER DIRECTORY: WIND RIVER PARTNER PROGRAM. The Internet of Things (IoT), cloud computing, and Network Functions Virtualization are but some of the market forces at play today. These forces impact Wind River® customers in markets ranging from aerospace and defense to consumer, networking to automotive, and industrial to medical. The Wind River® edge-to-cloud portfolio of products is ideally suited to address the emerging needs of IoT, from the secure and managed intelligent devices at the edge to the gateway, into the critical network infrastructure, and up into the cloud. Wind River offers cross-architecture support. We are proud to partner with leading companies across various industries to help our mutual customers ease integration challenges; shorten development times; and provide greater functionality to their devices, systems, and networks for building IoT. With more than 200 members and still growing, Wind River has one of the embedded software industry's largest ecosystems to complement its comprehensive portfolio. Please use this guide as a resource to identify companies that can help with your development across markets. For updates, browse our online Partner Directory.
    MARKET FOCUS: For an alphabetical listing of all members of the Wind River Partner Program, please see the Partner Index on page 139. *Clavister (p. 37), Cloudera (p. 37), *Dell (p. 45), *EnterpriseWeb...
  • Technological Forecasting of Supercomputer Development: the March to Exascale Computing
    Portland State University, PDXScholar. Engineering and Technology Management Faculty Publications and Presentations, 10-2014. Technological Forecasting of Supercomputer Development: The March to Exascale Computing. Dong-Joon Lim, Portland State University; Timothy R. Anderson, Portland State University, [email protected]; Tom Shott, Portland State University, [email protected]. Follow this and additional works at: https://pdxscholar.library.pdx.edu/etm_fac. Part of the Engineering Commons. Let us know how access to this document benefits you. Citation Details: Lim, Dong-Joon; Anderson, Timothy R.; and Shott, Tom, "Technological Forecasting of Supercomputer Development: The March to Exascale Computing" (2014). Engineering and Technology Management Faculty Publications and Presentations. 46. https://pdxscholar.library.pdx.edu/etm_fac/46. This Post-Print is brought to you for free and open access. It has been accepted for inclusion in Engineering and Technology Management Faculty Publications and Presentations by an authorized administrator of PDXScholar. Please contact us if we can make this document more accessible: [email protected].
    Technological forecasting of supercomputer development: the march to Exascale computing. Dong-Joon Lim*, Timothy R. Anderson, Tom Shott, Dept. of Engineering and Technology Management, Portland State University, USA. Abstract: Advances in supercomputers have come at a steady pace over the past 20 years. The next milestone is to build an Exascale computer; however, this requires not only speed improvement but also significant enhancements for energy efficiency and massive parallelism. This paper examines technological progress of supercomputer development to identify the innovative potential of three leading technology paths toward Exascale development: hybrid system, multicore system and manycore system.
  • Performance and Energy Analysis of the Iterative Solution of Sparse Linear Systems on Multicore and Manycore Architectures
    Performance and Energy Analysis of the Iterative Solution of Sparse Linear Systems on Multicore and Manycore Architectures. José I. Aliaga, PPAM-PEAC, Warsaw (Poland), September 2013. Universidad Jaime I (Castellón, Spain): José I. Aliaga, Maribel Castillo, Juan C. Fernández, Germán León, Joaquín Pérez, Enrique S. Quintana-Ortí. Innovative Computing Laboratory (Univ. Tennessee, USA): Hartwig Anzt.
    Concurrency and energy efficiency. In 2010 the PFLOPS level (10^15 flops/sec) was reached by JUGENE, roughly as 10^9 flops/sec at the core level (PowerPC 450, 850 MHz, 3.4 GFLOPS) times a factor of 10^1 at the node level (quad-core) times a factor of 10^5 at the cluster level (73,728 nodes). Reaching EFLOPS (10^18 flops/sec) by 2020 implies roughly 10^9.5 at the core level, 10^3 at the node level and 10^5.5 at the cluster level.
    Green500/Top500 (November 2010): the NNSA/SC Blue Gene/Q Prototype (Green500 rank 1, Top500 rank 115) delivered 65.35 TFLOPS LINPACK on 8,192 cores at 1,684.20 MFLOPS/W, which extrapolates to 593.75 MW for an exaflops system; the NUDT TH MPP (X5670 2.93 GHz 6C, NVIDIA GPU, FT-1000 8C; Green500 rank 11, Top500 rank 1) delivered 2,566.00 TFLOPS on 186,368 cores at 635.15 MFLOPS/W, or 1,574.43 MW to exaflops. For comparison, the most powerful reactor under construction in France, Flamanville (EDF, 2017, for US $9 billion), is rated at 1,630 MWe.
  • MPI on Aurora
    AN OVERVIEW OF AURORA, ARGONNE'S UPCOMING EXASCALE SYSTEM. ALCF Developers Session, Colleen Bertoni, Sudheer Chunduri, www.anl.gov. AURORA: an Intel-Cray system arriving at Argonne in 2021, with sustained performance greater than 1 exaflops.
    AURORA, a high-level view. Hardware architecture: Intel Xeon processors and Intel Xe GPUs; greater than 10 PB of total memory; Cray Slingshot network and Shasta platform. IO: uses Lustre and the Distributed Asynchronous Object Store IO (DAOS), with greater than 230 PB of storage capacity and 25 TB/s of bandwidth. Software (under the Intel oneAPI umbrella): Intel compilers (C, C++, Fortran); programming models DPC++, OpenMP, OpenCL; libraries oneMKL, oneDNN, oneDAL; tools VTune and Advisor; Python.
    Node-level hardware: the evolution of Intel GPUs (source: Intel). Intel integrated GPUs have been used for over a decade in laptops (e.g. MacBook Pro), desktops and servers. Recent and upcoming integrated generations: Gen 9, used in Skylake-based nodes, and Gen 11, used in Ice Lake-based nodes. Gen 9 double-precision peak performance is 100-300 GF, low by design due to power and space limits (layout shown for an Intel Core i7-6700K desktop processor, 91 W TDP, 122 mm). The future Intel Xe (Gen 12) GPU series will provide both integrated and discrete GPUs.
    Intel GPU building blocks (diagram): an EU (Execution Unit) contains SIMD FPUs with dispatch/instruction cache, branch and send units; a subslice groups 8 EUs with a sampler, a dataport, L1/L2 caches and 64 KB of shared local memory per subslice; a slice groups 24 EUs with an L3 data cache.
  • Performance Evaluation of a Vector Supercomputer SX-Aurora TSUBASA
    Performance Evaluation of a Vector Supercomputer SX-Aurora TSUBASA. Kazuhiko Komatsu, S. Momose, Y. Isobe, O. Watanabe, A. Musa, M. Yokokawa, T. Aoyama, M. Sato, H. Kobayashi, Tohoku University. 14 November 2018, SC18. Outline: background; overview of SX-Aurora TSUBASA; performance evaluation (benchmark performance, application performance); conclusions.
    Background: supercomputers have become important infrastructures, widely used for scientific research as well as in various industries; the Top 1 Summit system reaches 143.5 Pflop/s. There is a big gap between theoretical performance and sustained performance: compute-intensive applications stand to benefit from high peak performance, but memory-intensive applications are limited by lower memory performance. Memory performance has therefore gained more and more attention.
    A new vector supercomputer, SX-Aurora TSUBASA, is built around two important design concepts: high usability and high sustained performance. A new memory integration technology realizes the world's highest memory bandwidth, and a new architecture attaches a vector host (VH, an x86 Linux host) to vector engines (VEs, vector processors). The VE is responsible for executing an entire application, while the VH is used for processing system calls invoked by the applications.
    New execution model: in the conventional host/GPU model the host starts processing and offloads kernels to the GPU for execution; in the new model the executable module is loaded onto the VE, which transparently executes the whole application, and system calls (I/O, etc.) are offloaded to the VH, which performs the OS functions before the VE finishes processing.
  • Products & Services of the Forum Teratec 2013 Exhibitors
    Press Release, May 2013. FORUM TERATEC 2013, 25 & 26 June 2013, Palaiseau. Products & services of the Forum Teratec 2013 exhibitors. During these two days, there will be an exhibition covering the whole HPC industry. Systems manufacturers and software vendors, integrators and distributors, service providers, academic and laboratory researchers, and public- and private-sector developers will present their latest HPC innovations.
    Exhibitors list: ACTIVEON - ALINEOS - ALLIANCE SERVICES PLUS - ALLINEA SOFTWARE - ALTAIR ENGINEERING - ALTRAN - ALYOTECH - ANSYS France - BARCO - BULL - CAPS ENTREPRISE - CARRI SYSTEMS - CEA - CLUSTERVISION - COMMUNICATION & SYSTEMES - DATADIRECT NETWORKS - DELL - EMC - ENGIN SOFT - ESI GROUP - EUROTECH - EXASCALE COMPUTING RESEARCH LAB - FUJITSU - GENCI - HEWLETT PACKARD - IBM - IFPEN - INRIA - INTEL - IRT SYSTEMX - KALRAY - NAFEMS - NETAPP - NICE SOFTWARE - NVIDIA - OPENSIDES - OXALYA - PANASAS - RITTAL - ROGUE WAVE - SCILAB - SGI - SILKAN - SOGETI HIGH TECH - ST MICROELECTRONICS - SYSFERA - SYSTEMATIC - SYSTEMX IRT - TERATEC - TOTALINUX - TRANSTEC
    Here is a first outline of the products and services which you will find "live" at the show: ALINEOS, stand 39. Press contact: Fabien Devilaine, tel. +33 (0)1 64 78 57 65, email [email protected]. ALINEOS: expert in scientific computing. Since its creation, more than 600 HPC clusters (integrating up to several thousand cores) have been installed by ALINEOS in the major European research centers and laboratories, as well as in the public and private sectors. In 2012, the company strengthened its sales and technical teams by creating a department dedicated to industrial customers. As a result, it now has the resources to guide its customers through their HPC projects and operates its own datacenter hosting servers and clusters (Calcul on Demand and Benchmark).
  • Analysis of the Characteristics and Development Trends of the Next-Generation of Supercomputers in Foreign Countries
    Special Study, carried out for RIKEN: Analysis of the Characteristics and Development Trends of the Next-Generation of Supercomputers in Foreign Countries. Earl C. Joseph, Ph.D., Robert Sorensen, Steve Conway, Kevin Monroe.
    IDC OPINION: Leadership-class supercomputers have contributed enormously to advances in fundamental and applied science, national security, and the quality of life. Advances made possible by this class of supercomputers have been instrumental for better predicting severe weather and earthquakes that can devastate lives and property, for designing new materials used in products, for making new energy sources pragmatic, for developing and testing methodologies to handle "big data," and for many more beneficial uses. The broad range of leadership-class supercomputers examined during this study makes it clear that there are a number of national programs planned and already in place to not only build pre-exascale systems to meet many of today's most aggressive research agendas but to also develop the hardware and software necessary to produce sustained exascale systems in the 2020 timeframe and beyond. Although our studies indicate that there is no single technology strategy that will emerge as the ideal, it is satisfying to note that the wide range of innovative and forward-leaning efforts going on around the world almost certainly ensures that the push toward more capable leadership-class supercomputers will be successful. IDC analysts recognize, however, that for almost every HPC development project examined here, the current effort within each organization is only their latest step in a long history of HPC progress and use.
  • Leadership Computing Partnering with the ALCF Enabling
    ARGONNE LEADERSHIP COMPUTING FACILITY (ALCF): Accelerating Discovery. The ALCF provides supercomputing resources to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. As a key player in our nation's efforts to deliver future exascale computing capabilities, the ALCF is helping to advance scientific computing through a convergence of simulation, data science, and machine learning methods.
    CONNECT WITH US: alcf.anl.gov, [email protected]. We encourage you to contact us if you have any questions about getting started at the ALCF.
    Leadership Computing. Aurora: slated to be one of the world's first exascale systems, Aurora will be capable of performing more than a quintillion calculations per second. Designed in collaboration with industry leaders Intel and Cray, the ALCF's next-generation supercomputer will help ensure continued U.S. leadership in high-end computing for scientific research. Theta: Theta, the ALCF's Intel-Cray supercomputer, is the engine that drives scientific discoveries for the ALCF user community. The system provides powerful capabilities for research involving modeling and simulation, data science, and machine learning techniques.
    Partnering with the ALCF. ALCF resources are available to researchers in academia, industry, and government laboratories through competitive, peer-reviewed allocation programs supported by DOE and Argonne National Laboratory, including INCITE, ALCC, and the ALCF Data Science Program. A special allocation program is available for ECP projects. ALCF computational scientists, performance engineers, visualization experts, and support staff help users to maximize scientific productivity on the facility's supercomputers. The ALCF also provides training opportunities to prepare researchers to use its leadership-class systems for future science campaigns.