SX-Aurora TSUBASA Introduction: Vector Supercomputer Technology on a PCIe Card


What is Vector Processor? (1/2)

A vector processor can operate on large amounts of data at once and is therefore suited to fast processing of large-scale data.

▌General processor: suited for processing data in small units, such as business operations and web servers.
▌Vector processor: suited for processing data in large units at once, such as simulation, AI, and big data.

[Diagram: a scalar calculation consumes its data and produces output one element at a time; a vector calculation takes in 256 elements, operates on them in one step, and produces 256 results.]

What is Vector Processor? (2/2)

Three points distinguish GPU-like processors from vector processors: ① many small cores versus a small number of large cores, ② the balance between computation performance and data-access performance, and ③ the software development environment.

▌GPU-like processors: ① many small cores, ② a larger share of the chip devoted to computation circuits, ③ a special language (such as CUDA).
▌Vector processors: ① a small number of large cores, ② a balanced split between computation circuits and data-access circuits, ③ standard languages (C/C++/Fortran).

[Diagram: relative sizes of cores, data-access circuits, and memory in each processor type.]
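As a minimal sketch of what "processing data in large units" means in source code (the array size and values are illustrative, not taken from the slides), consider an element-wise addition. Every iteration is independent, so a vectorizing compiler can execute the loop a full vector register at a time, 256 elements per vector instruction on the Vector Engine, instead of one element per instruction:

    #include <stdio.h>

    #define N 1000000

    /* Element-wise addition: a data-parallel loop with no cross-iteration
     * dependences, so a vectorizing compiler can process it in chunks of
     * whole vector registers rather than one element at a time. */
    void add(double *a, const double *b, const double *c, int n)
    {
        for (int i = 0; i < n; i++)
            a[i] = b[i] + c[i];
    }

    int main(void)
    {
        static double a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { b[i] = i; c[i] = 2.0 * i; }
        add(a, b, c, N);
        printf("a[10] = %f\n", a[10]);   /* expect 30.0 */
        return 0;
    }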
Vector Processor – History & Future

▌Vector processors have traditionally been used to process big data, long before the term "big data" was coined.
▌The very first vector-processor-based machine, the Cray-1, was built by Seymour Cray in 1976. NEC made its first vector supercomputer, the SX-2, in 1981; the SX-2 was the first CPU ever to exceed 1 Gflops of peak performance. Soon, Fujitsu and Hitachi followed in NEC's footsteps in the high-end HPC technology segment.
▌In the 1990s, however, the computer industry changed drastically with the advent of affordable x86 processors. The eventual dominance of x86 played a key role in the democratization of HPC across academia and industry.
▌Under economic pressure, Cray eventually stopped making vector supercomputers, followed by Fujitsu and Hitachi.
▌NEC is the only remaining vendor still committed to developing and enhancing pure vector processors.

NEC SX-Series of Vector Supercomputers

A high Bytes/Flop ratio has been the core feature of the NEC SX-Series of vector supercomputers. The earlier systems were good but, like dinosaurs, large, expensive, and special. The SX-Aurora TSUBASA Vector Engine packs the vector technology experience accumulated over 35 years into a PCIe card; hardware and software innovations make it fast, strong, compact, and economical, like a falcon.

[Timeline: performance of the SX series from the SX-2, SX-3, SX-4, SX-5, SX-6, SX-7, SX-8, SX-9, SX-ACE and the Earth Simulator, Earth Simulator 2, and Earth Simulator 3 through to the SX-Aurora TSUBASA Vector Engine, 1990 to the 2010s.]

Vector Processor on a PCIe Card (world's highest memory capacity & bandwidth processor)

• 8 cores per processor
• 1.35 TB/s memory bandwidth, 48 GB memory (very high memory bandwidth)
• Standard programming with Fortran/C/C++ (no special programming model needed)
• 2.45 TF performance (double precision)
• 4.90 TF performance (single precision)

SX-Aurora Vector Engine Design Vision

Design concept: high sustained performance in real applications, and TCO reduction.

▌High sustained performance
• Vector accelerator
• High B/F, i.e. a good balance of memory bandwidth and CPU performance
▌TCO reduction
• Low power consumption
• High density, giving a smaller installation space
• Productivity (programming, code maintenance)

[Diagram: TCO is made up of hardware, software, power, machine room, etc.]

Aurora Vector Engine 1E: Specification (VE10E)

• Cores per CPU: 8
• Core performance: ~307 GF (DP), ~614 GF (SP)
• CPU performance: ~2.45 TF (DP), ~4.91 TF (SP)
• Cache: 16 MB shared, software-controllable
• Memory bandwidth: 1.35 TB/s
• Memory capacity: 48 GB (6 x HBM2)

[Diagram: bandwidth figures of 0.4 TB/s per core to the shared cache, about 3 TB/s aggregate cache bandwidth, and 1.35 TB/s to the six HBM2 modules.]

Architecture

• SX-Aurora TSUBASA = standard x86 server + Vector Engine
• Linux plus standard languages (Fortran/C/C++)
• High performance with easy programming

▌Hardware: a standard x86 server (the Vector Host, VH) hosts the Vector Engine (VE) over PCIe.
▌Software: Linux OS, an automatic vectorization compiler, and Fortran/C/C++; no special programming model such as CUDA is required.
▌Interconnect: InfiniBand for MPI, with support for direct VE-to-VE communication.

Usability: Programming Environment

Vector cross compiler with automatic vectorization and automatic parallelization:
• Fortran: F2003, F2008
• C/C++: C11 / C++14
• OpenMP: OpenMP 4.5
• Libraries: MPI 3.1, libc, BLAS, LAPACK, etc.
• Debugger: gdb, Eclipse Parallel Tools Platform
• Tools: PROGINF, FtraceViewer

Execution environment (edit and compile on the VH, run on the VE):

    $ vi sample.c
    $ ncc sample.c
    $ ./a.out

SX-Aurora TSUBASA Programming Environment

Support for the latest language standards along with GNU compatibility:

▌C/C++
• ISO/IEC 9899:2011 (aka C11)
• ISO/IEC 14882:2014 (aka C++14)
▌Fortran
• ISO/IEC 1539-1:2004 (aka Fortran 2003)
• ISO/IEC 1539-1:2010 (aka Fortran 2008)
▌OpenMP
• Version 4.5
▌Libraries
• libc
• MPI Version 3.1 (fully tuned for the Aurora architecture)
• Numeric libraries (Stencil, BLAS, FFT, LAPACK, etc.)
▌Tools
• GNU Profiler (gprof)
• GNU Debugger (gdb), Eclipse Parallel Tools Platform (PTP)
• FtraceViewer / PROGINF
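To make the compile-and-run flow shown above concrete, here is a hypothetical sample.c (the loops and values are illustrative, not taken from the slides). Both loops are either fully independent or a simple reduction, patterns an automatic vectorizer handles without source changes; after building with ncc sample.c and running ./a.out as shown, the PROGINF and FtraceViewer tools listed above can report how much of the run was vectorized:

    /* sample.c - loops a vectorizing compiler can handle automatically. */
    #include <stdio.h>

    #define N (1 << 20)

    int main(void)
    {
        static double x[N], y[N];
        double alpha = 3.0, sum = 0.0;

        for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

        /* daxpy-style update: independent iterations, ideal for vectorization */
        for (int i = 0; i < N; i++)
            y[i] = alpha * x[i] + y[i];

        /* reduction: also vectorizable (a vector sum followed by a fold) */
        for (int i = 0; i < N; i++)
            sum += y[i];

        printf("sum = %f\n", sum);   /* expect 5.0 * N */
        return 0;
    }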
NLC is a collection of mathematical libraries that supports the development of numerical simulation programs:

• ASL: a scientific library with a wide variety of algorithms for numerical and statistical calculations: linear algebra, Fourier transforms, spline functions, special functions, approximation and interpolation, numerical differentiation and integration, roots of equations, basic statistics, etc.
• ASL Unified Interface: Fourier transforms and random-number generators
• BLAS / CBLAS: basic linear algebra subprograms
• LAPACK: linear algebra package
• ScaLAPACK: scalable linear algebra package for distributed-memory parallel programs
• FFTW3 Interface: an interface library for using the Fourier-transform functions of ASL through the FFTW (version 3.x) API
• SBLAS: sparse BLAS
• HeteroSolver: direct sparse solver
• Stencil Code Accelerator: stencil code acceleration

Default Execution Model

• Accelerator (GPGPU): the application runs on the x86 processor under Linux and offloads individual functions to the accelerator; the frequent data transfers can become the performance bottleneck.
• SX-Aurora TSUBASA: the entire application runs on the Vector Engine, so there is no data-transfer bottleneck; Linux runs on the x86 processor.

VEOS Offload Models

Run the application in the way it is supposed to run. VEOS supports three execution models:

• OS offload (default): the application runs on the VE while OS functionality is provided by VEOS on the Linux/x86 node.
• VH call: an application running on the VE calls selected functions on the x86 side.
• VEO: an application running on the x86 node offloads selected kernels to the VE.

Hybrid MPI

An MPI application can run processes on the VE and the VH at the same time, communicating through the PCIe switch.

HPL using Hybrid MPI

• 8 processes on the VE: 1867 Gflops
• 8 processes on the VH: 1430 Gflops
• Hybrid MPI, 16 processes on VE and VH: 2830 Gflops

Offload I/O using Hybrid MPI

Run the I/O process on the VH using Hybrid MPI while computation continues on the VE: a dedicated process on the VH performs file-system I/O on behalf of the VE processes, which keep computing.
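A minimal sketch of this offload-I/O pattern in plain MPI (the rank layout, buffer size, and output file name are illustrative assumptions; in a Hybrid MPI job the mapping of ranks to the VH and the VEs is chosen at launch time, not in the source):

    /* io_offload.c - compute ranks send results to a dedicated I/O rank.
     * Illustrative only: in a Hybrid MPI job the I/O rank would be placed
     * on the vector host (VH) and the compute ranks on the vector engines. */
    #include <mpi.h>
    #include <stdio.h>

    #define CHUNK 1024

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int io_rank = 0;                 /* assumed to run on the VH */

        if (rank == io_rank) {
            FILE *f = fopen("result.dat", "wb");   /* hypothetical output file */
            double buf[CHUNK];
            for (int src = 1; src < size; src++) {
                MPI_Recv(buf, CHUNK, MPI_DOUBLE, src, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                fwrite(buf, sizeof(double), CHUNK, f);
            }
            fclose(f);
        } else {
            double buf[CHUNK];                 /* stand-in for VE computation */
            for (int i = 0; i < CHUNK; i++)
                buf[i] = rank * CHUNK + i;
            MPI_Send(buf, CHUNK, MPI_DOUBLE, io_rank, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }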
SX-Aurora based System Providers in North America

▌HPE: the Vector Engine card is available in DL380 and Apollo 6500 systems.
▌Colfax:
• Over 30 years of experience in delivering custom and HPC solutions
• Extensive customer base, especially academia and research labs
• Specialized HPC expertise: solution design and development, HPC research and training, hybrid system design
• The NEC and Colfax partnership aims to provide "personal supercomputing" power for leading-edge development

Performance Benchmarks: DGEMM Performance

Aurora 1E (a 2019 CPU) delivers DGEMM performance similar to the A64FX (a 2020 CPU).

[Bar chart: DGEMM single-node performance in GFLOPS for Xeon Gold 6148 (2 CPUs, 2016), Tesla V100*1 (1 GPU, 2017), Aurora 1E 10AE (1 CPU, 2019), and A64FX*2 (1 CPU, 2020); values shown on the chart: 6627, 2398, 2500, 2104.]

*1 AMD NEXT HORIZON, http://ir.amd.com/static-files/ef99f84b-e1ad-4e12-8058-f3488f4c47b7
*2 The post-K project and the Fujitsu ARM-SVE enabled A64FX processor, https://indico.math.cnrs.fr/event/4705/attachments/2362/2942/CEA-RIKEN-school-19013.pdf

Himeno Benchmark

Aurora 1E (a 2019 CPU) delivers Himeno performance similar to the A64FX (a 2020 CPU).

[Bar chart: Himeno benchmark single-node performance (size XL) in GFLOPS for Xeon Gold 6148 (2 CPUs, 2016), Tesla V100*1 (1 GPU, 2017), Aurora 1E 10AE (1 CPU, 2019), and A64FX*2 (1 CPU, 2020); values shown on the chart: 339, 346, 305, 82.]

*1 Performance evaluation of a vector supercomputer SX-Aurora TSUBASA, https://dl.acm.org/citation.cfm?id=3291728
*2 Supercomputer "Fugaku", formerly known as Post-K, https://www.fujitsu.com/global/Images/supercomputer-fugaku.pdf

STREAM Benchmark

Aurora 1E (a 2019 CPU) delivers STREAM Triad bandwidth more than 30% higher than its competitors.

[Bar chart: STREAM Triad single-node performance in GB/s for Xeon Gold 6148 (2 CPUs, 2016), Tesla V100 (1 GPU, 2017), Aurora 1E 10AE (1 CPU, 2019), and A64FX (1 CPU, 2020); values shown on the chart: 1084, 830, 830, 180.]

*1 The post-K project and the Fujitsu ARM-SVE enabled A64FX processor, https://indico.math.cnrs.fr/event/4705/attachments/2362/2942/CEA-RIKEN-school-19013.pdf

HPC Use Case: Stencil Code Acceleration for Oil & Gas (Stencil Code Overview)

Seismic Imaging

▌Reverse Time Migration (RTM)
• A typical method for seismic imaging.
• The most costly part is the "stencil code".
• In the case of 3D RTM, the stencil code consumes about 90% of the total execution time even when using 40 threads.

[Chart: elapsed-time breakdown (stencil code, other computation, I/O) of 3D RTM on 2 x Xeon Gold 6148 (Skylake, 2.40 GHz, 40 cores); dataset: Sandia/SEG Salt Model, 45-shot subset. Accompanied by a 3D RTM seismic-imaging example image.]

Stencil Code

▌What is "stencil code"?
• A procedure pattern that frequently appears in scientific simulations, image processing, signal processing, deep learning, etc.
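As a hedged illustration of that pattern (a generic 3D 7-point Jacobi sweep, not NEC's RTM kernel), the code below updates every interior grid point from its six nearest neighbours. The iterations are independent and the memory accesses are streaming, which is exactly the kind of memory-bandwidth-bound work that high-B/F vector hardware targets:

    /* stencil3d.c - textbook 3D 7-point stencil sweep (illustrative only). */
    #include <stdio.h>
    #include <stdlib.h>

    #define NX 64
    #define NY 64
    #define NZ 64
    #define IDX(i, j, k) ((size_t)(i) * NY * NZ + (size_t)(j) * NZ + (k))

    /* One Jacobi-style sweep: each interior point is recomputed from its
     * six neighbours; the inner loops are data-parallel and vectorizable. */
    static void sweep(const double *in, double *out, double c0, double c1)
    {
        for (int i = 1; i < NX - 1; i++)
            for (int j = 1; j < NY - 1; j++)
                for (int k = 1; k < NZ - 1; k++)
                    out[IDX(i, j, k)] =
                        c0 * in[IDX(i, j, k)] +
                        c1 * (in[IDX(i - 1, j, k)] + in[IDX(i + 1, j, k)] +
                              in[IDX(i, j - 1, k)] + in[IDX(i, j + 1, k)] +
                              in[IDX(i, j, k - 1)] + in[IDX(i, j, k + 1)]);
    }

    int main(void)
    {
        size_t n = (size_t)NX * NY * NZ;
        double *a = calloc(n, sizeof(double));
        double *b = calloc(n, sizeof(double));
        if (!a || !b) return 1;

        a[IDX(NX / 2, NY / 2, NZ / 2)] = 1.0;   /* point source */
        for (int t = 0; t < 10; t++) {          /* a few time steps, swapping buffers */
            sweep(a, b, 0.4, 0.1);
            double *tmp = a; a = b; b = tmp;
        }
        printf("center value after 10 sweeps: %e\n",
               a[IDX(NX / 2, NY / 2, NZ / 2)]);
        free(a); free(b);
        return 0;
    }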