Supercomputing History and the Immortality of Now

Total Pages: 16

File Type: pdf, Size: 1020 KB

VIRTUAL ROUNDTABLE

The 30th Anniversary of the Supercomputing Conference: Bringing the Future Closer—Supercomputing History and the Immortality of Now

Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, and University of Manchester
Vladimir Getov, University of Westminster
Kevin Walsh, University of California, San Diego

A panel of experts discusses historical reflections on the past 30 years of the Supercomputing (SC) conference, its leading role for the professional community, and some exciting future challenges.

Supercomputing's nascent era was born of the late 1940s and 1950s Cold War and increasing tensions between the East and the West; the first installations—which demanded extensive resources and manpower beyond what private corporations could provide—were housed in university and government labs in the United States, United Kingdom, and the Soviet Union. Following the Institute for Advanced Study (IAS) stored-program computer architecture, these so-called von Neumann machines were implemented as the MANIAC at Los Alamos Scientific Laboratory, the Atlas at the University of Manchester, the ILLIAC at the University of Illinois, the BESM machines at the Soviet Academy of Sciences, the Johnniac at The Rand Corporation, and the SILLIAC in Australia. By 1955, private industry had joined in to support these initiatives: the IBM user group SHARE was formed; the Digital Equipment Corporation was founded in 1957; IBM built an early wide area computer network, SAGE (Semi-Automatic Ground Environment); and the RCA 501, using all-transistor logic, was launched in 1958.

MACHINES OF THE FUTURE

"The advanced arithmetical machines of the future will be electrical in nature, and they will perform at 100 times present speeds, or more. Moreover, they will be far more versatile than present commercial machines, so that they may readily be adapted for a wide variety of operations. They will be controlled by a control card or film, they will select their own data and manipulate it in accordance with the instructions thus inserted, they will perform complex arithmetical computations at exceedingly high speeds, and they will record results in such form as to be readily available for distribution or for later further manipulation."
—"As We May Think," by Vannevar Bush (The Atlantic, July 1945)

In 1988, the first IEEE/ACM SC conference was held in Kissimmee, Florida. At that time, custom-built vector mainframes were the norm; the Cray Y-MP was a leading machine of the day, with a peak performance of 333 Mflops per processor, and could be equipped with up to eight processors. Users typically accessed the machine over a dumb terminal at 9600 baud; there was no visualization; a single programmer would code and develop everything; and there were few tools or software libraries, so we relied on remote batch job submission.

Today, as we approach SC's 30th anniversary, commodity massively parallel platforms are the norm. The HPC community has developed parallel debuggers and rich tool sets for code sharing and reuse. Remote access to several supercomputers at once is made possible by scientific gateways, accessed over 10- and 100-Gbps networks. High-performance desktops with scientific visualization capabilities are the chief methods we use to cognitively grasp the quantity of data produced by supercomputers. The Cold War arms race has been eclipsed by an HPC race, and we are living through a radical refactoring of the time it takes to create new knowledge—and, concurrently, the time it takes to learn how much we don't know. The IEEE/ACM SC conference animates the community and allows us to see what knowledge changes, and what knowledge stays stable over time.

To review and summarize the key developments and achievements of HPC over the past 30 years, we have invited six well-known experts—Gordon Bell, Jack Dongarra, Bill Johnston, Horst Simon, Erich Strohmaier, and Mateo Valero—all of whom offer complementary perspectives on the past three to four decades of supercomputing. Their histories connect us in community.

COMPUTER: Looking back at the early years of electronic digital computers, what do you see as the turning points and defining eras of supercomputing that help chronicle the growth of the HPC community and the SC conference at its 30th anniversary?

GORDON BELL: In 1961, after I visited Lawrence Livermore National Laboratory with the Univac LARC, and Manchester University with the Atlas prototype, I began to see what supercomputing was about from a design and user perspective—namely, it was designing at the edge of the feasibility envelope using every known technique. The IBM Stretch was one of these three 1960s computers aimed at achieving over an order of magnitude performance increase over the largest commercial state-of-the-art computers, doing everything known to be feasible for performance (for example, parallel units, lookahead, and speculative execution). Pre-SC88, there were trials and failures, such as violations of Amdahl's Law[1] by Single Instruction Multiple Data (SIMD) and other architectures, or pushing technology that failed to reach critical production (such as GaAs). At the beginning of the first generation of commercial computing, Seymour Cray joined Control Data Corporation as a founder and quickly demonstrated a proclivity for building the highest-performance computer of the day; in essence, he defined and established the tri-decade of supercomputing and its market.

After the initial CDC 1604 (1960) introduction, Seymour proceeded to build computers without peer, including the CDC 6600 in 1965 and the CDC 7600 in 1969. He then formed Cray Research to introduce the vector processor Cray 1 (1976), which was followed by the multiprocessor Cray SMP (1985), the YMP (1988), and, last, the C90 (1991). From 1965 through 1991, the Cray architectures defined and dominated computer design and the market, which included CDC, Cray Research, Fujitsu, Hitachi, IBM, and NEC.

MATEO VALERO: I see the rise, fall, and resurgence of vector processors as the turning points of supercomputing. Vector processors execute instructions whose operands are complete vectors. This simple idea was so revolutionary, and the implementations were so efficient, that the resulting vector supercomputers reigned supreme as the fastest computers in the world from 1975 to 1995. Vector processors exploit data-level parallelism elegantly, they could hide memory latency very well, and they are energy efficient since they do not need to fetch and decode as many instructions. After some early prototypes, the vector supercomputing era started with Seymour Cray and his Cray 1.[2] The company continued with the Cray 2, Cray X-MP, and Cray T90, and finalized with the Cray X1 and X1E; Cray Research was building vector processors for 30 years. The implementations were very efficient, partially due to the radical technologies used at the time, such as transistor-based memory instead of magnetic core and extra-fast Emitter-Coupled Logic (ECL) instead of CMOS, which enabled a very high clock rate. This landscape was soon made more heterogeneous with the vector supercomputer implementations from Japan by Hitachi, NEC, and Fujitsu. The vector supercomputers introduced many innovations, from massively multi-banked high-bandwidth memory systems to multiprocessors with fast processor …

… the 500 largest installations and some of their main system characteristics are published twice a year. Its simplicity has invited many critics but has also allowed it to remain useful during the advent and reigns of giga-, tera-, and petascale computing. Systems are ranked by their performance on the Linpack benchmark,[3] which solves a dense system of linear equations. Over time, the data collected allowed early identification and quantification of many trends related to computer architectures used in HPC.[4,5]

COMPUTER: How was the Linpack benchmark selected as a measure for the TOP500 ranking of supercomputers?

ERICH STROHMAIER: In the mid-1980s, Hans W. Meuer started a small and focused annual conference series about supercomputing, which soon evolved to become the International Supercomputing Conference (www.isc-hpc.com). During the opening sessions of these conferences, he used to present statistics collected from vendors and colleagues about the numbers, locations, and manufacturers of supercomputers worldwide. Initially, it was relatively obvious which systems should be considered supercomputers. This label was reserved for vector processing systems from companies such as Cray, CDC, Fujitsu, NEC, and Hitachi, which competed in the market and each claimed theirs had …

… market. A new criterion for determining which systems could be counted as supercomputers was needed. After two years of experimentation with various metrics and approaches, Hans W. Meuer and I convinced ourselves that the best long-term solution was to maintain a list of the systems in question, ranking them based on the actual performance the system had achieved when running the Linpack benchmark. Based on our previous market studies, we were confident that we could assemble a list of at least 500 systems that we had previously considered supercomputers. This determined our cutoff.

COMPUTER: There were other drivers beyond Cold War competition by the late 1970s and early 1980s, including the revolution in personal computing. What events spurred the funding of supercomputing in particular?

BELL: In 1982, Japan's Ministry of Trade and Industry established the Fifth Generation Computer Systems research program to create an AI computer. This stimulated DARPA's decade-long Strategic Computing Initiative (SCI) in 1983 to advance computer hardware and artificial intelligence. SCI funded a number of designs, including Thinking Machines, which was a key demonstration for displacing transactional memory supercomputers.
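The Linpack measurement Strohmaier describes can be illustrated with a minimal sketch in Python, assuming NumPy's LAPACK-backed dense solver as a stand-in for the real HPL benchmark; the function name, problem size, and tolerance here are illustrative choices, not part of the TOP500 methodology.

```python
import time
import numpy as np

def linpack_gflops(n: int = 2000, seed: int = 0) -> float:
    """Time a dense Ax = b solve and report Gflops, Linpack-style."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)  # LU factorization plus triangular solves
    elapsed = time.perf_counter() - t0

    # Scaled residual check, mirroring the benchmark's verification step.
    r = np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x))
    assert r < 1e-10, f"residual too large: {r:e}"

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2  # conventional Linpack operation count
    return flops / elapsed / 1e9

if __name__ == "__main__":
    print(f"Solved an n=2000 dense system at {linpack_gflops():.1f} Gflops")
```

The 2/3·n³ + 2n² operation count is the benchmark's standard convention, so the printed Gflops figure is comparable in spirit, though not in rigor, to a TOP500 entry.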
Recommended publications
  • User's Manual
    Management Number: OUCMC-super-技術-007-06
    National University Corporation Osaka University, Cybermedia Center, User's Manual
    NEC First Government & Public Solutions Division
    Summary
    1. System Overview
      1.1. Overall System View
      1.2. Three Types of Computing Environments
        1.2.1. General-purpose CPU environment
        1.2.2. Vector computing environment
        1.2.3. GPGPU computing environment
      1.3. Three Types of Storage Areas
        1.3.1. Features
        1.3.2. Various Data Access Methods
      1.4. Three Types of Front-end Nodes
        1.4.1. Features
        1.4.2. HPC Front-end
        1.4.3. HPDA Front-end …
  • Science-Driven Development of Supercomputer Fugaku
    Special Contribution: Science-driven Development of Supercomputer Fugaku
    Hirofumi Tomita, Flagship 2020 Project, Team Leader, RIKEN Center for Computational Science
    In this article, I would like to take a historical look at activities on the application side during supercomputer Fugaku's climb to the top of supercomputer rankings, as seen from a vantage point in early July 2020. I would like to describe, in particular, the change in mindset that took place on the application side along the path toward Fugaku's top ranking in four benchmarks in 2020, which all began with the Application Working Group and the Computer Architecture/Compiler/System Software Working Group in 2011. Somewhere along this path, the application side joined forces with the architecture/system side based on the keyword "co-design." During this journey, there were times when our opinions clashed and when efforts to solve problems came to a halt, but there were also times when we took bold steps in a unified manner to surmount difficult obstacles. I will leave the description of specific technical debates to other articles, but here, I would like to describe the flow of Fugaku development from the application side based heavily on a "science-driven" approach, along with some of my personal opinions. Actually, we are only halfway along this path. We look forward to the many scientific achievements and solutions to social problems that Fugaku is expected to bring about.
    1. Introduction
    On June 22, 2020, the supercomputer Fugaku became the first supercomputer in history to simultaneously rank No. 1 … In this article, I would like to take a look back at the flow along the path toward Fugaku's top ranking as seen from the application side from a vantage point …
  • Unit 18. Supercomputers: Everything You Need to Know About
    Unit 18. Supercomputers: Everything you need to know about
    Supercomputers have a high level of computing performance compared to a general-purpose computer. In this post, we cover all details of supercomputers like history, performance, application, etc. We will also see the top 3 supercomputers and the National Supercomputing Mission.
    What is a supercomputer? A computer with a high level of computing performance compared to a general-purpose computer, with performance measured in FLOPS (floating point operations per second). Great speed and great memory are the two prerequisites of a supercomputer. The performance is generally evaluated in petaflops (1 followed by 15 zeros); memory is averaged around 250,000 times that of the normal computer we use on a daily basis. Supercomputers are housed in large clean rooms with high air flow to permit cooling, and are used to solve problems that are too complex and huge for standard computers.
    History of Supercomputers in the World: Most of the computers on the market today are smarter and faster than the very first supercomputers, and hopefully today's supercomputers will turn into future computers by repeating this history of innovation. The first supercomputer was built in 1957 for the United States Department of Defense by Seymour Cray at Control Data Corporation (CDC). The CDC 1604 was one of the first computers to replace vacuum tubes with transistors. In 1964, Cray's CDC 6600 replaced Stretch as the fastest computer on earth with 3 million floating-point operations per second (FLOPS).
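A quick arithmetic sketch of the FLOPS scale mentioned above; the 100-gigaflops desktop figure is a round illustrative number, not a measurement of any specific machine.

```python
# Round illustrative figures, not measurements.
UNITS = {"megaflops": 1e6, "gigaflops": 1e9, "teraflops": 1e12, "petaflops": 1e15}

def to_flops(value: float, unit: str) -> float:
    """Convert a value in the given unit to raw floating-point operations/s."""
    return value * UNITS[unit]

# One petaflops (10**15 operations/s) vs. a 100-gigaflops desktop:
ratio = to_flops(1, "petaflops") / to_flops(100, "gigaflops")
print(f"a 1-Pflops machine outruns a 100-Gflops desktop by {ratio:,.0f}x")  # 10,000x
```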
  • DMI-HIRLAM on the NEC SX-6
    DMI-HIRLAM on the NEC SX-6
    Maryanne Kmit, Meteorological Research Division, Danish Meteorological Institute, Lyngbyvej 100, DK-2100 Copenhagen Ø, Denmark
    11th Workshop on the Use of High Performance Computing in Meteorology, 25-29 October 2004
    Outline
    • Danish Meteorological Institute (DMI)
    • Applications run on NEC SX-6 cluster
    • The NEC SX-6 cluster and access to it
    • DMI-HIRLAM - geographic areas, versions, and improvements
    • Strategy for utilization and operation of the system
    DMI - the Danish Meteorological Institute
    DMI's mission:
    • Making observations
    • Communicating them to the general public
    • Developing scientific meteorology
    DMI's responsibilities:
    • Serving the meteorological needs of the kingdom of Denmark
    • Denmark, the Faroes and Greenland, including territorial waters and airspace
    • Predicting and monitoring weather, climate and environmental conditions, on land and at sea
    Applications running on the NEC SX-6 cluster
    Operational usage:
    • Long DMI-HIRLAM forecasts 4 times a day
    • Wave model forecasts for the North Atlantic, the Danish waters, and for the Mediterranean Sea 4 times a day
    • Trajectory particle model and ozone forecasts for air quality
    Research usage:
    • Global climate simulations
    • Regional climate simulations
    • Research and development of operational and climate codes
    [Cluster interconnect diagram: internal and external GE switches connecting nodes nec1-nec8 and front-ends neci1/neci2 on a closed internal network; 4×1 Gbit/s FC links per node through McData fibre channel switches (azusa/asama servers) to disk arrays Disk#0-Disk#3]
    Cluster specifications
    • NEC SX-6 (nec[12345678]): 64M8 (8 vector nodes with 8 CPUs each) …
  • Providing a Robust Tools Landscape for CORAL Machines
    Providing a Robust Tools Landscape for CORAL Machines
    4th Workshop on Extreme Scale Programming Tools @ SC15 in Austin, Texas, November 16, 2015
    Dong H. Ahn, Lawrence Livermore National Lab; Michael Brim, Oak Ridge National Lab; Scott Parker, Argonne National Lab; Gregory Watson, IBM; Wesley Bland, Intel
    CORAL: Collaboration of ORNL, ANL, LLNL
    Current DOE Leadership Computers: Titan (ORNL, 2012-2017), Sequoia (LLNL, 2012-2017), Mira (ANL, 2012-2017)
    Objective: Procure 3 leadership computers to be sited at Argonne, Oak Ridge and Lawrence Livermore in 2017.
    Leadership Computers: RFP requests >100 PF, 2 GiB/core main memory, local NVRAM, and science performance 4x-8x Titan or Sequoia
    Approach
    • Competitive process - one RFP (issued by LLNL) leading to 2 R&D contracts and 3 computer procurement contracts
    • For risk reduction and to meet a broad set of requirements, 2 architectural paths will be selected, and Oak Ridge and Argonne must choose different architectures
    • Once selected, multi-year Lab-Awardee relationship to co-design computers
    • Both R&D contracts jointly managed by the 3 Labs
    • Each lab manages and negotiates its own computer procurement contract, and may exercise options to meet their specific needs
    CORAL Innovations and Value
    IBM, Mellanox, and NVIDIA awarded $325M in U.S. Department of Energy CORAL contracts
    • Scalable system solution - scale up, scale down - to address a wide range of application domains
    • Modular, flexible, cost-effective
    • Directly leverages OpenPOWER partnerships and IBM's Power system roadmap
    • Air and water cooling
    • Heterogeneous …
  • Summit Supercomputer
    Cooling System Overview: Summit Supercomputer
    David Grant, PE, CEM, DCEP, HPC Mechanical Engineer, Oak Ridge National Laboratory; Corresponding Member of ASHRAE TC 9.9; Infrastructure Co-Lead, EEHPCWG
    ORNL is managed by UT-Battelle, LLC for the US Department of Energy
    Today's Presentation:
    • System Description
    • Cooling System Components
    • Cooling System Performance
    #1 on the TOP500 list (June 2018 and November 2018)
    Feature comparison, Titan vs. Summit:
    • Application performance: Baseline vs. 5-10x Titan
    • Number of nodes: 18,688 vs. 4,608
    • Node performance: 1.4 TF vs. 42 TF
    • Memory per node: 32 GB DDR3 + 6 GB GDDR5 vs. 512 GB DDR4 + 96 GB HBM2
    • NV memory per node: 0 vs. 1600 GB
    • Total system memory: 710 TB vs. >10 PB DDR4 + HBM2 + non-volatile
    • System interconnect: Gemini (6.4 GB/s) vs. dual-rail EDR-IB (25 GB/s)
    • Interconnect topology: 3D torus vs. non-blocking fat tree
    • Bi-section bandwidth: 15.6 TB/s vs. 115.2 TB/s
    • Processors per node: 1 AMD Opteron™ + 1 NVIDIA Kepler™ vs. 2 IBM POWER9™ + 6 NVIDIA Volta™
    • File system: 32 PB, 1 TB/s, Lustre® vs. 250 PB, 2.5 TB/s, GPFS™
    • Peak power consumption: 9 MW vs. 13 MW
    System Description - How do we cool it?
    • >100,000 liquid connections
    System Description - What's in the data center?
    • Passive RDHXs: 215,150 ft² (19,988 m²) of total heat exchange surface (>20x the area of the data center). With a 70°F (21.1°C) entering water temperature, the room averages ~73°F (22.8°C) with ~3.5 MW load and ~75.5°F (23.9°C) with ~10 MW load. …
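Two back-of-envelope checks on the table above, as a sketch: the aggregate peak follows directly from the node count and per-node performance quoted, while the cooling estimate assumes a 10 K water temperature rise, a value picked for illustration rather than taken from the slides.

```python
# Aggregate peak from the table: node count x per-node peak.
nodes, tf_per_node = 4_608, 42.0
print(f"peak ~= {nodes * tf_per_node / 1_000:.1f} PF")  # ~193.5 PF

# Water flow needed to carry away the 13 MW peak load: Q = m_dot * c_p * dT.
q_watts = 13e6   # peak power from the table, W (i.e., J/s)
c_p = 4186.0     # specific heat of water, J/(kg*K)
delta_t = 10.0   # assumed supply/return temperature rise, K
m_dot = q_watts / (c_p * delta_t)
print(f"required flow ~= {m_dot:.0f} kg/s")  # ~311 kg/s at the assumed dT
```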
  • Hardware Technology of the Earth Simulator 1
    Special Issue on High Performance Computing: Architecture and Hardware for HPC
    Hardware Technology of the Earth Simulator
    By Jun INASAKA, Rikikazu IKEDA, Kazuhiko UMEZAWA, Ko YOSHIKAWA, Shitaka YAMADA and Shigemune KITAWAKI
    ABSTRACT: This paper describes the hardware technologies of the supercomputer system "The Earth Simulator," which has the highest performance in the world and was developed by ESRDC (the Earth Simulator Research and Development Center)/NEC. The Earth Simulator has adopted NEC's leading-edge technologies, such as the most advanced device technology and process technology, to develop a high-speed and highly integrated LSI resulting in a one-chip vector processor. By combining this LSI technology with various advanced hardware technologies, including high-density packaging, high-efficiency cooling technology against high heat dissipation, and high-density cabling technology for the internal node shared memory system, ESRDC/NEC has been successful in implementing the supercomputer system with its Linpack benchmark performance of 35.86 TFLOPS, which is the world's highest performance (http://www.top500.org/: the 20th TOP500 list of the world's fastest supercomputers).
    KEYWORDS: The Earth Simulator, Supercomputer, CMOS, LSI, Memory, Packaging, Build-up Printed Wiring Board (PWB), Connector, Cooling, Cable, Power supply
    1. INTRODUCTION
    The Earth Simulator has adopted NEC's most advanced CMOS technologies to integrate vector and parallel processing functions for realizing a super… The MMU (Main Memory Unit) packages are all interconnected with fine coaxial cables to minimize the distances between them and maximize the system packaging density corresponding to the high-performance system.
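A quick sanity check of the quoted 35.86 TFLOPS against the machine's theoretical peak; the configuration figures (640 nodes, 8 vector processors per node, 8 GFLOPS each) are from the published Earth Simulator literature rather than this excerpt, so treat the sketch accordingly.

```python
# Peak figures are the published Earth Simulator configuration,
# not numbers stated in the excerpt above.
nodes, procs_per_node, gflops_per_proc = 640, 8, 8.0
peak_tflops = nodes * procs_per_node * gflops_per_proc / 1_000  # 40.96 TFLOPS
linpack_tflops = 35.86  # quoted in the abstract
print(f"HPL efficiency ~= {linpack_tflops / peak_tflops:.1%}")  # ~87.5%
```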
  • Shared-Memory Vector Systems Compared
    Shared-Memory Vector Systems Compared
    Robert Bell, CSIRO, and Guy Robinson, Arctic Region Supercomputing Center
    ABSTRACT: The NEC SX-5 and the Cray SV1 are the only shared-memory vector computers currently being marketed. This compares with at least five models a few years ago (J90, T90, SX-4, Fujitsu and Hitachi), with IBM, Digital, Convex, CDC and others having fallen by the wayside in the early 1990s. In this presentation, some comparisons will be made between the architecture of the survivors, and some performance comparisons will be given on benchmark and applications codes, and in areas not usually presented in comparisons, e.g. file systems, network performance, gzip speeds, compilation speeds, scalability and tools and libraries.
    KEYWORDS: SX-5, SV1, shared-memory, vector systems
    1. Introduction
    HPCCC: The Bureau of Meteorology and CSIRO in Australia established the High Performance Computing and Communications Centre (HPCCC) in 1997, to provide larger computational facilities than either party could acquire separately. The main initial system was an NEC SX-4, which has now been replaced by two NEC SX-5s. The current system is a dual-node SX-5/24M with 224 Gbyte of memory, and this will become an SX-5/32M with 224 Gbyte in July 2001. … averaged 95% on the larger node, and 92% on the smaller node. The SXes are supported by data storage systems: SAM-FS for the Bureau, and DMF on a Cray J90se for CSIRO.
    ARSC: The mission of the Arctic Region Supercomputing Centre is to support high performance computational research in science and engineering with an emphasis on high latitudes and the Arctic. ARSC provides high performance computational, visualisation and data storage …
  • Seymour Cray: the Father of World Supercomputer
    History Research 2019; 7(1): 1-6
    http://www.sciencepublishinggroup.com/j/history
    doi: 10.11648/j.history.20190701.11
    ISSN: 2376-6700 (Print); ISSN: 2376-6719 (Online)
    Seymour Cray: The Father of World Supercomputer
    Si Hongwei, Department of the History of Science, Tsinghua University, Beijing, China
    To cite this article: Si Hongwei. Seymour Cray: The Father of World Supercomputer. History Research. Vol. 7, No. 1, 2019, pp. 1-6. doi: 10.11648/j.history.20190701.11
    Received: May 14, 2019; Accepted: June 13, 2019; Published: June 26, 2019
    Abstract: Seymour R. Cray was an American engineer and supercomputer developer who designed a series of the fastest computers in the world in the 1960s-1980s. The difference between Cray and most other corporate engineers is that he often won those business battles. His success was attributable to his existence in a postwar culture where engineers were valued, and he was also part of an extraordinary industry where revolutionary developments were encouraged, and even necessary. Lastly, Cray is recognized as "the father of world supercomputer." From the perspective of the history of science and technology, this paper describes the history of Cray and his development of supercomputers. It also sums up his innovative ideas and scientific spirit, and provides a reference for supercomputer enthusiasts and peers in the history of computer research.
    Keywords: Seymour R. Cray, Supercomputer, Science and Technology History
    1. Introduction
    Supercomputer refers to the most advanced electronic computer system with the most advanced technology, the fastest computing speed, the largest storage capacity and the …
    2. The Genius Seymour
    Seymour Cray was born on September 28th, 1925 in the town of Chippewa Falls, Wisconsin.
  • Recent Supercomputing Development in Japan
    Supercomputing in Japan
    Yoshio Oyanagi, Dean, Faculty of Information Science, Kogakuin University (April 24, 2006)
    Generations
    • Primordial Ages (1970's): Cray-1, 75APU, IAP
    • 1st Generation (1st half of 1980's): Cyber205, XMP, S810, VP200, SX-2
    • 2nd Generation (2nd half of 1980's): YMP, ETA-10, S820, VP2600, SX-3, nCUBE, CM-1
    • 3rd Generation (1st half of 1990's): C90, T3D, Cray-3, S3800, VPP500, SX-4, SP-1/2, CM-5, KSR2 (HPC ventures went out)
    • 4th Generation (2nd half of 1990's): T90, T3E, SV1, SP-3, Starfire, VPP300/700/5000, SX-5, SR2201/8000, ASCI (Red, Blue)
    • 5th Generation (1st half of 2000's): ASCI, TeraGrid, BlueGene/L, X1, Origin, Power4/5, ES, SX-6/7/8, PP HPC2500, SR11000, …
    Primordial Ages (1970's)
    1974: DAP, BSP and HEP started
    1975: ILLIAC IV becomes operational
    1976: Cray-1 delivered to LANL (80 MHz, 160 MF)
    1976: FPS AP-120B delivered
    1977: FACOM 230-75 APU (22 MF)
    1978: HITAC M-180 IAP
    1978: PAX project started (Hoshino and Kawai)
    1979: HEP operational as a single processor
    1979: HITAC M-200H IAP (48 MF)
    1982: NEC ACOS-1000 IAP (28 MF)
    1982: HITAC M280H IAP (67 MF)
    Characteristics of Japanese SC's
    1. Manufactured by main-frame vendors with semiconductor facilities (not ventures)
    2. Vector processors are attached to mainframes
    3. HITAC IAP: a) memory-to-memory; b) summation, inner product and 1st-order recurrence can be vectorized; c) vectorization of loops with IF's (M280)
    4. No high performance parallel machines
    1st Generation (1st half of 1980's)
    1981: FPS-164 (64 bits)
    1981: CDC Cyber 205 (400 MF)
    1982: Cray XMP-2, Steve Chen (630 MF)
    1982: Cosmic Cube in Caltech; Alliant FX/8 delivered; HEP installed
    1983: HITAC S-810/20 (630 MF)
    1983: FACOM VP-200 (570 MF)
    1983: Encore, Sequent and TMC founded; ETA spun off from CDC
    1st Generation (1st half of 1980's, continued)
    1984: Multiflow founded
    1984: Cray XMP-4 (1260 MF)
    1984: PAX-64J completed (Tsukuba)
    1985: NEC SX-2 (1300 MF)
    1985: FPS-264
    1985: Convex C1
    1985: Cray-2 (1952 MF)
    1985: Intel iPSC/1, T414, NCUBE/1, Stellar, Ardent…
    1985: FACOM VP-400 (1140 MF)
    1986: CM-1 shipped; FPS T-series (max 1 TF!!)
    Characteristics of Japanese SC in the 1st G. …
  • NVIDIA Powers the World's Top 13 Most Energy Efficient Supercomputers
    NVIDIA Powers the World's Top 13 Most Energy Efficient Supercomputers
    ISC -- Advancing the path to exascale computing, NVIDIA today announced that the NVIDIA® Tesla® AI supercomputing platform powers the top 13 measured systems on the new Green500 list of the world's most energy-efficient high performance computing (HPC) systems. All 13 use NVIDIA Tesla P100 data center GPU accelerators, including four systems based on the NVIDIA DGX-1™ AI supercomputer.
    • As Moore's Law slows, NVIDIA Tesla GPUs continue to extend computing, improving performance 3X in two years
    • Tesla V100 GPUs projected to provide U.S. Energy Department's Summit supercomputer with 200 petaflops of HPC, 3 exaflops of AI performance
    • Major cloud providers commit to bring NVIDIA Volta GPU platform to market
    NVIDIA today also released performance data illustrating that NVIDIA Tesla GPUs have improved performance for HPC applications by 3X over the Kepler architecture released two years ago. This significantly boosts performance beyond what would have been predicted by Moore's Law, even before it began slowing in recent years. Additionally, NVIDIA announced that its Tesla V100 GPU accelerators -- which combine AI and traditional HPC applications on a single platform -- are projected to provide the U.S. …
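The "3X in two years" claim can be put side by side with Moore's-Law-style doubling by annualizing both rates; a small sketch of that arithmetic, using nothing beyond the two ratios quoted above:

```python
def annual_growth(ratio: float, years: float) -> float:
    """Annualized growth factor implied by `ratio` improvement over `years`."""
    return ratio ** (1.0 / years)

gpu_rate = annual_growth(3.0, 2.0)    # "3X in two years" -> ~1.73x per year
moore_rate = annual_growth(2.0, 2.0)  # doubling every two years -> ~1.41x per year
print(f"GPU: {gpu_rate:.2f}x/yr vs. Moore's Law: {moore_rate:.2f}x/yr")
```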
  • TOP500 Supercomputer Sites
    7/24/2018, News | TOP500 Supercomputer Sites
    http://top500.org/blog/category/feature-article/feeds/rss
    04/27/18: UK Commits a Billion Pounds to AI Development
    The British government and the private sector are investing close to £1 billion to boost the country's artificial intelligence sector. The investment, which was announced on Thursday, is part of a wide-ranging strategy to make the UK a global leader in AI and big data.
    Under the investment, known as the "AI Sector Deal," government, industry, and academia will contribute £603 million in new funding, adding to the £342 million already allocated in existing budgets. That brings the grand total to £945 million, or about $1.3 billion at the current exchange rate. The UK government is also looking to increase R&D spending across all disciplines by 2.4 percent, while also raising the R&D tax credit from 11 to 12 percent. This is part of a broader commitment to raise government spending in this area from around £9.5 billion in 2016 to £12.5 billion in 2021.
    The UK government policy paper that describes the sector deal meanders quite a bit, describing a lot of programs and initiatives that intersect with the AI investments, but are otherwise free-standing.