Cray Supercomputers Past, Present, and Future
Recommended publications
- New CSC Computing Resources
New CSC computing resources. Atte Sillanpää and Nino Runeberg, CSC – IT Center for Science Ltd.
Outline: CSC at a glance; the new Kajaani Data Centre; Finland's new supercomputers, Sisu (Cray XC30) and Taito (HP cluster); CSC resources available for researchers.
CSC's services comprise Funet services, computing services, application services, data services for science and culture, and information management services, serving universities, polytechnics, ministries, the public sector, research centers, and companies.
Funet and data services: connections to all higher-education institutions in Finland and to 37 state research institutes and other organizations; network services and light paths; network security (Funet CERT); eduroam wireless network roaming; Haka identity management; campus support; and the NORDUnet network. Data services include digital preservation and Data for Research (TTA), the National Digital Library (KDK), international collaboration via EU projects (EUDAT, APARSEN, ODE, SIM4RDM), database and information services (the Paituli GIS service; nic.funet.fi, freely distributable files via FTP since 1990; CSC Stream), and database administration services for memory organizations (Finnish university and polytechnic libraries, the Finnish National Audiovisual Archive, the Finnish National Archives, and the Finnish National Gallery).
Current HPC system environment:

  Name          Louhi          Vuori
  Type          Cray XT4/5     HP cluster
  DOB           2007           2010
  Nodes         1864           304
  CPU cores     10864          3648
  Performance   ~110 TFlop/s   34 TFlop/s
  Total memory  ~11 TB         5 TB
  Interconnect  Cray SeaStar   QDR InfiniBand
  Topology      3D torus       Fat tree
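Dividing the quoted peak performance by the core count gives a quick consistency check on the two machines; this is back-of-envelope arithmetic on the table above, not a figure from the slides:

    \frac{110\ \text{TFlop/s}}{10864\ \text{cores}} \approx 10.1\ \text{GFlop/s per core}
    \qquad
    \frac{34\ \text{TFlop/s}}{3648\ \text{cores}} \approx 9.3\ \text{GFlop/s per core}

Both systems land near 10 GFlop/s per core, so the difference between them is almost entirely one of scale (core count) rather than per-core speed.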
- Red Storm Infrastructure at Sandia National Laboratories
Red Storm Infrastructure. Robert A. Ballance, John P. Noe, Milton J. Clauser, Martha J. Ernest, Barbara J. Jennings, David S. Logsted, David J. Martinez, John H. Naegle, and Leonard Stans, Sandia National Laboratories, P.O. Box 5800, Albuquerque, NM 87185-0807. May 13, 2005.
Abstract: Large computing systems, like Sandia National Laboratories' Red Storm, are not deployed in isolation. The computer, its infrastructure support, and the operations of both have to be designed to support the mission of the organization. This talk and paper describe the infrastructure and operational requirements of Red Storm, along with a discussion of how Sandia is meeting those requirements. They cover the facilities that surround and support Red Storm: shelter, power, cooling, security, networking, interactions with other Sandia platforms and capabilities, operations support, and user services.
1 Introduction. Stewart Brand, in How Buildings Learn [Bra95], has documented many of the ways in which buildings, often thought to be static structures, evolve over time to meet the needs of their occupants and the constraints of their environments. Machines learn in the same way. This is not the learning of "machine learning" in Artificial Intelligence; it is the very human result of making a machine smarter as it operates within its distinct environment. Deploying a large computational resource teaches many lessons, including the need to meet the expecta… The infrastructure has four primary components:
1. the physical setting, such as buildings, power, and cooling;
2. the computational setting, which includes the related computing systems that support ASC efforts on Red Storm;
3. the networking setting that ties all of these together; and
4. …
- Status on the Serverization of UNICOS (UNICOS/mk Status)
Status on the Serverization of UNICOS (UNICOS/mk Status). Jim Harrell, Cray Research, Inc., 655-F Lone Oak Drive, Eagan, Minnesota 55121.
ABSTRACT: UNICOS is being reorganized into a microkernel-based system. The purpose of this reorganization is to provide an operating system that can be used on all Cray architectures and provide both the current UNICOS functionality and a path to future distributed systems. The reorganization of UNICOS is moving forward. The port of this "new" system to the MPP is also in progress. This talk will present the current status of, and plans for, UNICOS/mk. The challenges of performance, size, and scalability will be discussed.
1 Introduction. This discussion is divided into four parts. The first part discusses the development process used by this project. The development process is the methodology that is being used to serverize UNICOS. The project is actually proceeding along multiple, semi-independent paths at the same time. The development process will help explain the information in the second part, which is a discussion of the current status of the project. The third part discusses accomplished milestones. …have to be added in order to provide required functionality, such as support for distributed applications. As the work of adding features and porting continues, there is a testing effort that ensures correct functionality of the product. Some of the initial porting can be and has been done in the simulator. However, issues such as MPP system organization, which nodes the servers will reside on, and how the servers will interact can only be completed on the hardware. The issue of …
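The paper itself contains no code, but the serverization idea it describes (OS functionality moved out of the kernel into user-level servers reached by message passing) can be sketched generically. The following is a minimal sketch in C under invented names; msg_t, file_server, and send_receive are hypothetical and are not UNICOS/mk interfaces:

    #include <stdio.h>

    /* Illustrative message format for a microkernel IPC channel.
     * Everything here is hypothetical, not the UNICOS/mk ABI. */
    typedef enum { OP_OPEN, OP_READ, OP_REPLY } op_t;

    typedef struct {
        op_t op;           /* requested operation       */
        int  handle;       /* file handle or error code */
        char payload[64];  /* pathname or returned data */
    } msg_t;

    /* The "file server": ordinary user-level code that owns all
     * filesystem state and serves one request message at a time. */
    static msg_t file_server(msg_t req) {
        msg_t reply = { OP_REPLY, -1, "" };
        if (req.op == OP_OPEN) {
            reply.handle = 3;  /* pretend req.payload was opened */
        } else if (req.op == OP_READ) {
            reply.handle = req.handle;
            snprintf(reply.payload, sizeof reply.payload,
                     "data for handle %d", req.handle);
        }
        return reply;
    }

    /* Client stub: what was a direct kernel syscall becomes a
     * synchronous send/receive. A real microkernel would route
     * this through kernel IPC, possibly to a server on another
     * node, which is what makes the design MPP-friendly. */
    static msg_t send_receive(msg_t req) { return file_server(req); }

    int main(void) {
        msg_t open_req = { OP_OPEN, 0, "/etc/motd" };
        msg_t opened = send_receive(open_req);

        msg_t read_req = { OP_READ, opened.handle, "" };
        msg_t data = send_receive(read_req);

        printf("read: %s\n", data.payload);
        return 0;
    }

The design question the paper raises (which nodes the servers reside on and how they interact) is exactly the send_receive routing step above.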
- Musings (Rik Farrow, ;login: opinion column)
Musings. Rik Farrow, Opinion. Rik is the editor of ;login:. [email protected]
While preparing this issue of ;login:, I found myself falling down a rabbit hole, like Alice in Wonderland. And when I hit bottom, all I could do was look around and puzzle about what I discovered there. My adventures started with a casual comment, made by an ex-Cray Research employee, about the design of current supercomputers. He told me that today's supercomputers cannot perform some of the tasks that they are designed for, and used weather forecasting as his example. I was stunned. Could this be true? Or was I just being dragged down some fictional rabbit hole? I decided to learn more about supercomputer history.
Supercomputers. It is humbling to learn about the early history of computer design. Things we take for granted, such as pipelining instructions and vector processing, were important inventions in the 1970s. The first supercomputers were built from discrete components (that is, transistors soldered to circuit boards) and had clock speeds in the tens of nanoseconds. To put that in real terms, the Control Data Corporation's (CDC) 7600 had a clock cycle of 27.5 ns, or in today's terms, 36.4 MHz. This was CDC's second supercomputer (the 6600 was first), but it included instruction pipelining, an invention of Seymour Cray. The CDC 7600 peaked at 36 MFLOPS, but generally got 10 MFLOPS with carefully tuned code. The other cool thing about the CDC 7600 was that it broke down at least once a day. …
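As a quick check of the figures quoted in the column, the clock frequency is just the reciprocal of the cycle time:

    f = \frac{1}{T} = \frac{1}{27.5 \times 10^{-9}\,\mathrm{s}} \approx 3.64 \times 10^{7}\,\mathrm{Hz} = 36.4\,\mathrm{MHz}

The 36 MFLOPS peak then corresponds to roughly one floating-point result per clock cycle.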
- Cray Product Installation and Configuration
Cray Product Installation and Configuration. Scott E. Grabow, Silicon Graphics, 655F Lone Oak Drive, Eagan, MN 55121, [email protected]
ABSTRACT: The Common Installation Tool (CIT) has had a large impact upon the installation process for Cray software. The migration to CIT for installation has yielded a large number of questions about the methodology for installing new software via CIT. This document discusses the installation process with CIT and how CIT's introduction has brought about changes to our installation process and documentation.
1 Introduction. The Common Installation Tool (CIT) has made a dramatic change to the processes and manuals for installation of UNICOS and asynchronous products. This document is divided into four major sections dealing with the changes and the impacts of the installation process.
2 Changes made between UNICOS 9.0 and UNICOS 10.0/UNICOS/mk. With the release of UNICOS 9.2 and UNICOS/mk, the packaging and installation of Cray software has undergone major changes compared with the UNICOS 9.0 packaging and installation process:
- provide a complete replacement approach rather than an incremental software approach or a mods-based release;
- provide a better way to ensure that relocatable and source files match on site after an install;
- reduce the amount of time needed to do an install;
- support a single-source upgrade option for all systems.
Some customers have expressed that the UNICOS 9.0 installation process is rather hard to understand initially, due to the incremental software approach, the number of packages that need to be loaded, and when these packages need to be installed. Part of the problem was that the UNICOS Installation Guide [1] did not present a clear path as to what should be performed during a …
- Merrimac – High-Performance and Highly-Efficient Scientific Computing with Streams
Merrimac – High-Performance and Highly-Efficient Scientific Computing with Streams. A dissertation submitted to the Department of Electrical Engineering and the Committee on Graduate Studies of Stanford University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Mattan Erez, May 2007. © Copyright by Mattan Erez 2007. Approved by William J. Dally (principal adviser), Patrick M. Hanrahan, and Mendel Rosenblum.
Abstract: Advances in VLSI technology have made the raw ingredients for computation plentiful. Large numbers of fast functional units and large amounts of memory and bandwidth can be made efficient in terms of chip area, cost, and energy; however, high-performance computers realize only a small fraction of VLSI's potential. This dissertation describes the Merrimac streaming supercomputer architecture and system. Merrimac has an integrated view of the applications, software system, compiler, and architecture. We will show how this approach leads to over an order of magnitude gain in performance per unit cost, unit power, and unit floor-space for scientific applications when compared to common scientific computers designed around clusters of commodity general-purpose processors. …
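The abstract's efficiency claim rests on the stream programming model: computation is phrased as kernels applied to streams of records, so intermediate values stay in local register files and main memory is touched in a few sequential passes. A minimal sketch of that structure in C, with invented names (vec3, scale_kernel, stream_apply); Merrimac's actual programming system differs in detail:

    #include <stddef.h>
    #include <stdio.h>

    typedef struct { float x, y, z; } vec3;

    /* A kernel: a pure function applied to each stream record.
     * Intermediates live in registers, not global memory. */
    static vec3 scale_kernel(vec3 v, float a) {
        vec3 r = { a * v.x, a * v.y, a * v.z };
        return r;
    }

    /* The stream program: read the input stream once, apply the
     * kernel, write the output stream once. Two sequential passes
     * over memory is the access pattern stream hardware rewards. */
    static void stream_apply(const vec3 *in, vec3 *out,
                             size_t n, float a) {
        for (size_t i = 0; i < n; i++)
            out[i] = scale_kernel(in[i], a);
    }

    int main(void) {
        vec3 in[4] = { {1,2,3}, {4,5,6}, {7,8,9}, {10,11,12} };
        vec3 out[4];
        stream_apply(in, out, 4, 0.5f);
        printf("out[2] = (%g, %g, %g)\n", out[2].x, out[2].y, out[2].z);
        return 0;
    }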
- Magnetic Peripherals Inc Storage Module Drive (c. 1975, 14-inch cartridge disk drive)
AccessionIndex: TCD-SCSS-T.20121208.028. Accession date: 8-Dec-2012. Accession by: Tom Kearney. Object name: Magnetic Peripherals Inc Storage Module Drive. Vintage: c. 1975. Synopsis: 14-inch cartridge disk drive. S/N: 652128.
Description: Magnetic Peripherals Inc (MPI), of Minneapolis, Minnesota, was a joint venture formed by Control Data Corporation (CDC) and Honeywell Bull in 1975. CDC contributed their disk drive facilities at Normandale and Omaha, USA. Honeywell contributed its ex-GE disk plant at Oklahoma City and CII-Honeywell Bull's plant at Heppenheim, Germany. Its early products were IBM-3330-like third-generation disk drives, superseded by IBM-3340/3350/3370-like drives, all with IBM interfaces or Storage Module Device (SMD) interfaces. It became a major player in the hard disk drive market. It was the worldwide leader in 14-inch disk drive technology in the OEM marketplace in the 1970s and early 1980s, especially with its SMD and CMD (Cartridge Module Drive) interfaces, and with its 24-hour/7-days-a-week plant at Brynmawr in South Wales, which celebrated the production of 1 million disks and 3 million magnetic tapes in Oct-1979. MPI was renamed Imprimis in 1988 and was bought by Seagate in 1989. The drive in this collection looks almost identical to a CDC 9762 80 MB disk drive, introduced in 1974 by CDC (see Figures 21 and 22). However, the drive in this collection was made in Portugal by Magnetic Peripherals Inc, and from the rear can be seen to have a different interface, which looks very like an SMD interface. …
- Vector vs. Scalar Processors: A Performance Comparison Using a Set of Computational Science Benchmarks
Vector vs. Scalar Processors: A Performance Comparison Using a Set of Computational Science Benchmarks. Mike Ashworth, Ian J. Bush and Martyn F. Guest, Computational Science & Engineering Department, CCLRC Daresbury Laboratory.
ABSTRACT: Despite a significant decline in their popularity in the last decade, vector processors are still with us, and manufacturers such as Cray and NEC are bringing new products to market. We have carried out a performance comparison of three full-scale applications: the first, SBLI, a Direct Numerical Simulation code from Computational Fluid Dynamics; the second, DL_POLY, a molecular dynamics code; and the third, POLCOMS, a coastal-ocean model. Comparing the performance of the Cray X1 vector system with two massively parallel (MPP) microprocessor-based systems, we find three rather different results. The SBLI PCHAN benchmark performs excellently on the Cray X1 with no code modification, showing 100% vectorisation and significantly outperforming the MPP systems. The performance of DL_POLY was initially poor, but we were able to make significant improvements through a few simple optimisations. The POLCOMS code has been substantially restructured for cache-based MPP systems and now does not vectorise at all well on the Cray X1, leading to poor performance. We conclude that both vector and MPP systems can deliver high performance levels but that, depending on the algorithm, careful software design may be necessary if the same code is to achieve high performance on different architectures.
KEYWORDS: vector processor, scalar processor, benchmarking, parallel computing, CFD, molecular dynamics, coastal ocean modelling.
1. Introduction. All of the key computational science groups in the UK made use of vector supercomputers during their halcyon days of the 1970s, 1980s and into the early 1990s [1]-[3]. Vector computers entered the scene at a very early …
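The three outcomes in the abstract mostly come down to loop structure. A hedged illustration in C (not code from SBLI, DL_POLY, or POLCOMS): the first loop has independent iterations and vectorises readily, the way SBLI's structured-grid kernels do; the second has a loop-carried dependence of the kind that defeats vectorisation on a vector machine:

    #include <stdio.h>

    #define N 1000000

    static double a[N], b[N], c[N];

    int main(void) {
        /* Vector-friendly: iterations are independent, so a vector
         * unit can process many elements per instruction. */
        for (int i = 0; i < N; i++)
            c[i] = a[i] + 2.0 * b[i];

        /* Vector-hostile: each iteration reads the result of the
         * previous one (a loop-carried dependence), serialising
         * execution no matter how wide the hardware is. */
        for (int i = 1; i < N; i++)
            c[i] = c[i] + c[i - 1];

        printf("c[N-1] = %f\n", c[N - 1]);
        return 0;
    }

Restructuring a code for cache-based machines, as was done with POLCOMS, tends to shorten inner loops and add conditional logic, which pushes it toward the second pattern and away from the first.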
- UNICOS® Installation Guide for CRAY J90 Series, SG-5271 9.0.2
UNICOS® Installation Guide for CRAY J90 Series, SG-5271 9.0.2. Cray Research, Inc.
Copyright © 1996 Cray Research, Inc. All Rights Reserved. This manual or parts thereof may not be reproduced in any form unless permitted by contract or by written permission of Cray Research, Inc. Portions of this product may still be in development. The existence of those portions still in development is not a commitment of actual release or support by Cray Research, Inc. Cray Research, Inc. assumes no liability for any damages resulting from attempts to use any functionality or documentation not officially released and supported. If it is released, the final form and the time of official release and start of support is at the discretion of Cray Research, Inc.
Autotasking, CF77, CRAY, Cray Ada, CRAY Y-MP, CRAY-1, HSX, SSD, UniChem, UNICOS, and X-MP EA are federally registered trademarks and CCI, CF90, CFT, CFT2, CFT77, COS, Cray Animation Theater, CRAY C90, CRAY C90D, Cray C++ Compiling System, CrayDoc, CRAY EL, CRAY J90, Cray NQS, Cray/REELlibrarian, CraySoft, CRAY T90, CRAY T3D, CrayTutor, CRAY X-MP, CRAY XMS, CRAY-2, CRInform, CRI/TurboKiva, CSIM, CVT, Delivering the power ..., DGauss, Docview, EMDS, HEXAR, IOS, LibSci, MPP Apprentice, ND Series Network Disk Array, Network Queuing Environment, Network Queuing Tools, OLNET, RQS, SEGLDR, SMARTE, SUPERCLUSTER, SUPERLINK, Trusted UNICOS, and UNICOS MAX are trademarks of Cray Research, Inc.
Anaconda is a trademark of Archive Technology, Inc. EMASS and ER90 are trademarks of EMASS, Inc. EXABYTE is a trademark of EXABYTE Corporation. GL and OpenGL are trademarks of Silicon Graphics, Inc. …
- Through the Years… When Did It All Begin?
Through the years… When did it all begin? 1974? 1978? 1963?
CDC 6600 – 1974. NERSC started service with the first supercomputer: a well-used system (Serial Number 1) on its last legs, designed and built in Chippewa Falls, launch date 1963. Load/store architecture; the first RISC computer; the first CRT monitor; Freon cooled. State-of-the-art remote access at NERSC: via 4 acoustic modems, manually answered, capable of 10 characters/sec.
50th anniversary of the IBM/Cray rivalry: "Last week, CDC had a press conference during which they officially announced their 6600 system. I understand that in the laboratory developing this system there are only 32 people, 'including the janitor'… Contrasting this modest effort with our vast development activities, I fail to understand why we have lost our industry leadership position by letting someone else offer the world's most powerful computer…" (T.J. Watson, August 28, 1963)
CDC 7600 – 1975. Delivered in September; 36 Mflop peak, ~10 Mflop sustained, 10X the sustained performance of the CDC 6600; fast memory plus slower core memory; Freon cooled (again). Major innovations: 65KW memory, 36.4 MHz clock, pipelined functional units.
Cray-1 – 1978. NERSC transitions users to vector architectures: Serial 6; a fairly easy transition for application writers; LTSS was converted to run on the Cray-1 and became known as CTSS (Cray Time Sharing System); Freon cooled (again); a 2nd Cray-1 added in 1981. Major innovations: vector processing, dependency …
- Simulation for All: The Politics of Supercomputing in Stuttgart (free PDF)
Simulation for All: The Politics of Supercomputing in Stuttgart. David Gugerli and Ricky Wichum. Translator: Giselle Weiss. Cover image: Installing the Cray-2 in the computing center on 7 October 1983 (Polaroid UASt). Cover design: Thea Sautter, Zürich. © 2021 Chronos Verlag, Zürich. ISBN 978-3-0340-1621-6; e-book (PDF) DOI 10.33057/chronos.1621; German edition ISBN 978-3-0340-1620-9.
Contents: User's guide. The centrality issue (1972–1987): Gaining dominance; Planning crisis and a flood of proposals; Attempted resuscitation; Shaping policy and organizational structure; A diversity of machines; Shielding users from complexity; Communicating to the public. The performance gambit (1988–1996): Simulation for all; The cost of visualizing computing output; The false security of benchmarks; Autonomy through regional cooperation; Stuttgart's two-pronged solution. Network to the rescue (1997–2005): Feasibility study for a national high-performance computing network; Metacomputing – a transatlantic experiment; Does the university really need an HLRS?; Grid computing extends a lifeline. Users at work (2006–2016): Taking it easy; With Gauss to Europe; Virtual users, and users in virtual reality; Limits to growth. A history of reconfiguration. Acknowledgments. Notes. Bibliography. List of Figures.
User's guide: The history of supercomputing in Stuttgart is fascinating. It is also complex. Relating it necessarily entails finding one's own way and deciding meaningful turning points.
- Unix and Linux System Administration and Shell Programming
Unix and Linux System Administration and Shell Programming, version 56 of August 12, 2014. Copyright © 1998–2014 Milo. This book includes material from the http://www.osdata.com/ website and the text book on computer programming. Distributed on the honor system: print and read free for personal, non-profit, and/or educational purposes. If you like the book, you are encouraged to send a donation (U.S. dollars) to Milo, PO Box 5237, Balboa Island, California, USA 92662. This is a work in progress; for the most up-to-date version, visit http://www.osdata.com/ and http://www.osdata.com/programming/shell/unixbook.pdf. Please add links from your website or Facebook page.
Professors and teachers: feel free to take a copy of this PDF and make it available to your class (possibly through your academic website), so that everyone in your class has the same copy, with the same page numbers, despite the author's continual updates. Please avoid posting it to the public internet (old copies confuse things) and take it down when the class ends; you can post the same or a newer version for each succeeding class, removing old copies after each class ends to keep the search engines from being confused. You can contact the author with a specific version number and class end date, and he will put it on his website.
Chapter 0: This book looks at Unix (and Linux) shell programming and system administration.