Unit 18. Supercomputers: Everything You Need to Know About


GAUTAM SINGH UPSC STUDY MATERIAL – Science & Technology 0 7830294949

Supercomputers offer a far higher level of computing performance than general-purpose computers. In this post, we cover the details of supercomputers: their history, performance, applications and more. We will also look at the world's top 3 supercomputers and India's National Supercomputing Mission.

What is a supercomputer?

A supercomputer is a computer with a level of computing performance far beyond that of a general-purpose computer, with performance measured in FLOPS (floating-point operations per second). Great speed and great memory are the two prerequisites of a supercomputer. Performance is generally evaluated in petaflops (1 followed by 15 zeros). Memory averages around 250,000 times that of the ordinary computers we use daily. Supercomputers are housed in large clean rooms with high airflow to permit cooling, and they are used to solve problems too complex and too large for standard computers.

History of Supercomputers in the World

Most computers on the market today are smarter and faster than the very first supercomputers, and today's supercomputers may likewise become tomorrow's ordinary computers as the history of innovation repeats itself. The first supercomputer, the CDC 1604, was built for the United States Department of Defense by Seymour Cray at Control Data Corporation (CDC) in 1957; it was one of the first computers to replace vacuum tubes with transistors. In 1964, Cray's CDC 6600 replaced IBM's Stretch as the fastest computer on earth, performing 3 million floating-point operations per second (FLOPS). The term "supercomputer" was coined to describe the CDC 6600.
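The petaflop unit defined above (1 followed by 15 zeros) can be made concrete with a small conversion sketch. This is purely illustrative; the `to_flops` helper and the unit table are our own, not from any library:

```python
# Standard SI magnitudes used for quoting supercomputer performance.
UNITS = {
    "megaflops": 10**6,   # million FLOPS
    "gigaflops": 10**9,   # billion FLOPS
    "teraflops": 10**12,  # trillion FLOPS
    "petaflops": 10**15,  # quadrillion FLOPS (1 followed by 15 zeros)
}

def to_flops(value, unit):
    """Convert a performance figure in the given unit to raw FLOPS."""
    return value * UNITS[unit]

# A 93 petaflop/s machine expressed in raw floating-point operations per second:
print(f"{to_flops(93, 'petaflops'):.2e}")  # → 9.30e+16
```

Each step up the table is a factor of 1,000, which is why the jump from the gigahertz-class desktops we use daily to petaflop-class machines spans six orders of magnitude.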
Earlier supercomputers had very few processors, but as technology evolved and vector processing gave way to parallel processing, processor counts multiplied manifold, producing the ultra-fast supercomputers of the current decade.

History of Supercomputers in India

As the saying goes, "necessity is the mother of invention": India began its journey towards supercomputers after it was denied the import of Cray supercomputers from the United States of America, owing to the arms embargo imposed on India after its nuclear tests in the 1970s. The US was of the opinion that India might use them for military rather than civilian purposes, since supercomputers fall under the dual-use technology group. The ideation phase started in the 1980s. The first indigenous supercomputer, PARAM 8000, was developed in 1991 by the Centre for Development of Advanced Computing (C-DAC). Russian assistance in its development was significant: PARAM 8000 was replicated and installed at ICAD Moscow in 1991 under a Russian collaboration. In 2007, India held a spot in the top 10 fastest supercomputers. As of July 2016, India had 9 supercomputers with speeds in the top 500, but none in the top 10.

How powerful are supercomputers compared to an ordinary computer?

The performance of ordinary computers is generally quoted in MIPS (million instructions per second). MIPS counts the fundamental programming commands (read, write, store, and so on) the processor can manage, so ordinary computers are compared by the number of MIPS they can handle, typically reflected in the processor's clock speed in gigahertz. Supercomputers are rated differently because they deal with scientific calculations: they are measured by how many floating-point operations per second (FLOPS) they can perform.
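To make "operations per second" concrete, here is a minimal sketch of estimating floating-point throughput by timing a loop. Note the caveat in the comments: a pure-Python loop is dominated by interpreter overhead, so this understates real hardware peak by orders of magnitude; it only illustrates what the FLOPS metric measures:

```python
import time

def estimate_flops(n=5_000_000):
    """Crudely estimate floating-point operations per second by timing
    n iterations of a multiply-add loop. Interpreter overhead dominates,
    so this wildly understates hardware peak; it only illustrates the
    meaning of the FLOPS metric."""
    x, acc = 1.0000001, 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc * x + 1.0  # two floating-point operations per iteration
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed

print(f"~{estimate_flops():.2e} FLOPS on this machine (pure-Python loop)")
```

Real supercomputer rankings use carefully tuned benchmarks (the TOP500 list uses LINPACK, a dense linear-algebra workload) rather than a toy loop like this, but the principle is the same: count floating-point operations and divide by elapsed time.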
Since supercomputers were first developed, their performance has been measured in successively greater units of FLOPS: megaflops (10^6), then gigaflops (10^9), teraflops (10^12) and now petaflops (10^15).

World's top 3 supercomputers

1. Sunway TaihuLight (China), with a computing power of 93 petaflop/s.
2. Tianhe-2 (MilkyWay-2) (China), capable of 33.8 petaflop/s.
3. Titan (US), with a computing capacity of 17.5 petaflop/s.

What is next-generation supercomputing?

Optical computing: calculations at near the speed of light, using optical devices and connections in place of transistors. The latest developments in this field include optical equivalents of transistors that are switched using photons rather than electrons. Since photons travel at the speed of light, calculations could approach light speed.

DNA computing: calculations performed by recombining DNA in a parallel environment. Numerous possibilities are tried at the same time; the most optimal solution is "the strongest to survive."

Quantum computing: not in practical use yet, with only conceptual proofs done, but think of it as calculations completing almost before you have finished posing them; work is done in the blink of an eye.

What are the applications of a supercomputer?

Academic research: observing and simulating phenomena that are too big, too small, too fast or too slow to observe in laboratories. For example, astrophysicists use supercomputers as "time machines" to explore the past and the future of our universe. Another important area is quantum mechanics.

Weather and climate modelling: forecasting with better accuracy by analysing multiple factors and their interrelationships.

Medicine discovery: for example, information about how a protein folds leads to the discovery of new drugs.

Monsoon forecasting using dynamic models.
Big data mining: strengthening and better mobilising the Digital India mission.

Oil and gas exploration: helping ensure India's energy security.

Airplane and spacecraft aerodynamics research and development: better safety standards and smoother connectivity, easing transportation.

Simulation of nuclear fission and fusion processes: better nuclear infrastructure models, again aiding the nation's energy security.

Molecular dynamics: supercomputer simulations allow scientists to dock two molecules together to study their interaction, which may lead to the development of innovative materials for future-generation technologies.

In 1994, a supercomputer was used to alert scientists to the collision of a comet with Jupiter, giving them time to prepare to observe and record the event for useful analysis and its application in predicting future comet collisions with the Earth.

What are the initiatives taken by the Government of India?

In the 12th Five-Year Plan, the Government of India (GOI) committed that $2.5bn would be sanctioned for research in the supercomputing field. In 2015, GOI approved a 7-year supercomputing programme, the National Supercomputing Mission, which aims to create a cluster of 73 supercomputers connecting various academic and research institutions across India, with a $730mn investment.

Some facts for Prelims

There are as yet no exaflop (a thousand petaflops) supercomputers in the world; the first is expected around 2019-20. India is also preparing to launch its exaflop supercomputers by 2020. China's Sunway TaihuLight is the fastest supercomputer (93 Pflop/s), and China has more supercomputers than the USA as of July 2016.

Possible Sample Questions for Mains

1. What are supercomputers? What is its status in India?
How does it help in the development of India and the world?
2. Supercomputers have more strategic significance than scientific. Illustrate.

Sample Questions for Prelims

Question: With reference to supercomputers, petaflops are related to?
A – The latest model of supercomputers developed by China.
B – The latest model of supercomputers developed by the USA.
C – The performance of supercomputers.
D – Floppy disks used on normal desktop computers.

Answer: (Option C) The performance of supercomputers.

Learning Zone: Performance is generally evaluated in petaflops (1 followed by 15 zeros); a petaflop is a quadrillion floating-point operations per second.

THANKS FOR READING – VISIT OUR WEBSITE www.educatererindia.com