ACSS Computing
University of Minnesota Twin Cities
April 1986

Reflections

Seymour Cray's Machines (Part 1)
Lawrence Liddiard

Seymour Cray is known as the man who has been the chief architect of seven remarkable computers since 1958. The seven machines are the 1604, 160, 6600, and 7600 systems of Control Data Corporation (CDC) and the CRAY-1, -2, and -3 of Cray Research Incorporated (CRI), and all of them, except the 7600, were, are, or will be part of the computing resources used at the University of Minnesota.

The first director of computing at Minnesota, Dr. Marvin Stein, once told me about a conversation he had with a former Electrical Engineering graduate of the University in the late '50s. During that visit, the graduate, Seymour Cray, offered the University a chance to share in the development of a new 48-bit word computer made with transistors. Lacking the necessary funds (a common University problem), Dr. Stein turned down that offer, and Mr. Cray went on to be the architect of CDC computers.

Notwithstanding this early rejection of Cray's work, the University of Minnesota acquired the CDC 1604 (serial number 50) and CDC 160 computers, new in 1962, the CDC 6600 (serial number 16), new in 1966, a slightly used CRAY-1 (serial number 12) acquired in 1981, and the CRAY-2 (serial number 3), new in 1985. The CDC 1604 retired in 1968, the CDC 6600 was replaced in 1974 by its twin sister the CYBER 74, which finally retired in 1983, and the CRAY-1 will be put out to pasture in 1986.

The CDC 1604 made major advances over previous machines, such as the 48-bit word length that gave floating point numbers a 10 to the 616 exponent range, compared to the 10 to the 76 range on competing systems and the current IEEE standard single precision. In other respects the CDC 1604 combined elements of the earlier UNIVAC 1103, besides using memory banking and packing two instructions per word.

But the CDC 6600 was the first of Cray's major scientific computer advances: It included a simplified instruction set, register files, parallel operation both with I/O processors and arithmetic functional units, floating point arithmetic with infinite and indefinite values, a sequential instruction stack, and it only cost $7 million. In fact, the $7 million price tag seemed to be the right price during 1966-1976 for a leading-edge computer, since it also was the approximate price for the CDC 7600 and the CRAY-1.

The CDC 7600 added the concepts of "pipelined parallel arithmetic units" (the precursor of vector units on the CRAY-1) and the associative instruction stack. A minor aberration was introduced on the CDC 7600 because it had both fast small and slower large core memories for single and eight-word loads/stores. Core memory was the speed bottleneck in the design of the 7600 and represented 80 percent of the component cost.

Cray corrected this deviation from the right path in the CRAY-1, which used chips for memory that other manufacturers had used only for high-speed register files. The CRAY-1 introduced "vector registers" with corresponding pipelined parallel vector functional units, returned to a uniformly fast 4 cycle memory, and extended the exponent range to 10 to the 4960 while adding post-operation floating point error detection and recovery.

Simplified Instruction Set

Seymour Cray was always looking for the fastest execution rates, and in the early 1980s the concept of reduced instruction set computers (RISC), as opposed to the then-current complex (CISC) machines, began to be promoted within academia. The first VLSI chip using that concept was called the RISC I in a journal article by a group at the University of California, Berkeley. In that article [1], the authors wrote:

    Since architects of new machines build on the work of others, we believe it is important to trace the genealogy of RISC I. The initially leading proponent of reduced instruction set computers for floating-point data is [Seymour] Cray. For the last 15 years, he has combined simple instruction sets with sophisticated pipelined implementations to create the most powerful floating-point engines in the world. While Cray concentrates on impressive floating-point rates at impressive costs, RISC I concentrates on improved performance at lower cost for integer programs written in HLLs [Higher Level Languages].

Not all agree that RISC is the wave of the future [2], but some observers propose "the following elements as a working definition of a RISC" [3]:

(1) "Single-cycle operation facilitates the rapid execution of simple functions that dominate a computer's instruction stream and promotes a low interpretive overhead."

(2) "Load/Store design follows from a desire for single-cycle operation."

(3) "Hardwired control provides for the fastest possible single-cycle operation. Microcode leads to slower control paths and adds to interpretive overhead."

(4) "Relatively few instructions and addressing modes facilitate a fast, simple interpretation by the control engine."

(5) "Fixed instruction format with consistent use eases the hardwired decoding of instructions, which again speeds control paths."

(6) "More compile-time effort offers an opportunity to explicitly move static run-time complexity into the compiler."

From that proposed definition of a RISC, it does seem that Cray's machines from the 6600 onward could be considered RISC types. That same reference shows that the RISC I register file, organized as a series of overlapping registers that enabled fast procedure calls, was a large part of the performance improvement seen on that system. Perhaps the CRAY-2 should have organized its local memory to have similar procedure call speedups. At ACSS we have always considered the CDC 6600 and CYBER 74 as the world's largest and fastest micro engines.

Floating Point Arithmetic: Rounding

In the CDC 6600 and 7600, there were hidden 96-bit registers used in computing the full product, addition, or subtraction results of two 48-bit mantissas of floating point values. The instruction set had separate commands to obtain the upper or lower 48 bits of these results. Then double precision arithmetic could be done, taking approximately four times longer than single precision, with short sequences of these upper and lower precision commands. The divide instruction, a harder process, did not have a corresponding lower command to produce the division remainder, and thus a fairly long sequence of commands was needed to produce the answer for a double precision divide.
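To make the upper/lower idea concrete, here is a minimal Python sketch, not the actual CDC instruction sequence: given commands that expose both halves of the exact double-length product, a short sequence of single precision multiplies and adds yields a double precision product. The function names and operand words below are illustrative assumptions, and exponent handling and normalization are omitted.

```python
# Model the hidden double-length product with Python integers: a double
# precision value is a pair (hi, lo) of 48-bit mantissa words, and
# upper()/lower() stand in for the separate "upper" and "lower" result
# commands described above.

MANT = 48
MASK = (1 << MANT) - 1

def upper(p):            # upper 48 bits of a 96-bit product
    return p >> MANT

def lower(p):            # lower 48 bits of a 96-bit product
    return p & MASK

def dp_multiply(a_hi, a_lo, b_hi, b_lo):
    """Keep the most significant 96 bits of the 192-bit product of two
    (hi, lo) operands: a short sequence of single-width multiplies whose
    upper and lower halves are combined, in the spirit of the text above."""
    hh = a_hi * b_hi                    # contributes both of its halves
    hl = a_hi * b_lo                    # the cross terms contribute only
    lh = a_lo * b_hi                    #   their upper halves here
    lo_sum = lower(hh) + upper(hl) + upper(lh)
    hi = upper(hh) + (lo_sum >> MANT)   # propagate the carry into hi
    return hi, lo_sum & MASK

# Check against exact integer arithmetic (operand words chosen arbitrarily).
a = (0x9E3779B97F4A, 0x7C15F39CC060)
b = (0xB7E151628AED, 0x2A6ABF715880)
hi, lo = dp_multiply(*a, *b)
exact = ((a[0] << MANT | a[1]) * (b[0] << MANT | b[1])) >> (2 * MANT)
assert 0 <= exact - ((hi << MANT) | lo) <= 2   # within two low-order units
```

A double precision divide has no lower-half command to lean on, which is why, as noted above, it needed a much longer command sequence.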
Normal arithmetic rounding requires that the round bit be added after any normalization in an additive or multiplicative operation to correctly produce the rounded result. Since this can add several cycles to a command, the 6600 and 7600 were designed to have the round bits added before the operation took place in the additive (1/2 bit) and divide (1/3 bit) operations. Since a post-normalization down-shift of 1 bit can occur, the rounding for those cases is sometimes only by 1/4 bit or 1/6 bit respectively. CDC initially delivered 6600s with the rounded multiply adding a 1/2 bit before the post-normalization, which caused a bit to be added in the lowest position for product results that shifted up the one-bit maximum. This meant that integer products such as 1*1 received an answer 1/2**47 too large. All of the 6600s were corrected to round only by a 1/4 bit before the post-normalization, giving a resulting round of 1/2 or 1/4. Numerical analysts have included these Cray arithmetic rounding anomalies in programming puzzles and in articles on how not to design rounded arithmetic.
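The multiply anomaly is easy to reproduce in a toy model. The Python sketch below uses a simplified mantissa representation (48-bit mantissas standing for fractions in [0.5, 1)), so it is not a format-accurate simulation of CDC hardware; but adding the round increment before the post-normalization shift shows 1*1 coming out 1/2**47 too large under the original 1/2-bit scheme and exact under the corrected 1/4-bit scheme.

```python
# Toy model of pre-operation rounding in the 6600 multiply (illustrative,
# not hardware-accurate).  A value is mantissa/2**48 * 2**exp, with the
# mantissa normalized into [2**47, 2**48).

MANT = 48

def rounded_multiply(ma, ea, mb, eb, pre_round):
    """Multiply with the round increment added to the double-length
    product *before* the post-normalization up-shift.  pre_round is in
    units of the unshifted result's last place: 1 << 47 is the original
    1/2 bit, 1 << 46 the corrected 1/4 bit."""
    product = ma * mb + pre_round        # 96-bit product plus round bits
    exp = ea + eb
    if product < 1 << (2 * MANT - 1):    # leading bit clear, so the result
        product <<= 1                    #   takes a one-bit up-shift and the
        exp -= 1                         #   round bits shift up with it
    return product >> MANT, exp          # keep the upper 48 bits

def to_float(m, e):
    return m / 2.0**MANT * 2.0**e

one = (1 << 47, 1)                       # 1.0 represented as 0.5 * 2**1

m, e = rounded_multiply(*one, *one, pre_round=1 << 47)
print(to_float(m, e))                    # 1.0000000000000071, i.e. 1 + 2**-47

m, e = rounded_multiply(*one, *one, pre_round=1 << 46)
print(to_float(m, e))                    # 1.0 exactly, as after the fix
```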
propose "the following elements were hidden 96-bit registers used as a working definition of a RISC":3 in computing the full product, In our FORTRAN compilers (MNF addition, or subtraction results of and M77}, running a program with (1 } "Single-cycle operation two 48-bit mantissas of floating rounded arithmetic usually facilitates the rapid execution of point values. The instruction set produces more accurate answers simple functions that dominate a had separate commands to obtain and, combined with a run with computer's instruction stream and the upper or lower 48 bits of these unrounded arithmetic, quickly promotes a low interpretive results. Then double precision shows if loss of significant digits is overhead." arithmetic could be done, taking occurring at a cost penalty of only approximately four times longer two (i.e., often cheaper than (2} "Load/Store design follows than single precision with short converting to DOUBLE from a desire for single-cycle sequences of these upper and PRECISION or INTERVAL operation." lower precision commands. The arithmetic}. 50 April1986 Next month I will continue to Notes SIGMICRONewsletter, Vol.14, examine Seymour Gray's No. 4 (December 1983). machines, discussing some of the 1 David A. Patterson and Carlo other innovative ways Cray solved Sequin, "A VLSI RISC," IEEE 3Robert P. Colwell, Charles Y. floating point arnhmetic problems Computer, Vol.15, No.9 Hnchcock Ill, E. Douglas Jensen, and also examining some of the (September 1982). H. M. Brinkley Sprunt, and Charles interesting features of his machine P. Kollar, "Computers, Complexity, designs. 2William C. Hopkins, "HLLDA and Controversy," IEEE Computer, defines RISC: Thoughts on RISCs, Vol. 18, No.9 (September 1985). CISCs, and HLLDAs," ACM PrO!)rammjag Languages Programming Languages on the VAX 8600: An Overview Jim Miner and Janet Eberhart Wnh the advent of VAX 8600 services. This all adds up to a VAX Ada is a validated service, we are introducing some versatile and powerful environment implementation of the ANSI- and I new language processors and for developing and maintaining DOD-standard language. It is "' upgrading the existing ones as sophisticated applications. integrated into the VMS Common well. FORTRAN and Pascal are the Language Environment. All VMS perennial favorite languages on system services and utilities are our systems, and we have a new Ada® thus available to programs wrnten version of each. There is also a in VAX Ada. VAX Ada supports new version of COBOL and new Ada is a powerful, general-purpose VAX Record Management Ada, Common LISP, and OPSS language wnh integrated facilities Services (RMS).