DEPARTMENT of COMPUTER SCIENCE Carnegie-Mellon University

Total Pages: 16

File Type: PDF, Size: 1020 KB

DEPARTMENT of COMPUTER SCIENCE, Carnegie-Mellon University
CMU-CS-85-180

A Data-Driven Multiprocessor for Switch-Level Simulation of VLSI Circuits

Edward Harrison Frank
November, 1985

Carnegie-Mellon University, Department of Computer Science, Pittsburgh, PA 15213

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science at Carnegie-Mellon University.

Copyright © 1985 Edward H. Frank

This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-81-K-1539, and by the Fannie and John Hertz Foundation. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the US Government, or the Hertz Foundation.

Abstract

In this dissertation I describe the algorithms, architecture, and performance of a computer called the FAST-1, a special-purpose machine for switch-level simulation of VLSI circuits. The FAST-1 does not implement a previously existing simulation algorithm. Rather, its simulation algorithm and its architecture were developed together. The FAST-1 is data-driven, which means that the flow of data determines which instructions to execute next. Data-driven execution has several important attributes: it implements event-driven simulation in a natural way, and it makes parallelism easier to exploit.

Although the architecture described in this dissertation has yet to be implemented in hardware, it has itself been simulated using a 'software implementation' that allows performance to be measured in terms of read-modify-write memory cycles. The software-implemented FAST-1 runs at speeds comparable to other software-implemented switch-level simulators. Thus it was possible to collect an extensive set of experimental performance results of the FAST-1 simulating actual circuits, including some with over twenty thousand transistors. These measurements indicate that a hardware-implemented, uniprocessor FAST-1 offers several orders of magnitude speedup over software-implemented simulators running on conventional computers built using similar technology.

Additional speedup over a uniprocessor can be obtained using a FAST-1 multiprocessor, which is constructed from multiple FAST-1 uniprocessors interconnected by one or more broadcast busses. In order for a FAST-1 multiprocessor to exploit the parallelism available in simulations, the FAST-1 representation of circuits must be carefully partitioned onto the processors. Although even simple versions of the partitioning problem are NP-complete, I show that an additional order of magnitude speedup can be obtained by using a multiprocessor FAST-1 and fast heuristic partitioning algorithms.

Acknowledgements

The completion of this work owes much to many people. Throughout my years at CMU, my advisor, Bob Sproull, has provided guidance, friendship, understanding, and good ideas as needed. My dear friend and officemate, Carl Ebeling, has spent many hours listening to me instead of doing his own work. Both Dr. Bob's and Carl's contributions to this research, and to my stay at CMU, are immeasurable.
My thesis committee, Randal Bryant, Al Davis, Marc Raibert, Alfred Spector, and Robert Sproull, provided the proper amount of help and criticism, at the proper times. I thank them all for carefully reading this dissertation in finite time. The CMU VLSI project, originally directed by Sproull, and now being run by HT Kung, provided the overall context in which this work was conducted. Many project members, in particular Allan Fisher and Hank Walker, have offered many good ideas and been good listeners. Several other people have given aid at important times: Rob Mathews of Silicon Solutions Corp. graciously allowed me to simulate the SSC Filter chip, and Dan Perkins, also of SSC, spent several hours working with me in order to get their test vectors to work with my simulator. Marco Annaratone provided the CMOS adder circuit, and Thomas Anantharaman provided the multiplier circuit. Many of the other circuits were given to me by Carl Ebeling. Ivan Sutherland provided many useful comments on an early draft of the thesis.

Though the CMU Computer Science Department has grown and changed since I first came here many years ago, it is still a wonderful place with great resources, both human and computational. As with most people who come from the West, I was pleasantly surprised by Pittsburgh, although it still needs a real place to ski and some real lakes. During my studies at CMU, I was supported by a Fannie and John Hertz Foundation Fellowship, for which I am most grateful.

I am most indebted to my family who, throughout my life, have provided unending love and support. My wife, Sarah Ratchye, has endured the hard times, enjoyed the good times, and along with our daughter, Whitton Anne, has made this all worthwhile.

Table of Contents

Acknowledgements v
I Introduction 1
I.1. Background and Motivation 1
I.1.1. Why Machines for VLSI Simulation? 1
I.1.2. Algorithms, Architecture, and Implementation 2
I.2. A Simple Simulation Algorithm and a Simple Simulation Machine 3
I.2.1. An Event-Driven Simulation Algorithm 3
I.2.2. The Fast-1 Simulation Machine 5
I.3. The Organization of the Dissertation 8
I.4. The Contributions of this Research 10
I.5. A Final Note 11
II Related Work 13
II.1. Data-Driven Computers 13
II.2. Multiprocessors and Interconnection Networks 16
II.2.1. MIMD Machines 16
II.2.2. SIMD Machines 17
II.3. Simulation Algorithms 18
II.3.1. A Brief Survey of Digital Simulation Techniques 18
II.3.2. Switch-level Simulation Algorithms 19
II.4. Simulation Machines 20
II.4.1. Logic-Level Machines 20
II.4.2. Switch-level Machines 22
II.4.3. Circuit-Level Machines 23
II.5. Partitioning 23
III A Switch-Level Simulation Algorithm 25
III.1. Notation 26
III.2. A Switch-Level Model of MOS Circuits 26
III.2.1. Signals 27
III.2.2. Transistors 28
III.2.3. Nodes 29
III.2.4. Strengths and Sizes 30
III.2.5. Actual Signal Models 30
III.2.6. Modeling Threshold Drops 35
III.3. Determining the Steady State of a Network 37
III.3.1. An Incorrect Switch-Level Simulation Algorithm 38
III.3.2. The Fast-1 Switch-Level Simulation Algorithm 44
III.3.3. The Correctness and Complexity of the Simulation Algorithm 46
III.3.4. Delay 50
III.3.5. Initialization 51
III.3.6. Optimizations 52
III.4. Compiling Circuits into Simulations 56
III.5. Other Issues 58
III.5.1. Multi-level Simulation 58
III.5.2. Fault Simulation 60
IV The Architecture of a Fast-1 Uniprocessor 61
IV.1. Uniprocessor Architecture 61
IV.1.1. Instruction Definition 62
IV.1.2. Instruction Execution 63
IV.1.3. Implementing Algorithm III-4 Using the Fast-1: A Summary 68
IV.1.4. Other Issues 68
IV.2. Implementation 70
IV.2.1. The Datapaths of a Fast-1 Processor 70
IV.2.2. Keeping Track of Executable Instructions 71
IV.2.3. Fixed-Width versus Variable-Width Instructions 73
IV.2.4. The Impact of Technology 78
IV.2.5. Reliability 81
IV.2.6. Other Issues 81
V Uniprocessor Experiments 83
V.1. The Circuits 84
V.2. The Software Implementation of the Fast-1 MOS Simulator 85
V.3. Static Measurements 87
V.3.1. Transistors, Nodes, and the Distribution of Instructions 88
V.3.2. Fan-In and Fan-Out 89
V.3.3. Sizes of Transistor Groups 91
V.3.4. Representing Bidirectional Transistors Using Two Unidirectional Transistor Instructions 99
V.3.5. Using Minimal Machines and Fan-in and Fan-out Trees 99
V.3.6. The Effect of Finding Unidirectional Transistors and Eliminating One-input Nodes 100
V.4. Dynamic Measurements 102
V.4.1. The Base Case 106
V.4.2. The Effect of Changing the Representation of Circuits 108
V.4.3. The Effect of Optimizations 113
V.4.4. Using a Queue versus a Stack for Keeping Track of Executable Instructions 117
V.4.5. Parallelism in the Fast-1 117
V.4.6. Execution Time Estimates for Other Simulation Machine Architectures 125
V.4.7. Some Other Thoughts on Parallelism 129
VI Algorithms for Multiprocessor Simulation 131
VI.1. Multiprocessor Implementation of Algorithm III-4 131
VI.1.1. Implementation 131
VI.1.2. Correctness 132
VI.1.3. Performance Considerations 132
VI.2. Partitioning Algorithms 133
VI.2.1. The Complexity of Partitioning 134
VI.2.2. Practical Partitioning Algorithms 135
VII The Architecture of a Fast-1 Multiprocessor 141
VII.1. Approaches to Exploiting Parallelism 141
VII.2. A Multiprocessor Fast-1 144
VII.2.1. Processor Architecture Assuming Static Instruction Assignment 145
VII.2.2. Reorganizing Fan-out and Broadcasting 148
VII.2.3. Interconnect 150
VII.2.4. Multi-level Simulation 155
VIII Multiprocessor Experiments 157
VIII.1. An Outline of the Experiments 157
VIII.2. Speedup 158
VIII.3. Message Traffic 162
VIII.4. The Impact of Broadcasting 163
IX Conclusions 167
IX.1. Contributions 167
IX.2. Other Applications 168
IX.3. Future Work 169
IX.4. And Now a Word to Our Sponsor 170
References 171
A Circuit Descriptions 177
A.1. Adder 177
A.2.
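The abstract above says the FAST-1 is data-driven: the flow of data determines which instructions execute next, which implements event-driven simulation naturally. As a rough, generic sketch of such an event-driven inner loop in C, not the FAST-1 algorithm itself (that is developed in Chapter III), and with all names (node_t, evaluate, worklist) invented for illustration:

```c
#include <stdbool.h>
#include <stddef.h>

typedef enum { LOW, HIGH, X } value_t;          /* X = unknown/undriven */

/* A node "instruction": its evaluate function recomputes the node's value
   from whatever drives it; fanout lists the nodes that read this one. */
typedef struct node {
    value_t value;
    value_t (*evaluate)(const struct node *);   /* supplied per node */
    struct node **fanout;
    size_t n_fanout;
    bool queued;                                /* already on the work list? */
} node_t;

/* Generic event-driven loop: only nodes whose inputs have changed are
   re-evaluated, so quiescent parts of the circuit cost nothing.
   `worklist` is assumed large enough to hold every node at once. */
void simulate(node_t **worklist, size_t n_pending)
{
    while (n_pending > 0) {
        node_t *n = worklist[--n_pending];      /* pop the next event */
        n->queued = false;

        value_t v = n->evaluate(n);
        if (v == n->value)
            continue;                           /* no change, no new events */
        n->value = v;

        /* The new value "flows" to every reader; schedule each reader
           that is not already pending. */
        for (size_t i = 0; i < n->n_fanout; i++) {
            node_t *r = n->fanout[i];
            if (!r->queued) {
                r->queued = true;
                worklist[n_pending++] = r;
            }
        }
    }
}
```

Keeping track of the executable "instructions" on such a work list is exactly the issue examined in the contents above (Section IV.2.2), including the choice of a queue versus a stack (Section V.4.4).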
Recommended publications
  • Mathematics 18.337, Computer Science 6.338, SMA 5505 Applied Parallel Computing Spring 2004
    Mathematics 18.337, Computer Science 6.338, SMA 5505: Applied Parallel Computing, Spring 2004. Lecturer: Alan Edelman, MIT, Department of Mathematics and Laboratory for Computer Science, Room 2-388, Massachusetts Institute of Technology, Cambridge, MA 02139, Email: [email protected], http://math.mit.edu/~edelman
    Contents:
    1 Introduction 1
    1.1 The machines 1
    1.2 The software 2
    1.3 The Reality of High Performance Computing 3
    1.4 Modern Algorithms 3
    1.5 Compilers 3
    1.6 Scientific Algorithms 4
    1.7 History, State-of-Art, and Perspective 4
    1.7.1 Things that are not traditional supercomputers 4
    1.8 Analyzing the top500 List Using Excel 5
    1.8.1 Importing the XML file 5
    1.8.2 Filtering 7
    1.8.3 Pivot Tables 9
    1.9 Parallel Computing: An Example 14
    1.10 Exercises 16
    2 MPI, OpenMP, MATLAB*P 17
    2.1 Programming style 17
    2.2 Message Passing 18
    2.2.1 Who am I? 19
    2.2.2 Sending and receiving 20
    2.2.3 Tags and communicators 22
    2.2.4 Performance, and tolerance 23
    2.2.5 Who's got the floor? 24
    2.3 More on Message Passing 26
    2.3.1 Nomenclature 26
    2.3.2 The Development of Message Passing 26
    2.3.3 Machine Characteristics 27
    2.3.4 Active Messages 27
    2.4 OpenMP for Shared Memory Parallel Programming 27
    2.5 STARP 30
    3 Parallel Prefix 33
    3.1 Parallel Prefix .
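The contents above list the basic MPI message-passing operations ("Who am I?", "Sending and receiving", "Tags and communicators"). As a minimal illustration of those calls, not taken from the lecture notes themselves, the C program below has rank 0 send one integer to rank 1:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                    /* start MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* "Who am I?" */
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* how many processes? */

    const int tag = 0;
    if (rank == 0 && size > 1) {
        int payload = 42;
        MPI_Send(&payload, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload;
        MPI_Status status;
        MPI_Recv(&payload, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d from rank 0\n", payload);
    }

    MPI_Finalize();
    return 0;
}
```

With a standard MPI installation this would typically be compiled with mpicc and run on two processes, for example mpirun -np 2 ./a.out.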
  • Trends in HPC Architectures and Parallel Programming
    Trends in HPC Architectures and Parallel Programming. Giovanni Erbacci - [email protected] - Supercomputing, Applications & Innovation Department, CINECA. Agenda: Computational Sciences - Trends in Parallel Architectures - Trends in Parallel Programming - PRACE. Computational Sciences: Computational science (with theory and experimentation) is the "third pillar" of scientific inquiry, enabling researchers to build and test models of complex phenomena. Quick evolution of innovation: • Instantaneous communication • Geographically distributed work • Increased productivity • More data everywhere • Increasing problem complexity • Innovation happens worldwide. Technology Evolution. More data everywhere: Radar, satellites, CAT scans, weather models, the human genome. The size and resolution of the problems scientists address today are limited only by the size of the data they can reasonably work with. There is a constantly increasing demand for faster processing on bigger data. Increasing problem complexity: Partly driven by the ability to handle bigger data, but also by the requirements and opportunities brought by new technologies. For example, new kinds of medical scans create new computational challenges. HPC Evolution: As technology allows scientists to handle bigger datasets and faster computations, they push to solve harder problems. In turn, the new class of problems drives the next cycle of technology innovation. Computational Sciences today: Multidisciplinary problems, coupled applications -
  • Introduction to Parallel Processing : Algorithms and Architectures
    Introduction to Parallel Processing Algorithms and Architectures PLENUM SERIES IN COMPUTER SCIENCE Series Editor: Rami G. Melhem University of Pittsburgh Pittsburgh, Pennsylvania FUNDAMENTALS OF X PROGRAMMING Graphical User Interfaces and Beyond Theo Pavlidis INTRODUCTION TO PARALLEL PROCESSING Algorithms and Architectures Behrooz Parhami Introduction to Parallel Processing Algorithms and Architectures Behrooz Parhami University of California at Santa Barbara Santa Barbara, California KLUWER ACADEMIC PUBLISHERS NEW YORK, BOSTON , DORDRECHT, LONDON , MOSCOW eBook ISBN 0-306-46964-2 Print ISBN 0-306-45970-1 ©2002 Kluwer Academic Publishers New York, Boston, Dordrecht, London, Moscow All rights reserved No part of this eBook may be reproduced or transmitted in any form or by any means, electronic, mechanical, recording, or otherwise, without written consent from the Publisher Created in the United States of America Visit Kluwer Online at: http://www.kluweronline.com and Kluwer's eBookstore at: http://www.ebooks.kluweronline.com To the four parallel joys in my life, for their love and support. This page intentionally left blank. Preface THE CONTEXT OF PARALLEL PROCESSING The field of digital computer architecture has grown explosively in the past two decades. Through a steady stream of experimental research, tool-building efforts, and theoretical studies, the design of an instruction-set architecture, once considered an art, has been transformed into one of the most quantitative branches of computer technology. At the same time, better understanding of various forms of concurrency, from standard pipelining to massive parallelism, and invention of architectural structures to support a reasonably efficient and user-friendly programming model for such systems, has allowed hardware performance to continue its exponential growth.
  • Parallel Computing in Economics - an Overview of the Software Frameworks
    Munich Personal RePEc Archive. Parallel Computing in Economics - An Overview of the Software Frameworks. Oancea, Bogdan, May 2014. Online at https://mpra.ub.uni-muenchen.de/72039/ MPRA Paper No. 72039, posted 21 Jun 2016 07:40 UTC. PARALLEL COMPUTING IN ECONOMICS - AN OVERVIEW OF THE SOFTWARE FRAMEWORKS. Bogdan OANCEA. Abstract: This paper discusses problems related to parallel computing applied in economics. It introduces the paradigms of parallel computing and emphasizes the new trends in this field - a combination between GPU computing, multicore computing and distributed computing. It also gives examples of problems arising from economics where these parallel methods can be used to speed up the computations. Keywords: Parallel Computing, GPU Computing, MPI, OpenMP, CUDA, computational economics. 1. Introduction. Although parallel computers have existed for over 40 years and parallel computing offers a great advantage in terms of performance for a wide range of applications in different areas like engineering, physics, chemistry, biology, and computer vision, in the economic research field they were very rarely used until recent years. Economists have been accustomed to using only desktop computers that have enough computing power for most of the problems encountered in economics, because nowadays off-the-shelf desktops have FLOP rates greater than a supercomputer of the late '80s. The nature of parallel computing has changed since the first supercomputers in the early '70s, and new opportunities and challenges have appeared over time, including for economists. While 35 years ago computer scientists used massively parallel processors like the Goodyear MPP (Batcher, 1980), Connection Machine (Tucker & Robertson, 1988), Ultracomputer (Gottlieb et al., 1982), machines using Transputers (Baron, 1978) or dedicated parallel vector computers, like the Cray computer series, now we are used to working on clusters of workstations with multicore processors and GPU cards that allow general programming.
  • National Bureau of Standards Workshop on Performance Evaluation of Parallel Computers
    NBSIR 86-3395: National Bureau of Standards Workshop on Performance Evaluation of Parallel Computers. Sandra B. Salazar and Carl H. Smith, U.S. Department of Commerce, National Bureau of Standards, Center for Computer Systems Engineering, Institute for Computer Sciences and Technology, Gaithersburg, MD 20899, July 1986. U.S. DEPARTMENT OF COMMERCE, Malcolm Baldrige, Secretary; NATIONAL BUREAU OF STANDARDS, Ernest Ambler, Director. The Institute for Computer Sciences and Technology of the National Bureau of Standards held a workshop on June 5 and 6 of 1985 in Gaithersburg, Maryland to discuss techniques for the measurement and evaluation of parallel computers.
  • Parallel Processing, Part 6
    Part VI: Implementation Aspects. About This Presentation: This presentation is intended to support the use of the textbook Introduction to Parallel Processing: Algorithms and Architectures (Plenum Press, 1999, ISBN 0-306-45970-1). It was prepared by the author in connection with teaching the graduate-level course ECE 254B: Advanced Computer Architecture: Parallel Processing, at the University of California, Santa Barbara. Instructors can use these slides in classroom teaching and for other educational purposes. Any other use is strictly prohibited. © Behrooz Parhami. Editions: first released Spring 2005, revised Fall 2008 and Fall 2010 (very limited updates). VI Implementation Aspects: Study real parallel machines in MIMD and SIMD classes:
  • Much Ado About Almost Nothing: Compilation for Nanocontrollers
    Much Ado about Almost Nothing: Compilation for Nanocontrollers. Henry G. Dietz, Shashi D. Arcot, and Sujana Gorantla, Electrical and Computer Engineering Department, University of Kentucky, Lexington, KY 40506-0046, [email protected], http://aggregate.org/ Advances in nanotechnology have made it possible to assemble nanostructures into a wide range of micrometer-scale sensors, actuators, and other novel devices... and to place thousands of such devices on a single chip. Most of these devices can benefit from intelligent control, but the control often requires full programmability for each device's controller. This paper presents a combination of programming language, compiler technology, and target architecture that together provide full MIMD-style programmability with per-processor circuit complexity low enough to allow each nanotechnology-based device to be accompanied by its own nanocontroller. 1. Introduction. Although the dominant trend in the computing industry has been to use higher transistor counts to build more complex processors and memory hierarchies, there always have been applications for which a parallel system using processing elements with simpler, smaller circuits is preferable. SIMD (Single Instruction stream, Multiple Data stream) has been the clear winner in the quest for lower circuit complexity per processing element. Examples of SIMD machines using very simple processing elements include STARAN [Bat74], the Goodyear MPP [Bat80], the NCR GAPP [DaT84], the AMT DAP 510 and 610, the Thinking Machines CM-1 and CM-2 [TMC89], the MasPar MP1 and MP2 [Bla90], and Terasys [Erb]. SIMD processing element circuit complexity is less than for an otherwise comparable MIMD processor because instruction decode, addressing, and sequencing logic does not need to be replicated; the SIMD control unit decodes each instruction and broadcasts the control signals.
  • Implicit Vs. Explicit Parallelism
    CS758 Introduction to Parallel Architectures (to learn more, take CS757; slides adapted from Saman Amarasinghe). Prof. David Wood.
    Implicit vs. Explicit Parallelism: implicit parallelism is exploited in hardware (superscalar processors); explicit parallelism is exposed by the compiler (explicitly parallel architectures).
    Outline ● Implicit Parallelism: Superscalar Processors ● Explicit Parallelism ● Shared Pipeline Processors ● Shared Instruction Processors ● Shared Sequencer Processors ● Shared Network Processors ● Shared Memory Processors ● Multicore Processors
    Implicit Parallelism: Superscalar Processors ● Issue varying numbers of instructions per clock: statically scheduled – using compiler techniques – in-order execution; dynamically scheduled – extracting ILP by examining 100's of instructions – scheduling them in parallel as operands become available – rename registers to eliminate anti dependences – out-of-order execution – speculative execution
    Pipelining Execution (IF: instruction fetch, ID: instruction decode, EX: execution, WB: write back) [slide table: instructions i, i+1, ..., i+4 each pass through IF, ID, EX, WB, starting one cycle apart, so their executions overlap across cycles 1-8]
    Super-Scalar Execution [slide table: in a 2-issue super-scalar machine, an integer and a floating-point instruction are issued together each cycle, so two IF-ID-EX-WB pipelines advance in parallel]
    Data Dependence and Hazards ● InstrJ is data dependent (aka true dependence) on InstrI: I: add r1,r2,r3 J: sub r4,r1,r3 ● If two instructions are data dependent, they cannot execute simultaneously, be completely overlapped, or execute out of order ● If a data dependence causes a hazard in the pipeline, it is called a Read After Write (RAW) hazard
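The dependence pair above ("I: add r1,r2,r3 / J: sub r4,r1,r3") can be restated in C; the fragment below is purely illustrative and uses local variables in place of the registers:

```c
/* Illustrative only: variables stand in for the registers r1..r6 above. */
int raw_example(int r2, int r3, int r5, int r6)
{
    /* True (RAW) dependence: the subtraction reads r1, which the addition
       writes, so the two cannot complete in the same cycle. */
    int r1 = r2 + r3;      /* I: add r1, r2, r3 */
    int r4 = r1 - r3;      /* J: sub r4, r1, r3  (must wait for r1) */

    /* No dependence between these two statements, so a 2-issue
       superscalar machine is free to execute them simultaneously. */
    int a = r2 + r3;
    int b = r5 - r6;

    return r4 + a + b;     /* use the results so nothing is optimized away */
}
```

Register renaming, mentioned in the excerpt, removes anti and output dependences but cannot remove this true dependence.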
  • Large Scale Computing in Science and Engineering
    Report of the Panel on Large Scale Computing in Science and Engineering. Peter D. Lax, Chairman, December 26, 1982. Under the sponsorship of the Department of Defense (DOD) and the National Science Foundation (NSF), in cooperation with the Department of Energy (DOE) and the National Aeronautics and Space Administration (NASA). This report does not necessarily represent either the policies or the views of the sponsoring or cooperating agencies. EXECUTIVE SUMMARY: Large scale computing is a vital component of science, engineering, and modern technology, especially those branches related to defense, energy, and aerospace. In the 1950's and 1960's the U.S. Government placed high priority on large scale computing. The United States became, and continues to be, the world leader in the use, development, and marketing of "supercomputers," the machines that make large scale computing possible. In the 1970's the U.S. Government slackened its support, while other countries increased theirs. Today there is a distinct danger that the U.S. will fail to take full advantage of this leadership position and make the needed investments to secure it for the future. Two problems stand out: Access. Important segments of the research and defense communities lack effective access to supercomputers; and students are neither familiar with their special capabilities nor trained in their use. Access to supercomputers is inadequate in all disciplines.
  • Parallel Processing, 1980 to 2020 Robert Kuhn, Retired (Formerly Intel Corporation) Parallel David Padua, University of Illinois at Urbana-Champaign
    Series ISSN: 1935-3235. Synthesis Lectures on Computer Architecture, Series Editor: Natalie Enright Jerger, University of Toronto. Parallel Processing, 1980 to 2020. Robert Kuhn, Retired (formerly Intel Corporation); David Padua, University of Illinois at Urbana-Champaign. This historical survey of parallel processing from 1980 to 2020 is a follow-up to the authors' 1981 Tutorial on Parallel Processing, which covered the state of the art in hardware, programming languages, and applications. Here, we cover the evolution of the field since 1980 in: parallel computers, ranging from the Cyber 205 to clusters now approaching an exaflop, to multicore microprocessors, and Graphic Processing Units (GPUs) in commodity personal devices; parallel programming notations such as OpenMP, MPI message passing, and CUDA streaming notation; and seven parallel applications, such as finite element analysis and computer vision. Some things that looked like they would be major trends in 1981, such as big Single Instruction Multiple Data arrays, disappeared for some time but have been revived recently in deep neural network processors. There are now major trends that did not exist in 1980, such as GPUs, distributed memory machines, and parallel processing in nearly every commodity device. This book is intended for those that already have some knowledge of parallel processing today and want to learn about the history of the three areas. In parallel hardware, every major parallel architecture type from 1980 has scaled up in performance and scaled out into commodity microprocessors and GPUs, so that every personal and embedded device is a parallel processor.
  • CS 252 Graduate Computer Architecture Lecture 17 Parallel
    CS 252 Graduate Computer Architecture, Lecture 17: Parallel Processors: Past, Present, Future. Krste Asanovic, Electrical Engineering and Computer Sciences, University of California, Berkeley. http://www.eecs.berkeley.edu/~krste http://inst.eecs.berkeley.edu/~cs252
    Parallel Processing: The Holy Grail • Use multiple processors to improve runtime of a single task – Available technology limits speed of uniprocessor – Economic advantages to using replicated processing units • Preferably programmed using a portable high-level language
    Flynn's Classification (1966): Broad classification of parallel computing systems based on number of instruction and data streams • SISD: Single Instruction, Single Data – conventional uniprocessor • SIMD: Single Instruction, Multiple Data – one instruction stream, multiple data paths – distributed memory SIMD (MPP, DAP, CM-1&2, Maspar) – shared memory SIMD (STARAN, vector computers) • MIMD: Multiple Instruction, Multiple Data – message passing machines (Transputers, nCube, CM-5) – non-cache-coherent shared memory machines (BBN Butterfly, T3D) – cache-coherent shared memory machines (Sequent, Sun Starfire, SGI Origin) • MISD: Multiple Instruction, Single Data – Not a practical configuration
    SIMD Architecture • Central controller broadcasts instructions to multiple processing elements (PEs) [slide figure: an array controller sends control and data through an inter-PE connection network to a row of PEs, each with its own memory] • Only requires one controller for whole array • Only requires storage
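The SIMD organization described above has a single array controller broadcasting each instruction to many processing elements, which apply it to their own data. A rough C rendering of that style of computation, with illustrative names only, is a single loop body applied to every element:

```c
/* Data-parallel update: conceptually, one broadcast "add" carried out by
   every processing element on its own slice of the arrays. On a sequential
   machine the loop below simply visits the elements one at a time. */
void vector_add(float *c, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];    /* single instruction stream, n data items */
}
```

On an actual SIMD array, each iteration would execute simultaneously on a different PE, with the controller issuing the add once for the whole array.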
  • Multicore Processors
    MIT OpenCourseWare http://ocw.mit.edu 6.189 Multicore Programming Primer, January (IAP) 2007. Please use the following citation format: Saman Amarasinghe, 6.189 Multicore Programming Primer, January (IAP) 2007 (Massachusetts Institute of Technology: MIT OpenCourseWare), http://ocw.mit.edu (accessed MM DD, YYYY). License: Creative Commons Attribution-Noncommercial-Share Alike. Note: Please use the actual date you accessed this material in your citation. For more information about citing these materials or our Terms of Use, visit http://ocw.mit.edu/terms
    6.189 IAP 2007, Lecture 3: Introduction to Parallel Architectures, Prof. Saman Amarasinghe, MIT.
    Implicit vs. Explicit Parallelism: implicit parallelism is exploited in hardware (superscalar processors); explicit parallelism is exposed by the compiler (explicitly parallel architectures).
    Outline ● Implicit Parallelism: Superscalar Processors ● Explicit Parallelism ● Shared Instruction Processors ● Shared Sequencer Processors ● Shared Network Processors ● Shared Memory Processors ● Multicore Processors
    Implicit Parallelism: Superscalar Processors ● Issue varying numbers of instructions per clock: statically scheduled – using compiler techniques – in-order execution; dynamically scheduled – extracting ILP by examining 100's of instructions – scheduling them in parallel as operands become available – rename registers to eliminate anti dependences – out-of-order execution – speculative execution
    Pipelining Execution (IF: instruction fetch, ID: instruction decode, EX: execution, WB: write back) [slide table: instructions i, i+1, ..., i+4 each pass through IF, ID, EX, WB, starting one cycle apart, so their executions overlap]