Personal Computers (PC)


Computer Architecture -- A Quantitative Approach
College of Computer Science, Zhejiang University
CHEN WEN ZHI, [email protected], Room 511, CaoGuangBiao Building

Topics in Chapter 1
1.1 Introduction
1.2 Classes of computers
1.3 Defining computer architecture, and what is the task of computer design?
1.4 Trends in technology
1.5 Trends in power in integrated circuits
1.6 Trends in cost
1.7 Dependability
1.8 Measuring, reporting, and summarizing performance
1.9 Quantitative principles of computer design
1.10 Putting it all together

History of the Computer
[Figure: "Big Fishes Eating Little Fishes" -- larger machine classes absorbing smaller ones over time.]

Computer Food Chain
Then: mainframe, minicomputer, workstation, PC, minisupercomputer, supercomputer, and massively parallel processors each occupied their own niche.
Now: embedded systems, PCs, workstations, servers, mainframes, and supercomputers remain; the minicomputer and minisupercomputer have been eaten, and massively parallel processors have been absorbed into the server and supercomputer classes.

Incredible Performance Improvement
[Figure: growth in processor performance relative to the VAX-11/780, 1978-2006. Roughly 25%/year until the mid-1980s, about 52%/year from 1986 to 2002, and about 20%/year since then, reaching a factor of several thousand by 2006. Today a $500 computer has more performance, more main memory, and more disk than a $1,000,000 computer in 1985.]

Conclusion
- 25%/year: technological improvement was more steady than progress in computer architecture.
- 52%/year: after the emergence of RISC, computer design emphasized both architectural innovation and efficient use of technology improvements; computer architecture plays an important role in performance improvement.
- 20%/year: little ILP is left to exploit and power dissipation limits further frequency scaling, so the faster uniprocessor gives way to multiple processors on a chip, ILP gives way to TLP and DLP, and parallelism exploited implicitly by the compiler and hardware gives way to parallelism expressed explicitly by the programmer.
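As a rough worked illustration of what these annual rates compound to (added here for clarity; the exact factors depend on the baseline year chosen), performance relative to a fixed baseline after n years of growth at annual rate r is:

```latex
P_{\mathrm{rel}}(n) = (1 + r)^{n},
\qquad (1.52)^{16} \approx 810,
\qquad (1.25)^{16} \approx 35
```

So sixteen years at 52%/year yields roughly 23 times more cumulative improvement than the pre-RISC 25%/year rate would have -- a gap attributable to how the technology was used, not to the technology alone.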
Processing Ability Enables New Applications
[Figure: processing ability in MIPS, 1975-2000, from the 8086 (below 1 MIPS) through the 80286, i386, i486, Pentium, Pentium II, Pentium III, and Pentium 4 (thousands of MIPS); each jump in capability makes new applications feasible.]

Why Such Change in 60 Years?
Two reasons:
- Advances in the technology used to build computers: integrated circuits, storage devices (including RAM and disk), and peripheral devices.
- Innovation in computer design: designs have cycled from simple to complex to most complex and back to simple again; progress is sometimes rapid, sometimes slow, and many technologies have been washed out along the way.

Four Decades of the Microprocessor
The 1970s -- "Microprocessors": programmable controllers, single-chip microprocessors, personal computers (PCs).
The 1980s -- "Quantitative Architecture": instruction pipelining, fast cache memories, compiler considerations, workstations.
The 1990s -- "Instruction-Level Parallelism": superscalar processors, speculative microarchitectures, aggressive code scheduling, low-cost desktop supercomputing.
The 2000s -- "Thread-Level / Data-Level Parallelism".

Forces on Computer Architecture
Computer architecture -- the ISA (instruction set design), the organization, and the hardware/software boundary -- sits at the intersection of technology, parallelism, programming languages, applications, interface design, compilers, operating systems, measurement and evaluation, and history. Computer architecture has been at the core of this technological development and is still moving forward.

Topics in Chapter 1 -- 1.2 Classes of Computers

Classes of Computers
Flynn's taxonomy: a classification of computer architectures based on the number of streams of instructions and data.

SISD (Single Instruction, Single Data)
- A serial (non-parallel) computer.
- Single instruction: only one instruction stream is being acted on by the CPU during any one clock cycle.
- Single data: only one data stream is being used as input during any one clock cycle.
- Deterministic execution.
- This is the oldest and, until recently, the most prevalent form of computer.
- Examples: most PCs, single-CPU workstations, and mainframes.
[Figure: SISD organization -- one control unit (CU) feeding one processing unit (PU) and one memory unit (MU) over a single instruction stream (IS) and data stream (DS).]

SIMD (Single Instruction, Multiple Data)
- A type of parallel computer.
- Single instruction: all processing units execute the same instruction at any given clock cycle.
- Multiple data: each processing unit can operate on a different data element.
- This type of machine typically has an instruction dispatcher, a very high-bandwidth internal network, and a very large array of very small-capacity instruction units.
- Best suited for specialized problems characterized by a high degree of regularity, such as image processing.
- Synchronous (lockstep) and deterministic execution.
- Two varieties: processor arrays (Connection Machine CM-2, MasPar MP-1, MP-2) and vector pipelines (IBM 9000, Cray C90, Fujitsu VP, NEC SX-2, Hitachi S820).
[Figure: SIMD organization -- one control unit broadcasting a single instruction stream to processing units PU1..PUn, each with its own memory module MM1..MMn and its own data stream.]
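The same SIMD idea appears in miniature in the vector extensions of desktop processors. The sketch below is an illustration added to this text, not part of the original slides; it assumes an x86 machine whose compiler provides the SSE intrinsics in <immintrin.h>, and applies one add instruction to four float lanes at a time:

```c
#include <immintrin.h>  /* SSE intrinsics (x86); assumed available */
#include <stdio.h>

#define N 8  /* multiple of 4 so the vector loop needs no scalar tail */

int main(void) {
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 10.0f * i; }

    /* One instruction, multiple data: each _mm_add_ps adds four
       float lanes in lockstep, the SIMD pattern described above. */
    for (int i = 0; i < N; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);
        __m128 vb = _mm_loadu_ps(&b[i]);
        __m128 vc = _mm_add_ps(va, vb);
        _mm_storeu_ps(&c[i], vc);
    }

    for (int i = 0; i < N; i++) printf("%.1f ", c[i]);
    printf("\n");
    return 0;
}
```

Compiled with, e.g., gcc on x86-64 (where SSE is baseline), the vector loop issues one quarter as many add instructions as a scalar element-by-element loop would, which is exactly the lockstep behavior described above.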
MISD (Multiple Instruction, Single Data)
- A single data stream is fed into multiple processing units.
- Each processing unit operates on the data independently via independent instruction streams.
- Few actual examples of this class of parallel computer have ever existed. One is the experimental Carnegie-Mellon C.mmp computer (1971).
- Some conceivable uses might be: multiple frequency filters operating on a single signal stream, or multiple cryptography algorithms attempting to crack a single coded message.
[Figure: MISD organization -- control units CU1..CUn each driving a processing element PE1..PEn with its own instruction stream, all sharing one data stream from memory.]

MIMD (Multiple Instruction, Multiple Data)
- Currently the most common type of parallel computer; most modern computers fall into this category.
- Multiple instruction: every processor may be executing a different instruction stream.
- Multiple data: every processor may be working with a different data stream.
- Execution can be synchronous or asynchronous, deterministic or non-deterministic.
- Examples: most current supercomputers, networked parallel computer "grids", and multiprocessor SMP computers -- including some types of PCs.
[Figure: MIMD organization -- control units CU1..CUn, processing elements PE1..PEn, and memory modules MM1..MMn, each pair running its own instruction and data streams over shared memory (SM) and I/O.]
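A minimal MIMD sketch, again added for illustration rather than taken from the slides: on any POSIX system with pthreads, two threads can run entirely different instruction streams on different data at the same time, asynchronously.

```c
#include <pthread.h>
#include <stdio.h>

/* Instruction stream 1: sum of squares of its own data. */
static void *worker_sum(void *arg) {
    static double result;
    int n = *(int *)arg;
    result = 0.0;
    for (int i = 1; i <= n; i++) result += (double)i * i;
    return &result;
}

/* Instruction stream 2: completely different code on different data. */
static void *worker_count_even(void *arg) {
    static long result;
    int n = *(int *)arg;
    result = 0;
    for (int i = 0; i <= n; i += 2) result++;
    return &result;
}

int main(void) {
    pthread_t t1, t2;
    int n1 = 1000, n2 = 500000;   /* different data streams */
    void *r1, *r2;

    /* Two processors (or cores) may execute these concurrently,
       each fetching its own instructions -- the MIMD pattern. */
    pthread_create(&t1, NULL, worker_sum, &n1);
    pthread_create(&t2, NULL, worker_count_even, &n2);
    pthread_join(t1, &r1);
    pthread_join(t2, &r2);

    printf("sum of squares: %.0f, even count: %ld\n",
           *(double *)r1, *(long *)r2);
    return 0;
}
```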
Classification by Market
[Figure: market classes and the era in which each emerged -- mainframe (1960s); minicomputer, supercomputer, and minisupercomputer (1970s); workstation and PC (1980s-2000s); massively parallel processors (1980s); server and embedded system (2000s).]

Effect of the Dramatic Performance Growth
- Enhanced the capability available to computer users.
- Microprocessor-based computers took over the entire range of computer design:
  minicomputer => servers built from microprocessors;
  mainframe => multiprocessors consisting of microprocessors;
  supercomputer => collections of microprocessor-based multiprocessors.

Three Computing Markets

  Feature                         Desktop                Server                  Embedded
  Price of system                 $500-$5,000            $5,000-$5,000,000       $10-$100,000
  Price of microprocessor module  $50-$500 per proc.     $200-$10,000 per proc.  $0.01-$100 per proc.
  Critical system design issues   Price-performance,     Throughput,             Price, power consumption,
                                  graphics performance   availability,           application-specific
                                                         scalability             performance

Desktop Computing
- The first, and still the largest, market in dollar terms is desktop computing.
- Requirement: optimized price-performance.
- New challenges: Web-centric, interactive applications. How should performance be evaluated?

Servers
- The role of servers in providing larger-scale and more reliable file and computing services has grown.
- For servers, different characteristics are important. First, dependability is critical. A second key feature of server systems is an emphasis on scalability. Lastly, servers are designed for efficient throughput.

Embedded Computers
- The fastest-growing portion of the computer market: 8-bit, 16-bit, 32-bit, and 64-bit devices.
- Real-time performance requirements (soft and hard).
- Strict resource constraints: limited memory size, low power consumption, and so on.
- The use of processor cores together with application-specific circuitry.
- Examples: DSPs, mobile computing, mobile phones, digital TV.

Widest Spread of Power and Cost
- 8-bit and 16-bit microprocessors for a dime.
- 32-bit, 100 million instructions per second, for $5.
- 32-bit, 1 billion instructions per second, for $100.

[Figure: embedded application areas built on Intel Architecture and Intel XScale -- retail point-of-sale, thin clients, set-top boxes, gateways/media stores, network devices, kiosks/ATMs, game platforms, office automation, industrial automation.]

What do we need to design for this computing market? What does a computer architecture designer need to know?

Topics in Chapter 1 -- 1.3 Defining Computer Architecture and the Task of Computer Design

What Computer Architecture Is All About
- What are the components of a computer?
- How do we effectively put the various components together?

Original Concept of Computer Architecture
"The attributes of a [computing] system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flows and controls, the logic design, and the physical implementation."
-- Amdahl, Blaauw, and Brooks, 1964

Instruction Set Architecture (ISA)
[Figure: the layers above the ISA -- applications, operating system, compiler.]
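To make the "as seen by the programmer" distinction concrete, here is a small added sketch (not from the slides): the ISA fixes the registers, instructions, and memory behavior that compiled code may rely on, while everything below it -- pipelines, caches, circuit technology -- is organization and implementation. The instruction sequence in the comment is illustrative of a generic RISC-style ISA, not output from any particular compiler.

```c
/* The compiler targets only the ISA contract: which instructions
   exist, what the registers hold, and how memory behaves. */
long axpy(long a, long x, long y) {
    return a * x + y;
}

/* A generic RISC-style lowering might look like (illustrative only):
 *     mul  t0, a0, a1     ; t0 = a * x
 *     add  a0, t0, a2     ; return value = t0 + y
 *     ret
 * Whether the multiplier is pipelined, how many cycles each
 * instruction takes, and what the caches do are properties of the
 * organization and hardware implementation, invisible at the ISA level. */
```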