Frontiers of Supercomputing


Frontiers of Supercomputing
by B. L. Buzbee, N. Metropolis, and D. H. Sharp

… needs. Electronic computers were developed, in fact, during and after World War II to meet the need for numerical simulation in the design of nuclear weapons, aircraft, and conventional ordnance. Today, the availability of supercomputers ten thousand times faster than the first electronic devices is having a profound impact on all branches of science and engineering—from astrophysics to elementary particle physics, from fusion energy research to automobile design. The reason is clear: supercomputers extend enormously the range of problems that are effectively solvable.

Although the last forty years have seen a dramatic increase in computer performance, the number of users and the range of applications have been increasing at an even faster rate, to the point that demands for greater performance now far outstrip the improvements in hardware. Moreover, the broadened community of users is also demanding improvements in software that will permit a more comfortable interface between user and machine.

Scientific supercomputing is now at a critical juncture. While the demand for higher speed grows daily, computers as we have known them for the past twenty to thirty years are about as fast as we can make them. The growing demand can only be met by a radical change in computer architecture: a change from a single serial processor, whose logical design goes back to Turing and von Neumann, to an aggregation of up to a thousand parallel processors that can perform many independent operations concurrently. This radical change in hardware will necessitate improvements in programming languages and software. If made, they could significantly reduce the time needed to translate difficult problems into machine-computable form. What now takes a team of scientists two years to do may be reduced to two months of effort. When this happens, practice in many fields of science and technology will be revolutionized.

These radical changes would also have a large and rapid impact on the nation's economy and security. The skill and effectiveness with which supercomputers can be used to design new and more economical civilian aircraft will determine whether there is employment in Seattle or in a foreign city. Computer-aided design of automobiles is already playing an important role in Detroit's effort to recapture its position in the automobile market. The speed and accuracy with which information can be processed will bear importantly on the effectiveness of our national intelligence activities.

Japan and other foreign countries have already realized the significance of making this quantum jump in supercomputer development. Japan is, in fact, actively engaged in a highly integrated effort to realize it. At this juncture the United States is in danger of losing its long-held leadership in supercomputing.

Various efforts are being made in this country to develop the hardware and software necessary for the change to massively parallel computer architecture. But these efforts are only loosely coordinated. How can the United States maintain its worldwide supremacy in supercomputing? What policies and strategies are needed to make the revolutionary step to massively parallel supercomputers? How can individual efforts in computer architecture, software, and languages be best coordinated for this purpose?

To discuss these questions, the Laboratory and the National Security Agency sponsored a conference in Los Alamos on August 15-19, 1983. Entitled "Frontiers of Supercomputing," the conference brought together leading representatives from industry, government, and universities who shared the most recent technical developments as well as their individual perspectives on the tactics needed for rapid progress. Before presenting highlights of the conference we will review in more depth the importance of supercomputing and the trends in computer performance that form the background for the conference discussions.

The Importance of Supercomputers

The term "supercomputer" refers to the most powerful scientific computer available at a given time. The power of a computer is measured by its speed, storage capacity (memory), and precision. Today's commercially available supercomputers, the Cray 1 from Cray Research and the CYBER 205 from Control Data Corporation, have a peak speed of over one hundred million operations per second, a memory of about four million 64-bit "words," and a precision of at least 64 bits.

There are presently about seventy-five of these supercomputers in use throughout the world. They are being used at national laboratories to solve complex scientific problems in weapon design, energy research, meteorology, oceanography, and geophysics. In industry they are being used for design and simulation of very large-scale integrated circuits, design of aircraft and automobiles, and exploration for oil and minerals. To demonstrate why we need even greater speed, we will examine some of these uses in more detail.

The first step in scientific research and engineering design is to model the phenomena being studied. In most cases the phenomena are complex and are modeled by equations that are nonlinear and singular and, hence, refractory to analysis. Nevertheless, one must somehow discern what the model predicts, test whether the predictions are correct, and then (invariably) improve the model. Each of these steps is difficult, time-consuming, and expensive. Supercomputers play an important role in each step, helping to make practical what would otherwise be impractical. The demand for improved performance of supercomputers stems from our constant desire to bring new problems over the threshold of practicality.

As an example of the need for supercomputers in modeling complex phenomena, consider magnetic fusion research. Magnetic fusion requires heating and compressing a magnetically confined plasma to the extremes of temperature and density at which thermonuclear fusion will occur. During this process unstable motions of the plasma may occur that make it impossible to attain the required final conditions. In addition, state variables in the plasma may change by many orders of magnitude. Analysis of this process is possible only by means of large-scale numerical computations, and scientists working on the problem report the need for computers one hundred times faster and with larger memories than those now available.

Another example concerns computing the equation of state of a classical one-component plasma. A one-component plasma is an idealized system of one ionic species immersed in a uniform sea of electrons such that the whole system is electrically neutral. If the equation of state of this "simple" material could be computed, it would provide a framework from which to study the equations of state of more complicated substances. Before the advent of the Cray-1, the equation of state of a one-component plasma could not be computed throughout the parameter range of interest. Using the Cray-1, scientists developed a better, more complex approximation to the interparticle potential. Improved software and very efficient algorithms made it possible for the Cray-1 to execute the new calculation at about 90 million operations per second. Even so, it took some seven hours; but, for the first time, the equation of state for a one-component plasma was calculated throughout the interesting range.

… the oceans or the atmosphere or the motion of tectonic plates cannot be tested in the laboratory, but can be simulated on a large computer. Many phenomena are difficult to investigate experimentally in a way that will not modify the behavior being studied. An example is the flow of reactants and products within an internal combustion engine. Supercomputers are currently being used to study this problem.

Testing a nuclear weapon at the Nevada test site costs several million dollars. Running a modern wind tunnel to test airfoil designs costs 150 million dollars a year. Supercomputers can explore a much larger range of designs than can actually be tested, and expensive test facilities can be reserved for the most promising designs.

Supercomputers are needed to interpret test results. For example, in nuclear weapons tests many of the crucial physical parameters are not accessible to direct observation, and the measured signals are only indirectly related to the underlying physical processes. D. Henderson of Los Alamos characterized the situation well: "The crucial linkage between these signals, the physical processes, and the device design parameters is available only through simulation."

Finally, supercomputers permit scientists and engineers to circumvent some real-world constraints. In some cases environmental considerations impose constraints. Clearly, the safety of nuclear reactors is not well suited to experimental study. But with powerful computers and reliable models, scientists can simulate reactor accidents, either minor or catastrophic, without endangering the environment.

Time can also impose severe constraints on the scientist. Consider, for example, chemical reactions that take place within microseconds. One of the advantages of large-scale numerical simulation is that the …

… or decades to complete. Here numerical simulation can provide an accelerated picture that may help, for example, in assessing the long-term effects of increasing the percentage of carbon dioxide in our atmosphere.

Trends in Performance and Architecture

Most scientists engaged in solving the complex problems outlined above feel that an increase of speed of at least two orders of magnitude is required to make significant progress. The accompanying figure shows that the increase in speed, while very rapid at the start, now appears to be leveling off.

[Figure: computer speed plotted against year, showing rapid early growth that now appears to be leveling off.]

The rapid growth has been due primarily to advances in microelectronics. First came the switch from cumbersome and capricious vacuum tubes to small and reliable semiconductor transistors. Then in 1958 Jack Kilby invented a method for fabricating many transistors on a single silicon chip a fraction of an inch on a side, the so-called integrated circuit …
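To put the seven-hour Cray-1 run described above in perspective, the article's own numbers imply a rough total operation count (a back-of-the-envelope figure, not one quoted in the text):

    90 × 10^6 operations/second × 7 hours × 3600 seconds/hour ≈ 2.3 × 10^12 operations,

that is, on the order of two trillion arithmetic operations for a single equation-of-state calculation, which helps explain why the next section argues that at least another two orders of magnitude in speed are needed.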
Recommended publications
  • Annual Reports of FCCSET Subcommittee Annual Trip Reports To
    Annual Reports of FCCSET Subcommittee. Annual trip reports to supercomputer manufacturers trace the changes in technology and in the industry, 1985-1989. FY 1986 Annual Report of the Federal Coordinating Council on Science, Engineering and Technology (FCCSET), by the FCCSET Committee on High Performance Computing.

    Summary. During the past year, the Committee met on a regular basis to review government and industry supported programs in research, development, and application of new supercomputer technology. The Committee maintains an overview of commercial developments in the U.S. and abroad. It regularly receives briefings from Government agency sponsored R&D efforts and makes such information available, where feasible, to industry and universities. In addition, the Committee coordinates agency supercomputer access programs and promotes cooperation with particular emphasis on aiding the establishment of new centers and new communications networks. The Committee made its annual visit to supercomputer manufacturers in August and found that substantial progress had been made by Cray Research and ETA Systems toward developing their next generations of machines. The Cray II and expanded Cray XMP series supercomputers are now being marketed commercially; the Committee was briefed on plans for the next generation of Cray machines. ETA Systems is beyond the prototype stage for the ETA-10 and planning to ship one machine this year. The supercomputer vendors continue to have difficulty in obtaining high performance IC's from U.S. chip makers, leaving them dependent on Japanese suppliers. In some cases, the Japanese chip suppliers are the same companies, e.g., Fujitsu, that provide the strongest foreign competition in the supercomputer market.
  • Balanced Multithreading: Increasing Throughput Via a Low Cost Multithreading Hierarchy
    Balanced Multithreading: Increasing Throughput via a Low Cost Multithreading Hierarchy. Eric Tune, Rakesh Kumar, Dean M. Tullsen, Brad Calder. Computer Science and Engineering Department, University of California at San Diego. {etune,rakumar,tullsen,calder}@cs.ucsd.edu

    Abstract: A simultaneous multithreading (SMT) processor can issue instructions from several threads every cycle, allowing it to effectively hide various instruction latencies; this effect increases with the number of simultaneous contexts supported. However, each added context on an SMT processor incurs a cost in complexity, which may lead to an increase in pipeline length or a decrease in the maximum clock rate. This paper presents new designs for multithreaded processors which combine a conservative SMT implementation with a coarse-grained multithreading capability. By presenting more virtual contexts to the operating system and user than are supported …

    … ibility of SMT comes at a cost. The register file and rename tables must be enlarged to accommodate the architectural registers of the additional threads. This in turn can increase the clock cycle time and/or the depth of the pipeline. Coarse-grained multithreading (CGMT) [1, 21, 26] is a more restrictive model where the processor can only execute instructions from one thread at a time, but where it can switch to a new thread after a short delay. This makes CGMT suited for hiding longer delays. Soon, general-purpose microprocessors will be experiencing delays to main memory of 500 or more cycles. This means that a context switch in response to a memory access can take tens of cycles and still provide a considerable performance benefit.
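A quick worked illustration of the last claim above, that a context switch costing "tens of cycles" can still hide most of a main-memory delay (the 20-cycle switch cost is an assumed round number, not a figure from the paper): with a 500-cycle miss and a 20-cycle switch, a coarse-grained thread switch recovers roughly 500 − 20 = 480 of the 500 stalled cycles, so about 96% of the stall can be covered by useful work from another thread.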
  • Amdahl's Law Threading, Openmp
    Computer Science 61C, Spring 2018 (Wawrzynek and Weaver): Amdahl's Law, Threading, OpenMP.

    Big Idea: Amdahl's (Heartbreaking) Law
    • Speedup due to enhancement E is: Speedup w/ E = (Exec time w/o E) / (Exec time w/ E)
    • Suppose that enhancement E accelerates a fraction F (F < 1) of the task by a factor S (S > 1) and the remainder of the task is unaffected:
      Execution Time w/ E = Execution Time w/o E × [(1 - F) + F/S]
      Speedup w/ E = 1 / [(1 - F) + F/S]
    • In Speedup = 1 / [(1 - F) + F/S], the (1 - F) term is the non-sped-up part and F/S is the sped-up part.
    • Example: the execution time of half of the program can be accelerated by a factor of 2. What is the program speed-up overall? 1 / (0.5 + 0.5/2) = 1 / (0.5 + 0.25) = 1.33

    Example #1: Amdahl's Law, Speedup w/ E = 1 / [(1 - F) + F/S]
    • Consider an enhancement which runs 20 times faster but which is only usable 25% of the time: Speedup w/ E = 1/(.75 + .25/20) = 1.31
    • What if it's usable only 15% of the time? Speedup w/ E = 1/(.85 + .15/20) = 1.17
    • Amdahl's Law tells us that to achieve linear speedup with 100 processors, none of the original computation can be scalar!
    • To get a speedup of 90 from 100 processors, the percentage of the original program that could be scalar would have to be 0.1% or less: Speedup w/ E = 1/(.001 + .999/100) = 90.99

    Amdahl's Law: if the portion of the program that can be parallelized is small, then the speedup is limited; the non-parallel portion limits the performance.

    Strong and Weak Scaling
    • To get good speedup on a parallel processor while keeping the problem size fixed is harder than getting good speedup by increasing the size of the problem.
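The formula on these slides translates directly into a few lines of code. Below is a small C sketch, not part of the course materials (the function name is illustrative), that evaluates Speedup w/ E = 1 / [(1 - F) + F/S] and reproduces the numbers worked above; compile with, e.g., cc amdahl.c && ./a.out.

```c
#include <stdio.h>

/* Amdahl's Law: overall speedup when a fraction f of the task
 * is accelerated by a factor s (0 < f < 1, s > 1). */
static double amdahl_speedup(double f, double s) {
    return 1.0 / ((1.0 - f) + f / s);
}

int main(void) {
    /* Half the program accelerated by a factor of 2 -> 1.33 overall. */
    printf("f=0.50,  s=2   : %.2f\n", amdahl_speedup(0.50, 2.0));
    /* Enhancement 20x faster but usable only 25% of the time -> 1.31. */
    printf("f=0.25,  s=20  : %.2f\n", amdahl_speedup(0.25, 20.0));
    /* Usable only 15% of the time -> 1.17. */
    printf("f=0.15,  s=20  : %.2f\n", amdahl_speedup(0.15, 20.0));
    /* 99.9% parallel on 100 processors -> about 90.99. */
    printf("f=0.999, s=100 : %.2f\n", amdahl_speedup(0.999, 100.0));
    return 0;
}
```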
  • Parallel Generation of Image Layers Constructed by Edge Detection
    Speed Up Improvement of Parallel Image Layers Generation Constructed By Edge Detection Using Message Passing Interface. Alaa Ismail Elnashar, Faculty of Science, Computer Science Department, Minia University, Egypt; Associate professor, Department of Computer Science, College of Computers and Information Technology, Taif University, Saudi Arabia. International Journal of Applied Engineering Research, ISSN 0973-4562, Volume 13, Number 10 (2018) pp. 7323-7332. © Research India Publications. http://www.ripublication.com

    Abstract: Several image processing techniques require intensive computations that consume CPU time to achieve their results. Accelerating these techniques is a desired goal. Parallelization can be used to save the execution time of such techniques. In this paper, we propose a parallel technique that employs both point to point and collective communication patterns to improve the speedup of two parallel edge detection techniques that generate multiple image layers using Message Passing Interface, MPI. In addition, the performance of the proposed technique is compared with that of the two concerned ones. Keywords: Image Processing, Image Segmentation, Parallel …

    A. Elnashar [29] introduced two parallel techniques, NASHT1 and NASHT2, that apply edge detection to produce a set of layers for an input image. The proposed techniques generate an arbitrary number of image layers in a single parallel run instead of generating a unique layer as in the traditional case; this helps in selecting the layers with high quality edges among the generated ones. Each presented parallel technique has two versions based on point to point communication "Send" and collective communication "Bcast" functions. The presented techniques achieved notable relative speedup compared with that of sequential ones. In this paper, we introduce a speed up improvement for both NASHT1 and NASHT2.
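The abstract above contrasts point-to-point ("Send") and collective ("Bcast") versions of the same distribution step. The sketch below is not the authors' NASHT code; it is a minimal, self-contained C/MPI illustration of the two patterns with a made-up buffer standing in for the image (build with mpicc, run with mpirun).

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1024 * 1024)   /* illustrative image-buffer size in bytes */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    unsigned char *image = malloc(N);
    if (rank == 0)                      /* root fills the buffer */
        for (int i = 0; i < N; i++) image[i] = (unsigned char)(i & 0xFF);

    /* Point-to-point variant: the root sends the buffer to each rank in turn. */
    if (rank == 0) {
        for (int dest = 1; dest < size; dest++)
            MPI_Send(image, N, MPI_UNSIGNED_CHAR, dest, 0, MPI_COMM_WORLD);
    } else {
        MPI_Recv(image, N, MPI_UNSIGNED_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    /* Collective variant: one call distributes the same buffer to all ranks,
     * typically along a tree, so it scales better than the explicit loop. */
    MPI_Bcast(image, N, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);

    /* ... each rank would now run its edge-detection filter on the data ... */

    if (rank == 0) printf("distributed %d bytes to %d ranks\n", N, size);
    free(image);
    MPI_Finalize();
    return 0;
}
```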
  • CS650 Computer Architecture Lecture 10 Introduction to Multiprocessors
    NJIT Computer Science Department, CS650 Computer Architecture. Lecture 10: Introduction to Multiprocessors and PC Clustering. Andrew Sohn, Computer Science Department, New Jersey Institute of Technology, 12/7/2003.

    Key Issues. Run PhotoShop on 1 PC or N PCs. Programmability • How to program a bunch of PCs viewed as a single logical machine. Performance, Scalability - Speedup • Run PhotoShop on 1 PC (forget the specs of this PC) • Run PhotoShop on N PCs • Will it run faster on N PCs? Speedup = ?

    Types of Multiprocessors. Key: Data and Instruction. Single Instruction Single Data (SISD) • Intel processors, AMD processors. Single Instruction Multiple Data (SIMD) • Array processor • Pentium MMX feature. Multiple Instruction Single Data (MISD) • Systolic array • Special purpose machines. Multiple Instruction Multiple Data (MIMD) • Typical multiprocessors (Sun, SGI, Cray, ...). Single Program Multiple Data (SPMD) • Programming model.

    [Diagram: Shared-Memory Multiprocessor. Processors connected through an interconnection network to shared main memory, storage, and I/O.]

    [Diagram: Distributed-Memory Multiprocessor. Each processor has its own memory (MM) and I/O (IO/S); the nodes are connected by an interconnection network.]
  • (52) Control Data
    C) (52) CONT~OL DATA literature and Distribution Services ~~.) 308 North Dale Street I st. Paul. Minnesota 55103 rJ 1 August 29, 1983 "r--"-....." (I ~ __ ,I Dear Customer: Attached is the third (3) catalog supplement since the 1938 catalog was published . .. .·Af ~ ~>J if-?/t~--62--- G. F. Moore, Manager Literature & Distribution Services ,~-" l)""... ...... I _._---------_._----_._----_._-------- - _......... __ ._.- - LOS CATALOG SUPPLEPtENT -- AUGUST 1988 Pub No. Rev [Page] TITLE' [ extracted from catalog entry] Bind Price + = New Publication r = Revision - = Obsolete r 15190060 [4-07] FULL SCREEN EDITOR (FSEDIT) RM (NOS 1 & 2) .......•...•.•...•••........... 12.00 r 15190118 K [4-07] NETWORK JOB ENTRY FACILITY (NJEF) IH8 (NOS 2) ........................... 5.00 r 15190129 F [4-07] NETWORK JOB ENTRY FACILITY (NJEF) RM (NOS 2) .........•.......•........... + 15190150 C [4-07] NETWORK TRANSFER FACILITY (NTF) USAGE (NOS/VE) .......................... 15.00 r 15190762 [4-07] TIELINE/NP V2 IHB (L642) (NOS 2) ........................................ 12.00 r 20489200 o [4-29] WREN II HALF-HEIGHT 5-114" DISK DRIVE ................................... + 20493400 [4-20] CDCNET DEVICE INTERFACE UNITS ........................................... + 20493600 [4-20] CDCNET ETHERNET EQUIPMENT ............................................... r 20523200 B [4-14] COMPUTER MAINTENANCE SERVICES - DEC ..................................... r 20535300 A [4-29] WREN II 5-1/4" RLL CERTIFIED ............................................ r 20537300 A [4-18] SOFTWARE
  • High-Performance Message Passing Over Generic Ethernet Hardware with Open-MX Brice Goglin
    High-Performance Message Passing over generic Ethernet Hardware with Open-MX. Brice Goglin. To cite this version: Brice Goglin. High-Performance Message Passing over generic Ethernet Hardware with Open-MX. Parallel Computing, Elsevier, 2011, 37 (2), pp. 85-100. 10.1016/j.parco.2010.11.001. HAL Id: inria-00533058, https://hal.inria.fr/inria-00533058, submitted on 5 Nov 2010. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

    High-Performance Message Passing over generic Ethernet Hardware with Open-MX. Brice Goglin, INRIA Bordeaux - Sud-Ouest – LaBRI, 351 cours de la Libération – F-33405 Talence – France.

    Abstract: In the last decade, cluster computing has become the most popular high-performance computing architecture. Although numerous technological innovations have been proposed to improve the interconnection of nodes, many clusters still rely on commodity Ethernet hardware to implement message passing within parallel applications. We present Open-MX, an open-source message passing stack over generic Ethernet. It offers the same abilities as the specialized Myrinet Express stack, without requiring dedicated support from the networking hardware. Open-MX works transparently in the most popular MPI implementations through its MX interface compatibility.
  • An Investigation of Symmetric Multi-Threading Parallelism for Scientific Applications
    An Investigation of Symmetric Multi-Threading Parallelism for Scientific Applications: Installation and Performance Evaluation of ASCI sPPM. Frank Castaneda [email protected], Nikola Vouk [email protected]. North Carolina State University, CSC 591c, Spring 2003. May 2, 2003. Dr. Frank Mueller.

    Introduction. Our project is to investigate the use of Symmetric Multi-Threading (SMT), or "Hyper-threading" in Intel parlance, applied to coarse-grain parallelism in large-scale distributed scientific applications. The processors provide the capability to run two streams of instruction simultaneously to fully utilize all available functional units in the CPU. We are investigating the speedup available when running two threads on a single processor that uses different functional units. The idea we propose is to utilize the hyper-thread for asynchronous communications activity to improve coarse-grain parallelism. The hypothesis is that there will be little contention for similar processor functional units when splitting the communications work from the computational work, thus allowing better parallelism and better exploiting the hyper-thread technology. We believe with minimal changes to the 2.5 Linux kernel we can achieve 25-50% speedup depending on the amount of communication by utilizing a hyper-thread aware scheduler and a custom communications API.

    Experiment Setup. Software setup: custom Linux kernel 2.5.68 with Red Hat distribution; modification of the kernel scheduler to run processes together; custom test code. Hardware setup: IBM xSeries 335 single/dual processor, 2.0 GHz Xeon.

    In order to test our code we focus on the following scenarios:
    • A serial execution of communication / computation sections: executing in serial is used to test the expected back-to-back run-time.
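A minimal sketch of the idea described above: keep one thread busy with communication-style work while the main thread computes, so the two instruction streams mostly contend for different functional units on a hyper-threaded core. The polling loop is a stand-in for the authors' custom communications API, and the workload is a dummy sum, not ASCI sPPM; build with cc -pthread.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int done = 0;

/* "Communication" thread: in the real setting this would progress
 * asynchronous messages (e.g., poll the network) while the peer computes. */
static void *comm_thread(void *arg) {
    long polls = 0;
    while (!atomic_load(&done))
        polls++;                       /* stand-in for polling work */
    printf("comm thread: %ld polls\n", polls);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, comm_thread, NULL);

    /* "Computation" on the main thread: floating-point heavy, so it uses
     * largely different functional units than the integer polling loop. */
    double sum = 0.0;
    for (long i = 1; i <= 100000000L; i++)
        sum += 1.0 / (double)i;

    atomic_store(&done, 1);
    pthread_join(t, NULL);
    printf("computation result: %f\n", sum);
    return 0;
}
```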
  • Computer Systems Architecture
    CS 352H: Computer Systems Architecture. Topic 14: Multicores, Multiprocessors, and Clusters. University of Texas at Austin, Fall 2009, Don Fussell.

    Introduction. Goal: connecting multiple computers to get higher performance. Multiprocessors: scalability, availability, power efficiency. Job-level (process-level) parallelism: high throughput for independent jobs. Parallel processing program: single program run on multiple processors. Multicore microprocessors: chips with multiple processors (cores).

    Hardware and Software. Hardware: serial (e.g., Pentium 4) or parallel (e.g., quad-core Xeon e5345). Software: sequential (e.g., matrix multiplication) or concurrent (e.g., operating system). Sequential/concurrent software can run on serial/parallel hardware. Challenge: making effective use of parallel hardware.

    What We've Already Covered. §2.11: Parallelism and Instructions (Synchronization). §3.6: Parallelism and Computer Arithmetic (Associativity). §4.10: Parallelism and Advanced Instruction-Level Parallelism. §5.8: Parallelism and Memory Hierarchies (Cache Coherence). §6.9: Parallelism and I/O: Redundant Arrays of Inexpensive Disks.

    Parallel Programming. Parallel software is the problem. Need to get significant performance improvement. Otherwise, just use a faster uniprocessor, …
  • Computer Architecture: Parallel Processing Basics
    Computer Architecture: Parallel Processing Basics. Onur Mutlu & Seth Copen Goldstein, Carnegie Mellon University, 9/9/13.

    Today. What is Parallel Processing? Why? Kinds of Parallel Processing. Multiprocessing and Multithreading. Measuring success: Speedup, Amdahl's Law, Bottlenecks to parallelism.

    Concurrent Systems. Embedded-Physical Distributed (Sensor Networks, Claytronics); Geographically Distributed (Power Grid, Internet); Cloud Computing (EC2, Tashi); Parallel.

    [Table: comparison of concurrent-system classes (Physical, Geographical, Cloud, Parallel) rated on geophysical location (+++, ++, ---, ---), relative location (+++, +++, +, -), faults (++++, +++, ++++, --), number of processors (+++, +++, +, -), network structure (varies, varies, fixed, fixed), and network connectivity (---, ---, +, +).]

    Concurrent System Challenge: Programming. The old joke: How long does it take to write a parallel program? One Graduate Student Year.

    Parallel Programming Again?? Increased demand (multicore). Increased scale (cloud). Improved compute/communicate. Change in application focus: irregular, recursive data structures.

    Why Parallel Computers? Parallelism: doing multiple things at a time. Things: instructions, …
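Since the slides list multithreading, measuring success, speedup, and Amdahl's law among today's topics, here is a small C/OpenMP sketch, not taken from the lecture, that measures speedup empirically for a trivially parallel loop (a midpoint-rule estimate of pi); build with cc -fopenmp.

```c
#include <omp.h>
#include <stdio.h>

#define STEPS 200000000L

/* Estimate pi = integral of 4/(1+x^2) on [0,1], serially and with OpenMP,
 * and report the measured speedup. */
int main(void) {
    double h = 1.0 / (double)STEPS;

    double t0 = omp_get_wtime();
    double pi_serial = 0.0;
    for (long i = 0; i < STEPS; i++) {
        double x = (i + 0.5) * h;
        pi_serial += 4.0 / (1.0 + x * x);
    }
    pi_serial *= h;
    double t_serial = omp_get_wtime() - t0;

    t0 = omp_get_wtime();
    double pi_par = 0.0;
    #pragma omp parallel for reduction(+:pi_par)
    for (long i = 0; i < STEPS; i++) {
        double x = (i + 0.5) * h;
        pi_par += 4.0 / (1.0 + x * x);
    }
    pi_par *= h;
    double t_par = omp_get_wtime() - t0;

    printf("pi ~= %.8f, serial %.2fs, parallel %.2fs, speedup %.2fx on %d threads\n",
           pi_par, t_serial, t_par, t_serial / t_par, omp_get_max_threads());
    return 0;
}
```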
  • Your Speedup Is a Megaflop!
    KOLUMNE. CHIMIA 47 (1993) Nr. 1/2 (Januar/Februar), 22. COMPUTATIONAL CHEMISTRY COLUMN. Column Editors: Prof. Dr. J. Weber, University of Geneva; Prof. Dr. H. Huber, University of Basel; Dr. H. P. Weber, Sandoz AG, Basel. Chimia 47 (1993) 22-23. © Neue Schweizerische Chemische Gesellschaft. ISSN 0009-4293.

    Your Speedup is a Megaflop!

    'I just bought a new personal computer, the LAST, with a speed of 50 MHz, whereas your old MADOS has only a 25 MHz frequency.' 'You are joking, the LAST makes only 2 Mflops, whereas mine makes 25 Mips.' 'Never mind, your old scalar processors will not do it for long, I just read about a supercomputer of the Teraflop generation with a speedup of over 100!' Not far from reality - perhaps virtual …

    … Another common measure is the millions of instructions performed per second (Mips). Again this is not a very useful measure for number crunching as different processors need a different number of instructions to perform a useful calculation, e.g. a multiplication. For example workstations usually have risc-processors (reduced instruction set computer) which in average need slightly more instructions to perform floating point operations than the 'classical' main-frame processors.

    … second instruction on the first number at the same time as the first worker performs the first instruction on the second number, etc. For the first 20 steps no product is leaving your assembly line (no result is leaving your vector-processor) but after that you get a product (result) in each step, i.e. after a total of 120 steps your loop is processed! You gain a factor of about 17, i.e. if your scalar processor performs 1 Mflops, your vector-processor yields 17 Mflops, which would roughly correspond to the performance advertized by the computer manufacturer. However, any scientific program has instructions other than loops which do not vectorize and, therefore, the efficiency is in practice much lower! Depending on your problem and on …
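The "factor of about 17" in the excerpt follows directly from its own numbers: a 20-step pipeline and, implicitly, a loop over 100 numbers (120 total steps with nothing emerging during the first 20). A scalar processor would need 100 × 20 = 2000 steps, while the pipelined version needs about 20 + 100 = 120 steps, giving a speedup of roughly 2000 / 120 ≈ 17.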
  • Chapter 4 Data-Level Parallelism in Vector, SIMD, and GPU Architectures
    Computer Architecture: A Quantitative Approach, Fifth Edition. Chapter 4: Data-Level Parallelism in Vector, SIMD, and GPU Architectures. Copyright © 2012, Elsevier Inc. All rights reserved.

    Contents: 1. SIMD architecture. 2. Vector architectures optimizations: Multiple Lanes, Vector Length Registers, Vector Mask Registers, Memory Banks, Stride, Scatter-Gather. 3. Programming Vector Architectures. 4. SIMD extensions for media apps. 5. GPUs – Graphical Processing Units. 6. Fermi architecture innovations. 7. Examples of loop-level parallelism. 8. Fallacies.

    Classes of Computers. Flynn's Taxonomy: SISD - Single instruction stream, single data stream. SIMD - Single instruction stream, multiple data streams. New: SIMT - Single Instruction Multiple Threads (for GPUs). MISD - Multiple instruction streams, single data stream (no commercial implementation). MIMD - Multiple instruction streams, multiple data streams (tightly-coupled MIMD, loosely-coupled MIMD).

    Introduction. Advantages of SIMD architectures: 1. Can exploit significant data-level parallelism for matrix-oriented scientific computing and for media-oriented image and sound processors. 2. More energy efficient than MIMD: only needs to fetch one instruction per multiple data operations, rather than one instruction per data operation, which makes SIMD attractive for personal mobile devices. 3. Allows programmers to continue thinking sequentially. SIMD/MIMD comparison: potential speedup for SIMD twice that from MIMD; x86 processors expect two additional cores per chip per year and SIMD width to double every four years.

    SIMD parallelism. SIMD architectures: A. Vector architectures. B. SIMD extensions for mobile systems and multimedia applications. C. …
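To make the data-level parallelism discussed in this chapter concrete, here is a hedged C sketch (not from the book) of a SAXPY-style loop written two ways: as plain scalar code that a vectorizing compiler can turn into SIMD instructions, and explicitly with 128-bit SSE intrinsics that operate on four floats per instruction. It assumes an x86 target with SSE.

```c
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics; assumes an x86 target */

#define N 1024

/* y = a*x + y, scalar version: one element per loop iteration. */
static void saxpy_scalar(float a, const float *x, float *y, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Same loop with 128-bit SSE intrinsics: four elements per iteration.
 * n is assumed to be a multiple of 4 to keep the sketch short. */
static void saxpy_sse(float a, const float *x, float *y, int n) {
    __m128 va = _mm_set1_ps(a);
    for (int i = 0; i < n; i += 4) {
        __m128 vx = _mm_loadu_ps(&x[i]);
        __m128 vy = _mm_loadu_ps(&y[i]);
        _mm_storeu_ps(&y[i], _mm_add_ps(_mm_mul_ps(va, vx), vy));
    }
}

int main(void) {
    static float x[N], y1[N], y2[N];
    for (int i = 0; i < N; i++) { x[i] = (float)i; y1[i] = y2[i] = 1.0f; }
    saxpy_scalar(2.0f, x, y1, N);
    saxpy_sse(2.0f, x, y2, N);
    printf("y1[10]=%.1f  y2[10]=%.1f\n", y1[10], y2[10]);  /* both print 21.0 */
    return 0;
}
```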