5-GHz 32-bit Integer Execution Core in 130-nm Dual-VT CMOS

Total Pages: 16

File Type: PDF, Size: 1020 KB

IEEE JOURNAL OF SOLID-STATE CIRCUITS, VOL. 37, NO. 11, NOVEMBER 2002

Sriram Vangal, Member, IEEE, Mark A. Anders, Nitin Borkar, Erik Seligman, Venkatesh Govindarajulu, Vasantha Erraguntla, Member, IEEE, Howard Wilson, Amaresh Pangal, Venkat Veeramachaneni, James W. Tschanz, Yibin Ye, Dinesh Somasekhar, Member, IEEE, Bradley A. Bloechel, Associate Member, IEEE, Gregory E. Dermer, Ram K. Krishnamurthy, Member, IEEE, K. Soumyanath, Sanu Mathew, Siva G. Narendra, Mircea R. Stan, Senior Member, IEEE, Scott Thompson, Vivek De, Member, IEEE, and Shekhar Borkar

Manuscript received March 15, 2002; revised June 10, 2002. S. Vangal, M. A. Anders, N. Borkar, E. Seligman, V. Govindarajulu, V. Erraguntla, H. Wilson, A. Pangal, V. Veeramachaneni, J. W. Tschanz, Y. Ye, D. Somasekhar, B. A. Bloechel, G. E. Dermer, R. K. Krishnamurthy, K. Soumyanath, S. Mathew, S. G. Narendra, V. De, and S. Borkar are with the Intel Corporation, Circuits Research, Intel Labs, Hillsboro, OR 97124-6497 USA (e-mail: [email protected]). S. Thompson is with the Portland Technology Development, Intel Corporation, Hillsboro, OR 97124-6497 USA. M. R. Stan is with the Department of Electrical Engineering, University of Virginia, Charlottesville, VA 22904-4743 USA. Digital Object Identifier 10.1109/JSSC.2002.803944.

Abstract—A 32-bit integer execution core containing a Han-Carlson arithmetic-logic unit (ALU), an 8-entry × 2-ALU instruction scheduler loop, and a 32-entry 32-bit register file is described. In a 130-nm, six-metal, dual-VT CMOS technology, the 2.3-mm² prototype contains 160 K transistors. Measurements demonstrate capability for 5-GHz single-cycle integer execution at 25 °C. A single-ended, leakage-tolerant dynamic scheme used in the ALU and scheduler enables up to 9-wide ORs with 23% critical-path speed improvement and 40% active leakage power reduction when compared to a conventional Kogge–Stone implementation. On-chip body-bias circuits provide additional performance improvement or leakage tolerance. Stack node preconditioning improves ALU performance by 10%. At 5 GHz, ALU power is 95 mW at 0.95 V and the register file consumes 172 mW at 1.37 V. The ALU performance is scalable to 6.5 GHz at 1.1 V and to 10 GHz at 1.7 V, 25 °C.

Index Terms—CMOS integrated circuits, integrated circuit design, logic design, microprocessors, very-high-speed integrated circuits.

I. INTRODUCTION

OUT-OF-ORDER execution engines of superscalar processors require: 1) wide instruction schedulers capable of scheduling back-to-back instructions into multiple arithmetic-logic units (ALUs) in the execution core; 2) fast ALUs capable of executing these instructions with single-cycle latency and throughput; and 3) leakage-tolerant register files capable of feeding the ALU units. A high-speed execution core is therefore essential to maximize processor performance [1].

In this paper, we describe key components of a 5-GHz integer execution core: a 32-bit ALU, an 8-entry × 2-ALU instruction scheduler, and a 32-entry 32-bit register file (RF), fabricated in a 130-nm dual-VT CMOS technology [2]. High-speed single-ended dynamic circuit techniques enable the evaluation of complex (up to 2 × 9-way OR) logic operations while simultaneously achieving: 1) high noise robustness; 2) low active leakage power dissipation; 3) maximum low-VT usage; 4) a simplified 50% duty-cycle timing scheme with seamless scheduler/ALU interface time-borrowing; and 5) scalable performance up to 10 GHz, measured at 1.7 V, 25 °C. Stack node preconditioning enables further ALU performance improvement. In addition, the RF employs semi-dynamic flip-flops for increased speed and a static design for increased leakage tolerance. On-chip body-bias circuits are used to improve performance or reduce standby leakage power. The chip also supports full at-speed result capture and scan-out.

In the execution core, both the ALU and scheduler are organized as a loop [3], enabling single-cycle latency and throughput both for ALU operations and for resolving instruction dependencies and priorities (Fig. 1). The scheduler updates interinstruction dependency information each cycle, choosing the highest priority from among those instructions ready to execute. The chosen instruction controls ALU input selection as well as the RF address. The 32-bit ALU executes add/subtract operations each cycle, allowing the previous results to be used directly in the following cycle. This architecture enables fast parallel out-of-order execution in superscalar microprocessors.

Fig. 1. Out-of-order execution core.
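To make the loop behavior concrete, the following minimal Python sketch models a scheduler that each cycle issues the highest-priority ready instruction to a 32-bit add/subtract ALU, with the result visible to dependent instructions in the very next cycle. This is a behavioral illustration only, not the paper's circuitry: the instruction fields, the lowest-index priority rule, and the register names are assumptions made for the example.

```python
# Behavioral sketch of the single-cycle scheduler/ALU loop described above.
# Each cycle: pick the highest-priority ready instruction, execute it on the
# 32-bit ALU, and make the result usable by dependents in the next cycle.

MASK32 = 0xFFFFFFFF

def alu(op, a, b):
    """Single-cycle 32-bit add/subtract, the ALU operations named in the text."""
    return (a + b) & MASK32 if op == "add" else (a - b) & MASK32

def run_core(pool, regs, max_cycles=8):
    for cycle in range(max_cycles):
        # Ready: not yet executed and not waiting on any pending instruction.
        ready = [i for i, ins in enumerate(pool)
                 if not ins["done"] and all(pool[d]["done"] for d in ins["deps"])]
        if not ready:
            break
        sel = min(ready)  # stand-in for the dynamically controlled priority encoder
        ins = pool[sel]
        regs[ins["dst"]] = alu(ins["op"], regs[ins["src1"]], regs[ins["src2"]])
        ins["done"] = True  # dependents may issue in the very next cycle
        print(f"cycle {cycle}: issue I{sel}, {ins['dst']} = {regs[ins['dst']]:#010x}")

regs = {"r1": 5, "r2": 7, "r3": 0, "r4": 0}
pool = [
    {"op": "add", "src1": "r1", "src2": "r2", "dst": "r3", "deps": [],  "done": False},
    {"op": "sub", "src1": "r3", "src2": "r1", "dst": "r4", "deps": [0], "done": False},
]
run_core(pool, regs)
```

In this toy run, the dependent subtract issues one cycle after the add completes, mirroring the back-to-back single-cycle latency the loop organization is designed to provide.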
II. PROTOTYPE ARCHITECTURE

The core architecture contains: 1) three first-in first-out (FIFO) buffers, one FIFO for instructions and two FIFOs for data; 2) a tightly coupled RF-ALU-Scheduler loop; and 3) a FIFO to capture output results (Fig. 2) [4]. The core executes instructions stored in the 32-bit-wide, 4-deep circular instruction FIFO, operating at core speed. Data FIFOs (D0-D1) provide the desired operands. A central block forwards data and control signals to all units. RF-ALU instructions are single-cycle and can be scheduled back to back. A 416-bit-long input scan chain feeds the data and control words. Results are captured at speed by a 32-bit-wide, 4-deep result FIFO. Capture timing and interval are fully programmable via scan. Output results are scanned out using a 128-bit scan chain. The scan control block manages operations of all four FIFOs on chip. Separate power grids for the ALU, RF, and circuits in the rest of the core allow individual power measurement of each unit. In addition, the RF and ALU units have on-chip body-bias generator circuits to improve performance by applying 450 mV of forward body bias (FBB) to all pMOS devices during active operation. Body biases of high-VT and low-VT devices can be controlled separately. On-chip FBB can be disabled, and forward, zero, or reverse body bias can be applied externally to all nMOS and pMOS devices to improve performance or reduce standby leakage power. The chip organization provides flexibility to characterize individual units or the complete core.

Fig. 2. Block diagram of integer execution core and test circuits.

III. 8-ENTRY × 2 INSTRUCTION SCHEDULER

The instruction scheduler is capable of scheduling dependent instructions to two 32-bit ALUs, choosing one of eight potentially ready instructions to execute in each ALU per cycle. An instruction is ready for execution if it is not dependent upon results of any other pending instructions and has not been scheduled in the previous cycle. The scheduler is organized into 16 bit slices, with one ready-logic evaluation and one priority-encoder operation per bit slice (Fig. 3). The 15 dependencies for each of the 16 instructions currently in the pool are evaluated and stored in a 1-bit, 240-entry dependency matrix during the previous cycle. The ready logic resolves dependencies between the 16 instructions in the pool and two external dependency signals, essentially requiring an 18-way AND operation (Fig. 4). An 8-way AND priority encoder then chooses from among the ready instructions using dynamically controlled priorities and drives a 140-μm loopback bus into the ready logic and the shared ALU tri-state bus. The ready logic, using the priority encoder outputs from all other bit slices, determines if its instruction is dependent on any other instruction. The priority encoder, then, using the ready logic outputs only from the other 7 bit slices in its portion of the instruction queue, indicates if its instruction is the highest priority.

Fig. 3. Scheduler organization.

Fig. 4. Scheduler bitslice logic.
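As a functional illustration of the bit-slice logic just described, the sketch below computes one ready bit per pool entry from a stored dependency matrix plus two external dependency signals, and then lets an 8-way priority encoder grant one instruction per ALU. Signal names, the priority rule, and the exact composition of the 18-way AND are assumptions for the example; the circuit-level details (domino gates, loopback bus) are not modeled.

```python
# Functional sketch of the ready/priority computation described above.
# dep[i][j] == 1 means instruction i depends on instruction j (240 meaningful
# bits for a 16-entry pool: 15 per instruction). All names are illustrative.

N, HALF = 16, 8   # 16 bit slices; each ALU's priority encoder sees 8 of them

def ready_logic(dep, pending, ext0, ext1):
    """One ready bit per slice: effectively an 18-way AND of inverted blocking
    conditions (15 other-instruction dependencies, 2 external signals, plus the
    slice's own pending state, assumed here)."""
    ready = []
    for i in range(N):
        blocked = any(dep[i][j] and pending[j] for j in range(N) if j != i)
        ready.append(bool(pending[i]) and not blocked and not ext0 and not ext1)
    return ready

def priority_encode(ready, prio):
    """Within each 8-entry half (one per ALU), grant the ready slice that has
    the highest dynamically controlled priority value."""
    grant = [False] * N
    for base in (0, HALF):
        candidates = [i for i in range(base, base + HALF) if ready[i]]
        if candidates:
            grant[max(candidates, key=lambda i: prio[i])] = True
    return grant

# Example: I2 depends on I0; while I0 is still pending, I2 is not ready,
# so only I0 is granted this cycle.
dep = [[0] * N for _ in range(N)]
dep[2][0] = 1
pending = [1, 0, 1] + [0] * (N - 3)
ready = ready_logic(dep, pending, ext0=0, ext1=0)
print(priority_encode(ready, prio=list(range(N))))
```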
A domino implementation of the scheduler logic requires a fully dual-rail design, since both true and complementary domino-compatible inputs are required for both the ready logic as well as the priority encoder. An optimal dual-rail domino design requires 8 gate stages due to decreasing performance as evaluation stack heights are increased on the complementary path [Fig. 5(a)]. Fig. 5(b) shows the single-ended-to-domino-compatible complementary signal generator (CSG) based ready logic and priority encoder design, which eliminates the wide AND paths and realizes the complete critical path with single-ended 2 × 9-way (Fig. 6) and 8-way (Fig. 7) dynamic OR circuits, respectively. The CSG circuit enables domino-compatible dual-rail outputs but requires only a single-rail input. It contains two dynamic nodes, a traditional complementary dynamic node and a true dynamic node. Both nodes precharge using the same clock. During evaluation, one of these two nodes transitions low, causing the nonswitching node to be actively held by a pMOS device turned on by the evaluating node. These cross-coupled pMOS transistors provide additional noise immunity, allowing wider OR-gates than those possible when leakage is compensated only by a normal half-keeper. Dual-VT optimization is conducted for high performance and to meet target noise-margin constraints. High-VT is used on the 9- and 8-way domino-OR nMOS pull-down transistors and low-VT is used for all other transistors. The complete scheduler path requires only 6 gate stages, improving critical-path performance by 23% over the corresponding dual-rail implementation. Furthermore, the single-ended design achieves 67% layout area reduction and 25% loopback interconnect length reduction due to eliminating 50% of the scheduler logic transistors, enabling a dense layout occupying 210 μm × 210 μm.

Fig. 5. Scheduler designs. (a) Dual-rail domino. (b) Single-rail CSG.

Fig. 6. Scheduler ready logic CSG circuit.

Fig. 7. Scheduler priority encoder CSG circuit.
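For readers who want the CSG behavior at the logic level, here is a small truth-value sketch of the dual-rail-output, single-rail-input property described above. It models logic values only under stated assumptions: the keepers, device sizing, and electrical behavior of the two dynamic nodes are not represented, and the function and node names are illustrative, not the paper's.

```python
# Logic-level sketch of a CSG-based wide dynamic OR as described above.
# Two dynamic nodes precharge high; during evaluation exactly one discharges,
# and (in the real circuit) cross-coupled pMOS keepers hold the other high.

def csg_wide_or(inputs, clk):
    if clk == 0:                       # precharge phase
        comp_node, true_node = 1, 1    # both dynamic nodes held high
    else:                              # evaluation phase
        fire = any(inputs)             # wide single-rail OR pull-down
        comp_node = 0 if fire else 1   # conventional domino node discharges on OR = 1
        true_node = 1 if fire else 0   # the other node discharges only when OR = 0
    # Output inverters give domino-compatible (low in precharge) dual-rail outputs.
    return int(not comp_node), int(not true_node)   # (OR, complement of OR)

print(csg_wide_or([0] * 9, clk=1))                      # -> (0, 1)
print(csg_wide_or([0, 0, 1, 0, 0, 0, 0, 0, 0], clk=1))  # -> (1, 0)
print(csg_wide_or([1] * 9, clk=0))                      # -> (0, 0) during precharge
```

The point of the sketch is simply that both polarities are available to downstream domino stages while the gate itself is driven by single-rail inputs, which is what removes the dual-rail duplication from the scheduler critical path.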