Programming Environment, Run-Time System and Simulator for Many-Core Machines

Olivier Certner

Total Pages: 16

File Type: PDF, Size: 1020 KB

Programming Environment, Run-Time System and Simulator for Many-Core Machines

To cite this version: Olivier Certner. Programming Environment, Run-Time System and Simulator for Many-Core Machines. Distributed, Parallel, and Cluster Computing [cs.DC]. Université Paris Sud - Paris XI, 2010. English. tel-00826616.

HAL Id: tel-00826616
https://tel.archives-ouvertes.fr/tel-00826616
Submitted on 27 May 2013

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

UNIVERSITÉ de PARIS-SUD, U.F.R. Scientifique d'Orsay

Thesis presented to obtain the degree of Doctor in Sciences of the Université de Paris XI Orsay
Specialty: Computer Science
By Olivier CERTNER

Subject: Environnement de programmation, support à l'exécution et simulateur pour machines à grand nombre de cœurs (Programming Environment, Run-Time System and Simulator for Many-Core Machines)

Defended on 15 December 2010 before a jury composed of:

M. Lieven Eeckhout, Reviewer
M. François Bodin, Reviewer
M. Pascal Sainrat, Examiner
M. Jean-José Bérenguer, Examiner
M. Bruno Jégo, Examiner
M. Olivier Temam, Advisor

© 2010-2011, 2013 Olivier Certner. All rights reserved for all countries. Printed in France.

Revisions of this document:

0.1: First version released to the jury. It lacked the content of the "Distributed Work Management" chapter, the "General Conclusion", some appendices and a summary of the thesis in French.
0.2: Releases the contents of the "Distributed Work Management" chapter (Chapter 9) and of the "General Conclusion". An appendix containing some example Capsule code has been added.
1.0: The version released to the jury on the defense day. It includes minor modifications, among them most of the suggestions made by members of the jury.
1.1: Added dedications and acknowledgements.
1.2: Added an appendix (some example code for the Capsule distributed-memory run-time system) and an index. Reworked some figures. Minor corrections throughout the manuscript.

To my grandparents. To my teachers.

Résumé

The steady increase in micro-processor frequencies, and the large performance gains it had brought, came to an end in 2005. The other main hardware techniques for improving the performance of sequential programs (out-of-order execution, caches, branch prediction, etc.) have largely run out of steam. Since they lead to ever higher power consumption, they raise difficult economic and technical problems (chip cooling). For these reasons, micro-processor manufacturers have chosen to exploit the growing number of available transistors by placing several processor cores on the same chip. With this thesis, our goal and ambition is to prepare for the probable arrival, in the coming years, of multi-core processors with a large number of cores (a hundred or more).
To this end, our research has followed three main directions. First, we improve the Capsule parallelization environment, whose core principle is conditional parallelization, by adding robust task synchronization primitives and by enhancing its portability. We study its performance and show its gains over the usual approaches, both in terms of speed and of execution time stability. Second, the competition between many cores for access to memories of much lower performance will likely lead future multi-core architectures to distribute memory among the cores. We therefore adapt Capsule to this evolution by presenting a simple and general data model that allows the system to move data automatically according to the accesses performed by programs. New distributed and local algorithms are also proposed to decide on the actual creation of tasks and on their dispatching. Third, we develop a new discrete-event simulator, SiMany, able to handle hundreds to thousands of cores. We show that it faithfully reproduces the execution time variations of programs observed with a cycle-level simulator up to 64 cores. SiMany is at least 100 times faster than the best current flexible simulators. It enables the exploration of a broader range of architectures, as well as the study of the main trends of software behavior on them, making it a key tool for choosing and organizing future multi-core architectures and the software solutions that will exploit them.

Abstract

Since 2005, chip manufacturers have stopped raising processor frequencies, which had been the primary means of increasing processing power since the end of the 1990s. Other hardware techniques to improve sequential execution time (out-of-order processing, caches, branch prediction, etc.) have also shown diminishing returns, while raising the power envelope. For these reasons, commonly referred to as the frequency and power walls, manufacturers have turned to multiple processor cores to exploit the growing number of transistors available on a die.

In this thesis, we anticipate the probable evolution of multi-core processors into many-core ones, comprising hundreds of cores or more, by focusing on three important directions. First, we improve the Capsule programming environment, based on conditional parallelization, by adding robust coarse-grain synchronization primitives and by enhancing its portability. We study its performance and show its benefits over common parallelization approaches, both in terms of speedups and execution time stability.
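The conditional-parallelization principle mentioned above can be made concrete with a short example. A Capsule program declares potential parallelism at division points, but a task is actually spawned only when a probe of the run-time system succeeds, that is, when an execution resource is really available; otherwise the code simply runs sequentially. The sketch below illustrates this probe-then-divide pattern in portable C on POSIX threads; the function names (capsule_probe), the fixed context budget, and the grain size are illustrative assumptions, not the thesis' exact interface.

```c
/* Minimal sketch of Capsule-style conditional parallelization: before
 * splitting work, the program probes the run-time system and divides only
 * if an execution context is actually free.  Names and the fixed context
 * budget are assumptions for this example, not the thesis' real API. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NUM_CONTEXTS 4                    /* assumed spare hardware contexts */
static atomic_int free_contexts = NUM_CONTEXTS;

/* Probe: succeeds (and reserves a context) only if one is free. */
static int capsule_probe(void) {
    int n = atomic_load(&free_contexts);
    while (n > 0) {
        if (atomic_compare_exchange_weak(&free_contexts, &n, n - 1))
            return 1;                     /* context reserved: caller may divide */
    }
    return 0;                             /* no context: stay sequential */
}

struct range { const long long *a; long lo, hi; long long sum; };

static void *sum_task(void *arg);

/* Recursive work declaration: each level tries to divide; on a failed
 * probe it recurses sequentially, so the program degrades gracefully. */
static long long sum_range(const long long *a, long lo, long hi) {
    if (hi - lo < 1024) {                 /* grain size: sequential leaf */
        long long s = 0;
        for (long i = lo; i < hi; i++) s += a[i];
        return s;
    }
    long mid = lo + (hi - lo) / 2;
    if (capsule_probe()) {                /* divide only when probe succeeds */
        struct range r = { a, mid, hi, 0 };
        pthread_t t;
        pthread_create(&t, NULL, sum_task, &r);
        long long left = sum_range(a, lo, mid);
        pthread_join(&t, NULL);           /* coarse-grain synchronization */
        atomic_fetch_add(&free_contexts, 1);
        return left + r.sum;
    }
    return sum_range(a, lo, mid) + sum_range(a, mid, hi);
}

static void *sum_task(void *arg) {
    struct range *r = arg;
    r->sum = sum_range(r->a, r->lo, r->hi);
    return NULL;
}

int main(void) {
    enum { N = 1 << 16 };
    static long long a[N];
    for (long i = 0; i < N; i++) a[i] = i;
    printf("sum = %lld (expected %lld)\n",
           sum_range(a, 0, N), (long long)N * (N - 1) / 2);
}
```

The appeal of this pattern is graceful degradation: when no context is free, a division point costs only a failed probe, so the same binary adapts its degree of parallelism to the resources actually available at run time.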
Second, because of increased contention and the memory wall, many-core architectures are likely to become more distributed. We thus adapt Capsule to distributed-memory architectures by proposing a simple but general data structure model that allows the associated run-time system to automatically handle data location based on program accesses. New distributed and local schemes to implement conditional parallelization and work dispatching are also proposed.

Third, we develop a new discrete-event-based simulator, SiMany, able to sustain hundreds to thousands of cores with practical execution time. We show that it successfully captures the main program execution trends by comparing its results to those of a cycle-accurate simulator up to 64 cores. SiMany is more than a hundred times faster than the current best flexible approaches. It pushes further the limits of high-level architectural design-space exploration and software trend prediction, making it a key tool to design future many-core architectures and to assess software scalability on them.

Contents

Acknowledgements
Introduction

Part I. Capsule: Parallel Programming Made Easier

1 Parallel Programming Is Hard
  1.1 Work Splitting and Dispatching
  1.2 Working on the Same Data at the Same Time
    1.2.1 Atomic Operations and Mutual Exclusion Primitives
    1.2.2 Transactional Memory
  1.3 Task Dependencies
  1.4 The Capsule Environment

2 The Capsule Programming Model
  2.1 Tasks
  2.2 Conditional Parallelization
  2.3 Recursive Work Declaration
  2.4 Coarse-Grain Task Synchronization
  2.5 Other Primitives and Abstractions

3 Run-Time System Implementation
  3.1 Task Abstraction and Scheduling
  3.2 Conditional Parallelization
  3.3 Synchronization Groups
  3.4 Portability Abstractions

4 Performance Study
  4.1 Performance Scalability and Stability
    4.1.1 Motivating Example
    4.1.2 Benchmarks and Experimental Framework
    4.1.3 Iterative Execution Sampling
    4.1.4 Experimental Results
  4.2 Performance Dependence on the Run-Time Platform
    4.2.1 Run-Time System Implementation
    4.2.2 Hardware Architecture
    4.2.3 Task Granularity and Other Program Characteristics

5 Related Work
  5.1 Data Parallel Environments
    5.1.1 Languages
    5.1.2 STAPL
  5.2 Asynchronous Function Calls
    5.2.1 Cool
    5.2.2 Cilk
    5.2.3 Thread Building Blocks
  5.3 Parallel Semantics Through Shared-Objects
    5.3.1 Orca
    5.3.2 Charm++

6 Conclusion and Future Work

Part II. Distributed Architecture Support in Capsule

7 Towards Distributed Architectures

8 Distributed Data Structures
  8.1 Data Structure Model
    8.1.1 Concepts
    8.1.2 Programming Interface
  8.2 Implementation Traits
    8.2.1 Object Management Policy
    8.2.2 Object References
    8.2.3 Cell Structuration
    8.2.4 Hardware Support
  8.3 Status and Future Directions

9 Distributed Work Management
  9.1 Probe Policy and Task Queues
  9.2 Design Considerations
    9.2.1 Classical Strategies
    9.2.2 Push or Pull?
  9.3 Load-Balancing Policy
    9.3.1 Migration Decision Algorithm
    9.3.2 From Local Decisions to Global Balance
  9.4 Migration Protocol and Interactions With Task Queues
    9.4.1 Concurrent Task Migrations
    9.4.2 Preserving Locality

10 Related Work
  10.1 SPMD and Task-Based Languages
    10.1.1 SPMD Languages
    10.1.2 Cilk
  10.2 Memory Consistency
    10.2.1 Sequential Consistency
    10.2.2 Strong Ordering
    10.2.3 Practical Sufficient Conditions for Sequential Consistency
    10.2.4 Weak Ordering
    10.2.5 Processor Consistency
    10.2.6 Slow and Causal Memories
    10.2.7 Release Consistency
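SiMany, the simulator summarized in the abstract above, relies on discrete-event simulation: instead of advancing every core cycle by cycle, the simulator keeps a queue of timestamped events and repeatedly jumps to the earliest pending one, which is what makes hundreds to thousands of simulated cores tractable. The following self-contained C kernel is a minimal sketch of that general technique (a timestamp-ordered event heap driving per-core callbacks); the event structure and latency values are illustrative assumptions, not SiMany's actual implementation.

```c
/* Generic discrete-event simulation kernel of the kind SiMany builds on:
 * simulated time jumps from one timestamped event to the next instead of
 * being advanced cycle by cycle.  Illustrative sketch, not SiMany's code. */
#include <stdio.h>

typedef struct event {
    unsigned long long time;      /* simulated timestamp */
    int core;                     /* which simulated core the event targets */
    void (*action)(struct event *);
} event_t;

/* Binary min-heap of pending events, ordered by timestamp. */
static event_t heap[1024];
static int heap_len = 0;

static void push(event_t e) {
    int i = heap_len++;
    heap[i] = e;
    while (i > 0 && heap[(i - 1) / 2].time > heap[i].time) {
        event_t tmp = heap[i]; heap[i] = heap[(i - 1) / 2]; heap[(i - 1) / 2] = tmp;
        i = (i - 1) / 2;
    }
}

static event_t pop(void) {
    event_t top = heap[0];
    heap[0] = heap[--heap_len];
    int i = 0;
    for (;;) {
        int l = 2 * i + 1, r = 2 * i + 2, m = i;
        if (l < heap_len && heap[l].time < heap[m].time) m = l;
        if (r < heap_len && heap[r].time < heap[m].time) m = r;
        if (m == i) break;
        event_t tmp = heap[i]; heap[i] = heap[m]; heap[m] = tmp;
        i = m;
    }
    return top;
}

/* Example action: a core completes a task, whose cost is modeled as a
 * latency, and schedules its next one. */
static void run_task(event_t *e) {
    printf("t=%llu: core %d completes a task\n", e->time, e->core);
    if (e->time < 50) {                   /* stop condition for the demo */
        event_t next = { e->time + 7 + (unsigned)e->core, e->core, run_task };
        push(next);
    }
}

int main(void) {
    for (int c = 0; c < 4; c++)           /* 4 simulated cores */
        push((event_t){ .time = (unsigned long long)c, .core = c,
                        .action = run_task });
    while (heap_len > 0) {                /* main loop: always advance to */
        event_t e = pop();                /* the earliest pending event   */
        e.action(&e);
    }
}
```

With such a kernel, accuracy depends entirely on how well the event latencies model the target architecture, which is why the thesis validates SiMany against a cycle-level simulator up to 64 cores before using it at larger scales.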