Improving the Scalability of Multicore Systems with a Focus on H.264 Video Decoding


Improving the Scalability of Multicore Systems With a Focus on H.264 Video Decoding

PROEFSCHRIFT (doctoral dissertation) submitted for the degree of doctor at Delft University of Technology, under the authority of the Rector Magnificus, Prof. ir. K.Ch.A.M. Luyben, chairman of the Board for Doctorates, to be defended in public on Friday, 9 July 2010 at 15:00 by Cornelis Hermanus MEENDERINCK, electrical engineer, born in Amsterdam, the Netherlands.

This dissertation was approved by the promotors: Prof. dr. B.H.H. Juurlink and Prof. dr. K.G.W. Goossens.

Composition of the doctoral committee:
Rector Magnificus, chairman - Delft University of Technology
Prof. dr. B.H.H. Juurlink, promotor - Technische Universität Berlin
Prof. dr. K.G.W. Goossens, promotor - Delft University of Technology
Prof. dr. H. Corporaal - Eindhoven University of Technology
Prof. dr. ir. A.J.C. van Gemund - Delft University of Technology
Dr. H.P. Hofstee - IBM Systems and Technology Group
Prof. dr. K.G. Langendoen - Delft University of Technology
Dr. A. Ramirez - Universitat Politècnica de Catalunya

Prof. dr. B.H.H. Juurlink was employed at Delft University of Technology until the end of December 2009 and, as supervisor, contributed substantially to the completion of this dissertation.

ISBN: 978-90-72298-08-9

Keywords: multicore, Chip MultiProcessor (CMP), scalability, power efficiency, thread-level parallelism (TLP), H.264, video decoding, instruction set architecture, domain-specific accelerator, task management

Acknowledgements: This work was supported by the European Commission in the context of the SARC integrated project #27648 (FP6).

Copyright © 2010 C.H. Meenderinck. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without permission of the author. Printed in the Netherlands.

Dedicated to my wife for loving and supporting me.

Abstract

In pursuit of ever-increasing performance, more and more processor architectures have become multicore processors. As clock frequency was no longer increasing rapidly and ILP techniques showed diminishing returns, increasing the number of cores per chip was the natural choice. The transistor budget is still increasing, and thus it is expected that within ten years chips can contain hundreds of high-performance cores. Scaling the number of cores, however, does not necessarily translate into an equal scaling of performance. In this thesis, we propose several techniques to improve the performance scalability of multicore systems. With these techniques we address several key challenges of the multicore era.

First, we investigate the effect of the power wall on future multicore architectures. Our model includes predictions of technology improvements, analysis of symmetric and asymmetric multicores, as well as the influence of Amdahl's Law.

Second, we investigate the parallelization of the H.264 video decoding application, thereby addressing application scalability. Existing parallelization strategies are discussed and a novel strategy is proposed. Analysis shows that using the new parallelization strategy, the amount of available parallelism is on the order of thousands. Several implementations of the strategy are discussed, which show both the difficulty and the possibility of actually exploiting the available parallelism.

Third, we propose an Application Specific Instruction set Processor (ASIP) for H.264 decoding, based on the Cell SPE.
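The abstract above mentions the influence of Amdahl's Law on future multicores. As a quick illustration of why the serial fraction matters so much for multicore scaling, here is a minimal sketch; the parameter values are illustrative, not taken from the thesis's model:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Speedup under Amdahl's Law for a workload whose parallel
    fraction scales perfectly across `cores`."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# Even with 99% of the work parallelized, 256 cores give nowhere near
# a 256x speedup -- the 1% serial part dominates.
for n in (4, 16, 64, 256):
    print(n, round(amdahl_speedup(0.99, n), 1))
```

This is exactly why the thesis pairs core-count scaling with application-level parallelization work: without driving the serial fraction down, adding cores quickly stops paying off.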
ASIPs are energy efficient and allow performance scaling in systems that are limited by the power budget.

Finally, we propose hardware support for task management, whose benefits are two-fold. First, it supports the SARC programming model, which is a task-based dataflow programming model based on StarSs. By providing hardware support for the most time-consuming part of the runtime system, it improves scalability. Second, it reduces parallelization overhead, such as synchronization, by providing fast hardware primitives.

Acknowledgments

Many people contributed to this work in some way and I owe them a debt of gratitude. To reach this point it takes professors who teach you, colleagues who assist you, and family and friends who support you.

First of all, I thank Ben Juurlink for supervising me. You helped focus this work, while at the same time pointing me to numerous potential ideas. You showed me what others had done in our research area and how we might either build on that or fill a gap. You suggested that I visit BSC and collaborate closely with them. You also supported me when I wanted to attend HiPEAC meetings. All those trips, where I met other researchers, other ideas, and other scientific work, have been of great value.

I am also thankful to Stamatis Vassiliadis for giving me the opportunity to work on the SARC project. I profited from the synergy within such a large project. The shift to the SARC project also meant the end of the collaboration with Sorin Cotofana. Sorin, I thank you for the time we worked together. You were the one who introduced me to academia, you taught me how to write papers, and I still remember the conversations we had about life, religion, gastronomy, and jazz. I also thank Kees Goossens for being one of my promotors.

Over the years I have worked together with several other PhD students and some MSc students.
I thank Arnaldo and Mauricio for all the work we did together on parallelizing H.264 and porting it to the Cell processor. I thank Alejandro, Felipe, Augusto, and David for all the effort they put into the simulator and for helping me use and extend it. I thank Efren for implementing the Nexus system in the simulator. I thank Carsten, Martijn, and Chi for doing their MSc projects with us, by which you helped me and others. I thank Pepijn for helping me with Linux issues and for being a pleasant roommate.

I thank Alex Ramirez and Xavi Martorell for warmly welcoming me in Barcelona, for the collaboration, and for your insight. My visits to Barcelona bore a lot of fruit. They helped me put my research in a larger perspective; they both broadened and deepened my knowledge, and taught me how to enjoy life in an intense way.

I thank Jan Hoogerbrugge for the valuable discussions on parallelizing H.264. I thank Stefanos Kaxiras for his input on the methodology used to estimate the power consumption of future multicores. I thank Peter Hofstee and Brian Flachs for commenting on the SPE specialization. I thank Ayal Zaks for helping me with compiler issues. I thank Yoav Etsion for the discussions and collaboration on hardware dependency resolution.

Secretaries and coffee: they always seem to go together. Whether I was in Delft or in Barcelona, the secretary's office was always my source of coffee and a chat. Lidwina and Monique, thank you for the coffee, the chats, and for helping me with the administrative work. Lourdes, thank you for the warm welcome at BSC, the coffee, and the conversations.

I thank the European Network of Excellence on High Performance and Embedded Architecture and Compilation (HiPEAC) for financing the collaboration with BSC and some trips to the HiPEAC workshops and conferences.

I am very grateful to my parents, who raised me, who loved me, and who supported me continuously throughout the years.
I thank God for giving me the capability to do a PhD and for being my guidance through life. Many, many thanks to my dearest wife, the princess of my life. Thank you for loving me, for supporting me, and for taking this wonderful journey of life together with me.

C.H. Meenderinck
Delft, The Netherlands, 2010

Contents

Abstract . . . i
Acknowledgments . . . iii
List of Figures . . . xii
List of Tables . . . xiv
Abbreviations . . . xv

1 Introduction . . . 1
  1.1 Motivation . . . 2
  1.2 Objectives . . . 7
  1.3 Organization and Contributions . . . 9

2 (When) Will Multicores Hit the Power Wall? . . . 11
  2.1 Methodology . . . 13
  2.2 Scaling of the Alpha 21264 . . . 13
  2.3 Power and Performance Assuming Perfect Parallelization . . . 18
  2.4 Intermezzo: Amdahl vs. Gustafson . . . 20
  2.5 Performance Assuming Non-Perfect Parallelization . . . 25
  2.6 Conclusions . . . 32

3 Parallel Scalability of Video Decoders . . . 35
  3.1 Overview of the H.264 Standard . . . 36
  3.2 Benchmarks . . . 41
  3.3 Parallelization Strategies for H.264 . . . 42
    3.3.1 GOP-level Parallelism . . . 43
    3.3.2 Frame-level Parallelism . . . 44
    3.3.3 Slice-level Parallelism . . . 44
    3.3.4 Macroblock-level Parallelism . . . 45
    3.3.5 Block-level Parallelism . . . 48
  3.4 Scalable MB-level Parallelism: The 3D-Wave . . . 49
  3.5 Parallel Scalability of the Dynamic 3D-Wave . . . 52
  3.6 Case Study: Mobile Video . . . 62
  3.7 Experiences with Parallel Implementations . . . 63
    3.7.1 3D-Wave on a TriMedia-based Multicore . . . 65
    3.7.2 2D-Wave on a Multiprocessor System . . . 67
    3.7.3 2D-Wave on the Cell Processor . . . 68
  3.8 Conclusions . . . 70

4 The SARC Media Accelerator . . . 73
  4.1 Related Work . . . 74
  4.2 The SPE Architecture . . . 76
  4.3 Experimental Setup . . . 79
    4.3.1 Benchmarks . . . 79
    4.3.2 Compiler . . . 80
    4.3.3 Simulator . . . 81
  4.4 Enhancements to the SPE Architecture . . . 84
    4.4.1 Accelerating Scalar Operations . . . 85
    4.4.2 Accelerating Saturation and Packing . . . 92
    4.4.3 Accelerating Matrix Transposition . . . 99
    4.4.4 Accelerating Arithmetic Operations . . . 106
    4.4.5 Accelerating Unaligned Memory Accesses . . . 124
  4.5 Performance Evaluation . . . 127
    4.5.1 Results for the IDCT8 Kernel . . . 127
    4.5.2 Results for the IDCT4 Kernel . . .
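The 2D-Wave and 3D-Wave entries in Chapter 3 refer to macroblock-level parallelism, where (as described in the H.264 parallelization literature) a macroblock can be decoded once its left and upper-right neighbors are done. A minimal sketch of the resulting wavefront schedule follows; the frame dimensions are illustrative (a Full HD frame of 120x68 macroblocks):

```python
from collections import Counter

def wavefront_schedule(mbs_wide, mbs_high):
    """Earliest decode step for each macroblock under the 2D-Wave:
    MB(x, y) needs MB(x-1, y) (left) and MB(x+1, y-1) (upper-right)."""
    step = {}
    for y in range(mbs_high):
        for x in range(mbs_wide):
            left = step.get((x - 1, y), -1)
            upper_right = step.get((x + 1, y - 1), -1)
            step[(x, y)] = max(left, upper_right) + 1
    return step

# Full HD is 120x68 macroblocks; the busiest step has 60 MBs in flight,
# i.e. intra-frame parallelism is bounded by roughly mbs_wide / 2.
sched = wavefront_schedule(120, 68)
peak = max(Counter(sched.values()).values())
print(peak)
```

The cap of about half the frame width per frame is what motivates the 3D-Wave, which additionally overlaps macroblocks from consecutive frames to reach the thousands-level parallelism claimed in the abstract.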
Recommended publications
  • Introduction to the Cell Multiprocessor
    Introduction to the Cell multiprocessor. J. A. Kahle, M. N. Day, H. P. Hofstee, C. R. Johns, T. R. Maeurer, D. Shippy.

    This paper provides an introductory overview of the Cell multiprocessor. Cell represents a revolutionary extension of conventional microprocessor architecture and organization. The paper discusses the history of the project, the program objectives and challenges, the design concept, the architecture and programming models, and the implementation.

    Introduction: History of the project. Initial discussion on the collaborative effort to develop Cell began with support from CEOs from the Sony and IBM companies: Sony as a content provider and IBM as a leading-edge technology and server company. Collaboration was initiated among SCEI (Sony Computer Entertainment Incorporated), IBM, for microprocessor development, and Toshiba, as a development and high-volume manufacturing technology partner. This led to high-level architectural discussions among the three companies during the summer of 2000. During a critical meeting in Tokyo, it was determined that traditional architectural organizations would not deliver the computational power that SCEI sought [...] processors in order to provide the required computational density and power efficiency. After several months of architectural discussion and contract negotiations, the STI (SCEI-Toshiba-IBM) Design Center was formally opened in Austin, Texas, on March 9, 2001. The STI Design Center represented a joint investment in design of about $400,000,000. Separate joint collaborations were also set in place for process technology development.

    A number of key elements were employed to drive the success of the Cell multiprocessor design. First, a holistic design approach was used, encompassing processor architecture, hardware implementation, system structures, and software programming models.
  • Download Presentation
    Computer architecture in the past and next quarter century. CNNA 2010, Berkeley, Feb 3, 2010. H. Peter Hofstee, Cell/B.E. Chief Scientist, IBM Systems and Technology Group.

    CMOS Microprocessor Trends, The First ~25 Years ("Good old days"): single-thread performance grew through more active transistors and higher frequency, until about 2005.

    [SPECINT chart, from Hennessy and Patterson, Computer Architecture: A Quantitative Approach, 4th edition, 2006: performance vs. VAX-11/780 grew 25%/year from 1978 to 1986 (VAX), 52%/year from 1986 to 2002 (RISC + x86), and an uncertain ??%/year since.]

    CMOS devices hit a scaling wall. [Power density chart, Isaac et al., IBM: active and passive power density approach the air-cooling limit as gate length shrinks from 1 micron toward 0.01 micron, 1994-2004.]

    Microprocessor trends: single-thread performance flattens around 2005; multi-core continues the performance trend toward 2015(?).

    Multicore Power server processors: Power 4 (2001, introduces dual core), Power 5 (2004, dual core, 4 threads), Power 7 (2009, 8 cores, 32 threads).

    Why are (shared memory) CMPs dominant? A new system delivers nearly twice the throughput performance of the previous one without application-level changes. Applications do not degrade in performance when ported (to a next-generation processor).
  • Keir, Paul G (2012) Design and Implementation of an Array Language for Computational Science on a Heterogeneous Multicore Architecture
    Keir, Paul G (2012) Design and implementation of an array language for computational science on a heterogeneous multicore architecture. PhD thesis. http://theses.gla.ac.uk/3645/

    Copyright and moral rights for this thesis are retained by the author. A copy can be downloaded for personal non-commercial research or study, without prior permission or charge. This thesis cannot be reproduced or quoted extensively from without first obtaining permission in writing from the author. The content must not be changed in any way or sold commercially in any format or medium without the formal permission of the author. When referring to this work, full bibliographic details including the author, title, awarding institution and date of the thesis must be given. Glasgow Theses Service, http://theses.gla.ac.uk/, [email protected]

    Design and Implementation of an Array Language for Computational Science on a Heterogeneous Multicore Architecture. Paul Keir. Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy, School of Computing Science, College of Science and Engineering, University of Glasgow, July 8, 2012. © Paul Keir, 2012.

    Abstract: The packing of multiple processor cores onto a single chip has become a mainstream solution to fundamental physical issues relating to the microscopic scales employed in the manufacture of semiconductor components. Multicore architectures provide lower clock speeds per core, while aggregate floating-point capability continues to increase. Heterogeneous multicore chips, such as the Cell Broadband Engine (CBE) and modern graphics chips, also address the related issue of an increasing mismatch between high processor speeds and huge latency to main memory.
  • Introducing the Cell Processor
    Programming the Cell Processor: For Games, Graphics, and Computation, by Matthew Scarpino (1st ed.).

    Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals. The author and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.

    The publisher offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales, which may include electronic versions and/or custom covers and content particular to your business, training goals, marketing focus, and branding interests. For more information, please contact: U.S. Corporate and Government Sales, (800) 382-3419, [email protected]. For sales outside the United States please contact: International Sales, [email protected]. Visit us on the Web: www.informit.com/ph

    Editor-in-Chief: Mark Taub. Acquisitions Editor: Bernard Goodwin. Managing Editor: Kristy Hart. Project Editor: Betsy Harris. Copy Editor: Keith Cline. Indexer: Erika Millen. Proofreader: Editorial Advantage. Technical Reviewers: Duc Vianney, Sean Curry, Hema Reddy, Michael Kistler, Michael Brutman, Brian Watt, Yu Dong Yang, Daniel Brokenshire, Nicholas Blachford, Alex Chungen Chow.

    Library of Congress Cataloging-in-Publication Data: Scarpino, Matthew, 1975- Programming the Cell processor : for games, graphics, and computation / Matthew Scarpino. — 1st ed. p. cm.
  • Real-Time Supercomputing and Technology for Games and Entertainment H
    Real-time Supercomputing and Technology for Games and Entertainment. H. Peter Hofstee, Ph.D., Cell BE Chief Scientist, IBM Systems and Technology Group, SCEI/Sony Toshiba IBM (STI) Design Center, Austin, Texas. © 2006 IBM Corporation.

    Collaborative Innovation: Gaming Triple Crown. All IBM-designed processors! All Power Architecture(TM) based!

    [SPECINT chart, from Hennessy and Patterson, Computer Architecture: A Quantitative Approach, 4th edition, 2006: performance vs. VAX-11/780 grew 25%/year from 1978 to 1986 (VAX), 52%/year from 1986 to 2002 (RISC + x86), and ??%/year from 2002 to present. The sea change in chip design: multiple "cores" or processors per chip.]

    Microprocessor Trends: single-thread performance is power limited; multi-core extends throughput performance; hybrid extends single-thread performance and efficiency.

    Traditional (very good!) general purpose processor: IBM Power5+. Cell Broadband Engine(TM): a heterogeneous multi-core architecture (Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc.). Memory-managing processor vs. traditional general purpose processor: Cell BE vs. AMD, IBM, Intel.

    Cell Broadband Engine Architecture(TM) (CBEA) Technology Competitive Roadmap: Cell BE (1+8) at 90nm SOI (2006), then 65nm SOI for cost reduction; Cell BE (1+8 eDP SPE) at 65nm SOI for performance enhancements/advanced scaling; next generation (2 PPE' + 32 SPE') at 45nm SOI, ~1 TFlop (est.). All future dates and specifications are estimations only; subject to change without notice.
  • The Cell Broadband Engine: Exploiting Multiple Levels of Parallelism in a Chip Multiprocessor
    RC24128 (W0610-005) October 2, 2006. Computer Science. IBM Research Report: The Cell Broadband Engine: Exploiting Multiple Levels of Parallelism in a Chip Multiprocessor. Michael Gschwind, IBM Research Division, Thomas J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598.

    Research Division: Almaden - Austin - Beijing - Haifa - India - T. J. Watson - Tokyo - Zurich. LIMITED DISTRIBUTION NOTICE: This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties). Copies may be requested from IBM T. J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598 USA (email: [email protected]). Some reports are available on the internet at http://domino.watson.ibm.com/library/CyberDig.nsf/home.

    Abstract: As CMOS feature sizes continue to shrink and traditional microarchitectural methods for delivering high performance (e.g., deep pipelining) become too expensive and power-hungry, chip multiprocessors (CMPs) become an exciting new direction by which system designers can deliver increased performance. Exploiting parallelism in such designs is the key to high performance, and we find that parallelism must be exploited at multiple levels of the system: the thread-level parallelism that has become popular in many designs fails to exploit all the levels of available parallelism in many workloads for CMP systems.
  • Heterogeneous Processors: the Cell Broadband Engine (Plus Processor History & Outlook)
    Heterogeneous Processors: The Cell Broadband Engine (plus processor history & outlook). IEEE SSC/CAS Joint Chapter, Jan. 18, 2011. H. Peter Hofstee, IBM Austin Research Laboratory.

    CMOS Microprocessor Trends, The First ~25 Years ("Good old days"): single-thread performance grew through more active transistors and higher frequency, until about 2005.

    [SPECINT chart, from Hennessy and Patterson, Computer Architecture: A Quantitative Approach, 4th edition, 2006: performance vs. VAX-11/780 grew 25%/year from 1978 to 1986 (VAX), 52%/year from 1986 to 2002 (RISC + x86).]

    1971 - 2000, A Comparison:
                          1971 (Intel 4004)     2000 (Intel Pentium III Xeon)
      Technology          10 micron (PMOS)      180 nm (CMOS)
      Voltage             15 V                  1.7 V (0.27 V)
      #Transistors        2,312                 28M (69M)
      Frequency           740 KHz               600 MHz - 1 GHz (41 MHz)
      Cycles per Inst.    8                     ~1
      Chip size           11 mm2                106 mm2
      Power               0.45 W                20.4 W
      Power density       0.04 W/mm2            0.18 W/mm2
      Inst/(Hz * #tr)     5.4 E-5               3.6 E-8

    ~2004: CMOS devices hit a scaling wall. [Power density chart, Isaac et al., IBM: active and passive power density approach the air-cooling limit as gate length shrinks from 1 micron toward 0.01 micron, 1994-2004.]

    Microprocessor Trends: single-thread performance (more active transistors, higher frequency) flattens around 2005; multi-core takes over.
  • CELL Processor
    Sony/Toshiba/IBM (STI) CELL Processor. Scientific Computing for Engineers: Spring 2008.

    Nec Hercules Contra Plures: a chip's performance grows roughly with the square root of its area (Pollack's Rule), so one large core on a given silicon budget delivers far less aggregate throughput than many small ones (Cray's two oxen vs. Cray's 1024 chickens).

    Three Performance-Limiting Walls:

    Power Wall: Increasingly, microprocessor performance is limited by achievable power dissipation rather than by the number of available integrated-circuit resources (transistors and wires). Thus, the only way to significantly increase the performance of microprocessors is to improve power efficiency at about the same rate as the performance increase.

    Frequency Wall: Conventional processors require increasingly deeper instruction pipelines to achieve higher operating frequencies. This technique has reached a point of diminishing returns, and even negative returns if power is taken into account.

    Memory Wall: On multi-gigahertz symmetric multiprocessors - even those with integrated memory controllers - latency to DRAM memory is currently approaching 1,000 cycles. As a result, program performance is dominated by the activity of moving data between main storage (the effective-address space that includes main memory) and the processor.

    Conventional Processors Don't Cut It: "... shallower pipelines with in-order execution have proven to be the most area and energy efficient. [...] we believe the efficient building blocks of future architectures are likely to be simple, modestly pipelined (5-9 stages) processors, floating point units, vector, and SIMD processing elements. Note that these constraints fly in the face of the conventional wisdom of simplifying parallel programming by using the largest processors available."
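The oxen-vs-chickens argument above rests on Pollack's Rule. A small sketch makes it concrete; the square-root scaling and the perfect-parallelism assumption are the rule's standard idealizations, not figures from the slides:

```python
import math

def pollack_perf(area_units):
    """Pollack's Rule: single-core performance grows roughly as the
    square root of the core's area (arbitrary units)."""
    return math.sqrt(area_units)

# One huge core vs. many small ones on the same silicon budget:
one_ox = pollack_perf(1024)             # a single 1024-unit core: 32x base
many_chickens = 1024 * pollack_perf(1)  # 1024 unit cores, perfectly parallel: 1024x
print(one_ox, many_chickens)
```

The 32x-vs-1024x gap is the whole motivation for chip multiprocessors; of course, the small-core side only wins when the workload actually exposes that much parallelism, which is the power-wall-plus-Amdahl tension the thesis analyzes.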
  • HC17.S1T1 a Novel SIMD Architecture for the Cell Heterogeneous Chip-Multiprocessor
    Broadband Processor Architecture: A novel SIMD architecture for the Cell heterogeneous chip-multiprocessor. Michael Gschwind, Peter Hofstee, Brian Flachs, Martin Hopkins, Yukio Watanabe, Takeshi Yamazaki. © 2005 IBM Corporation.

    Acknowledgements: Cell is the result of a partnership between SCEI/Sony, Toshiba, and IBM. Cell represents the work of more than 400 people starting in 2000 and a design investment of about $400M.

    Cell Design Goals: Provide the platform for the future of computing: 10x the performance of desktop systems shipping in 2005, and the ability to reach 1 TF with a 4-node Cell system. Computing density as the main challenge: dramatically increase performance per X, where X = area, power, volume, cost, ... Single-core designs offer diminishing returns on investment in power, area, design complexity, and verification cost. Exploit application parallelism to provide a quantum leap in performance.

    Shifting the Balance of Power with Cell: Today's architectures are built on a 40-year-old data model: efficiency as defined in 1964, big overhead per data operation, data parallelism added as an afterthought. Cell provides parallelism at all levels of system abstraction: thread-level parallelism (multi-core design approach), instruction-level parallelism (statically scheduled and power aware), and data parallelism (data-parallel instructions). A data processor instead of a control system.

    Cell Features: a heterogeneous multi-core system architecture with a Power Processor Element for control tasks and Synergistic Processor Elements for data-intensive processing (each SPE comprising an SPU with SXU, local store (LS), and SMF; 16B/cycle).
  • An Open Source Environment for Cell Broadband Engine System Software
    RC24296 (W0603-061) March 8, 2006. Computer Science. IBM Research Report: An Open Source Environment for Cell Broadband Engine System Software. Michael Gschwind, IBM Research Division, Thomas J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598; David Erb, Sidney Manning, Mark Nutter, IBM Research Division, Austin Research Laboratory, 11501 Burnet Road, Austin, TX 78758.

    An Open Source Environment for Cell Broadband Engine System Software. Michael Gschwind, IBM T.J. Watson Research Center; David Erb, Sid Manning, and Mark Nutter, IBM Austin.

    Abstract: The Cell Broadband Engine provides the first implementation of a chip multiprocessor with a significant number of general-purpose programmable cores, targeting [...] magnitude over that of desktop systems shipping in 2005 [10, 9, 6]. To meet that goal, designers had to optimize performance against area, power, volume, and cost in a manner not possible with legacy architectures.
  • Tutorial Hardware and Software Architectures for the CELL
    IBM Systems and Technology Group. Tutorial: Hardware and Software Architectures for the CELL BROADBAND ENGINE processor. Michael Day, Peter Hofstee, IBM Systems & Technology Group, Austin, Texas. CODES+ISSS Conference, September 2005. © 2005 IBM Corporation.

    Agenda: Trends in Processors and Systems; Introduction to Cell; Challenges to processor performance; Cell Broadband Engine Architecture (CBEA); Cell Broadband Engine (CBE) Processor Overview; Cell Programming Models; Prototype Software Environment; TRE Demo.

    [Chart: Processor performance over time, single-precision Flops, 1993-2004; game processors take the lead on media performance over PC processors.]

    System trends toward integration: the Memory / Northbridge / Processor / Southbridge / IO organization converges to Memory / Next-Gen Processor / Accel / IO, with an implied loss of system configuration flexibility; one must compensate with generality of the acceleration function to maintain market size.

    Motivation: Cell Goals. Outstanding performance, especially on game/multimedia applications (challenges: Memory Wall, Power Wall, Frequency Wall). Real-time responsiveness to the user and the network (challenges: real-time in an SMP environment,
  • Der Cell Prozessor
    Der Cell Prozessor (The Cell Processor). Seminar presentation by Viacheslav Mlotok, SS2005, Universität Mannheim, Chair of Computer Architecture (Lehrstuhl für Rechnerarchitektur).

    Contents: development goals; Cell architecture (Power Processor Element, Synergistic Processor Element, Memory Interface Controller, Bus Interface Controller, Element Interconnect Bus); Software Cells; application and product examples; conclusion; references.

    Development goals (source: [9]): The main goal is a new architecture for the efficient processing of next-generation multimedia and graphics data. Building a modern RISC processor - large cache; out-of-order execution, speculative execution, hardware branch prediction, predication, etc.; superscalar architecture, deep pipelines - yields only about a 40% performance gain for a doubling of the transistor count. Performance per transistor is therefore the metric: simplify the microarchitecture and shift complexity to software; no cache, only a fast local store; a large general-purpose register file, with data fetch and branch prediction in software; multicore instead of deep pipelines.

    Cell architecture (source: [8]): a joint development by IBM, Sony, and Toshiba; 90nm SOI, 8 metal layers; 234