IBM POWER5 Chip: A Dual-Core Multithreaded Processor


Featuring single- and multithreaded execution, the Power5 provides higher performance in the single-threaded mode than its Power4 predecessor at equivalent frequencies. Enhancements include dynamic resource balancing to efficiently allocate system resources to each thread, software-controlled thread prioritization, and dynamic power management to reduce power consumption without affecting performance.

Ron Kalla, Balaram Sinharoy, Joel M. Tendler (IBM)

IBM introduced Power4-based systems in 2001.¹ The Power4 design integrates two processor cores on a single chip, a shared second-level cache, a directory for an off-chip third-level cache, and the necessary circuitry to connect it to other Power4 chips to form a system. The dual-processor chip provides natural thread-level parallelism at the chip level. Additionally, the Power4's out-of-order execution design lets the hardware bypass instructions whose operands are not yet available (perhaps because of an earlier cache miss during register loading) and execute other instructions whose operands are ready. Later, when the operands become available, the hardware can execute the skipped instruction. Coupled with a superscalar design, out-of-order execution results in higher instruction execution parallelism than otherwise possible.

The Power5 is the next-generation chip in this line. One of our key goals in designing the Power5 was to maintain both binary and structural compatibility with existing Power4 systems to ensure that binaries continue executing properly and all application optimizations carry forward to newer systems. With that base requirement, we specified increased performance and other functional enhancements of server virtualization, reliability, availability, and serviceability at both chip and system levels. In this article, we describe the approach we used to improve chip-level performance.

Multithreading

Conventional processors execute instructions from a single instruction stream. Despite microarchitectural advances, execution unit utilization remains low in today's microprocessors. It is not unusual to see average execution unit utilization rates of approximately 25 percent across a broad spectrum of environments. To increase execution unit utilization, designers use thread-level parallelism, in which the physical processor core executes instructions from more than one instruction stream. To the operating system, the physical processor core appears as if it is a symmetric multiprocessor containing two logical processors. There are at least three different methods for handling multiple threads.

In coarse-grained multithreading, only one thread executes at any instance. When a thread encounters a long-latency event, such as a cache miss, the hardware swaps in a second thread to use the machine's resources, rather than letting the machine remain idle. By allowing other work to use what otherwise would be idle cycles, this scheme increases overall system throughput. To conserve resources, both threads share many system resources, such as architectural registers. Hence, swapping program control from one thread to another requires several cycles. IBM implemented coarse-grained multithreading in the IBM eServer pSeries Model 680.²

A variant of coarse-grained multithreading is fine-grained multithreading. Machines of this class execute threads in successive cycles, in round-robin fashion.³ Accommodating this design requires duplicate hardware facilities. When a thread encounters a long-latency event, its cycles remain unused.

Finally, in simultaneous multithreading (SMT), as in other multithreaded implementations, the processor fetches instructions from more than one thread.⁴ What differentiates this implementation is its ability to schedule instructions for execution from all threads concurrently. With SMT, the system dynamically adjusts to the environment, allowing instructions to execute from each thread if possible, and allowing instructions from one thread to utilize all the execution units if the other thread encounters a long-latency event.

The Power5 design implements two-way SMT on each of the chip's two processor cores. Although a higher level of multithreading is possible, our simulations showed that the added complexity was unjustified. As designers add simultaneous threads to a single physical processor, the marginal performance benefit decreases. In fact, additional multithreading might decrease performance because of cache thrashing, as data from one thread displaces data needed by another thread.
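To make the utilization argument concrete, the toy single-issue model below simulates one and two threads sharing an issue slot; all parameters (one slot, a miss every 20 instructions, a 50-cycle penalty) are invented for illustration and are not Power5 numbers. When one thread stalls on a modeled miss, the second thread's ready instructions fill the otherwise idle cycles, which is the effect SMT exploits.

```c
/* Toy model of execution-slot utilization, single-threaded vs. two-way SMT.
 * All numbers are invented for illustration; this is not a model of the
 * real Power5 pipeline. */
#include <stdio.h>

#define CYCLES       100000
#define MISS_EVERY   20   /* one long-latency event per 20 instructions */
#define MISS_PENALTY 50   /* stall cycles per event */

struct thread { int stall, count; };

static double run(int nthreads) {
    struct thread th[2] = {{0, 0}, {0, 0}};
    long issued = 0;
    for (long c = 0; c < CYCLES; c++) {
        int slot_free = 1;                 /* one issue slot per cycle */
        for (int i = 0; i < nthreads; i++) {
            if (th[i].stall > 0) {         /* thread is waiting on a miss */
                th[i].stall--;
            } else if (slot_free) {        /* a ready thread fills the slot */
                slot_free = 0;
                issued++;
                if (++th[i].count % MISS_EVERY == 0)
                    th[i].stall = MISS_PENALTY;
            }
        }
    }
    return (double)issued / CYCLES;
}

int main(void) {
    printf("single-thread slot utilization: %.0f%%\n", 100.0 * run(1));
    printf("two-way SMT slot utilization:   %.0f%%\n", 100.0 * run(2));
    return 0;
}
```

With these made-up parameters the single-threaded case issues 20 instructions per 70 cycles, landing near the roughly 25 percent utilization cited above, while the second thread recovers much of the stalled time.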
Power5 system structure

Figure 1 shows the high-level structures of Power4- and Power5-based systems. The Power4 handles up to a 32-way symmetric multiprocessor. Going beyond 32 processors increases interprocessor communication, resulting in high traffic on the interconnection fabric. This can cause greater contention and negatively affect system scalability. Moving the level-three (L3) cache from the memory side to the processor side of the fabric lets the Power5 more frequently satisfy level-two (L2) cache misses with hits in the 36-Mbyte off-chip L3 cache, avoiding traffic on the interchip fabric. References to data not resident in the on-chip L2 cache cause the system to check the L3 cache before sending requests onto the interconnection fabric. Moving the L3 cache provides significantly more cache on the processor side than previously available, thus reducing traffic on the fabric and allowing Power5-based systems to scale to higher levels of symmetric multiprocessing. Initial Power5 systems support 64 physical processors.

Figure 1. Power4 (a) and Power5 (b) system structures.

The Power4 includes a 1.41-Mbyte on-chip L2 cache. Power4+ chips are similar in design to the Power4 but are fabricated in 130-nm technology rather than the Power4's 180-nm technology. The Power4+ includes a 1.5-Mbyte on-chip L2 cache, whereas the Power5 supports a 1.875-Mbyte on-chip L2 cache. Power4 and Power4+ systems both have 32-Mbyte L3 caches, whereas Power5 systems have a 36-Mbyte L3 cache.

The L3 cache operates as a backdoor with separate buses for reads and writes that operate at half processor speed. In Power4 and Power4+ systems, the L3 was an inline cache for data retrieved from memory. Because of the higher transistor density of the Power5's 130-nm technology, we could move the memory controller on chip and eliminate a chip previously needed for the memory controller function. These two changes in the Power5 also have the significant side benefits of reducing latency to the L3 cache and main memory, as well as reducing the number of chips necessary to build a system.

Chip overview

Figure 2 shows the Power5 chip, which IBM fabricates using silicon-on-insulator (SOI) devices and copper interconnect. SOI technology reduces device capacitance to increase transistor performance.⁵ Copper interconnect decreases wire resistance and reduces delays in wire-dominated chip-timing paths. In 130-nm lithography, the chip uses eight metal levels and measures 389 mm².

Figure 2. Power5 chip (FXU = fixed-point execution unit, ISU = instruction sequencing unit, IDU = instruction decode unit, LSU = load/store unit, IFU = instruction fetch unit, FPU = floating-point unit, and MC = memory controller).

The Power5 processor supports the 64-bit PowerPC architecture. A single die contains two identical processor cores, each supporting two logical threads. This architecture makes the chip appear as a four-way symmetric multiprocessor to the operating system. The two cores share a 1.875-Mbyte (1,920-Kbyte) L2 cache. We implemented the L2 cache as three identical slices with separate controllers for each. The L2 slices are 10-way set-associative with 512 congruence classes of 128-byte lines. The data's real address determines which L2 slice the data is cached in. Either processor core can independently access each L2 controller.

We also integrated the directory for an off-chip 36-Mbyte L3 cache on the Power5 chip. Having the L3 cache directory on chip allows the processor to check the directory after an L2 miss without experiencing off-chip delays. To reduce memory latencies, we integrated the memory controller on the chip. This eliminates driver and receiver delays to an external controller.
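As a concrete check of the L2 geometry described above, the sketch below splits a real address into a line offset and a congruence class and verifies the stated capacity. The slice-selection function is a placeholder assumption for illustration; the article says only that the data's real address determines the slice.

```c
/* Decode a real address against the Power5 L2 geometry described above:
 * 3 slices, 10 ways, 512 congruence classes, 128-byte lines. The slice
 * hash below is a guess for illustration; the article does not give it. */
#include <stdio.h>
#include <stdint.h>

#define LINE_BYTES 128   /* 7 offset bits */
#define SETS       512   /* 9 index bits (congruence classes) */
#define WAYS       10
#define SLICES     3

int main(void) {
    uint64_t addr   = 0x2DEADBEC0ULL;       /* example real address */
    uint64_t offset = addr % LINE_BYTES;    /* byte within the 128 B line */
    uint64_t line   = addr / LINE_BYTES;    /* cache-line address */
    uint64_t set    = line % SETS;          /* congruence class */
    uint64_t slice  = line % SLICES;        /* hypothetical slice select */

    printf("offset=%llu  set=%llu  slice=%llu\n",
           (unsigned long long)offset, (unsigned long long)set,
           (unsigned long long)slice);

    /* 3 slices x 10 ways x 512 classes x 128 B = 1,920 KB = 1.875 MB */
    long bytes = (long)SLICES * WAYS * SETS * LINE_BYTES;
    printf("L2 capacity = %ld KB = %.3f MB\n",
           bytes / 1024, bytes / (1024.0 * 1024.0));
    return 0;
}
```

The capacity arithmetic confirms why the text quotes 1.875 Mbytes as 1,920 Kbytes: three 640-Kbyte slices of 10 ways by 512 classes by 128 bytes.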
Processor core

We designed the Power5 processor core to support both enhanced SMT and single-threaded (ST) operation modes. Figure 3 shows the Power5's instruction pipeline, which is identical to the Power4's. All pipeline latencies in the Power5, including the branch misprediction penalty and load-to-use latency with an L1 data cache hit, are the same as in the Power4. The identical pipeline structure lets optimizations designed for Power4-based systems perform equally well on Power5-based systems. Figure 4 shows the Power5's instruction flow diagram.

Figure 3. Power5 instruction pipeline, with branch-redirect, load/store, fixed-point, and floating-point pipes (stages as labeled in the original figure: IF, IC, BP, D0-D3, Xfer, GD, MP, ISS, RF, EX, EA, DC, Fmt, F6, WB, and CP).

In SMT mode, the Power5 uses two separate instruction fetch address registers to store the program counters for the two threads. Instruction fetches (IF stage) alternate between the two threads. In ST mode, the Power5 uses only one program counter and can fetch instructions for that thread every cycle. It can fetch up to eight instructions from the instruction cache (IC stage) every cycle. The two threads share the instruction cache and the instruction translation facility. In a given cycle, all fetched instructions come from the same thread.
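A minimal sketch of the fetch policy just described, with the two instruction fetch address registers modeled as an array of program counters: strict alternation, the PC values, and the fixed eight-instruction fetch group are simplifying assumptions for illustration, not IBM's implementation.

```c
/* Sketch of Power5-style instruction fetch: in SMT mode, fetch alternates
 * between two program counters; in ST mode, one thread fetches every cycle.
 * All instructions fetched in a cycle come from a single thread. */
#include <stdio.h>

#define FETCH_WIDTH 8   /* up to eight instructions per cycle (IC stage) */

enum mode { ST, SMT };

static void fetch_cycles(enum mode m, int cycles) {
    unsigned long pc[2] = {0x1000, 0x2000};  /* fetch address registers */
    for (int c = 0; c < cycles; c++) {
        int t = (m == SMT) ? (c & 1) : 0;    /* alternate threads in SMT */
        printf("cycle %d: fetch %d instrs for thread %d at PC 0x%lx\n",
               c, FETCH_WIDTH, t, pc[t]);
        pc[t] += FETCH_WIDTH * 4;            /* 4-byte PowerPC instructions */
    }
}

int main(void) {
    puts("-- ST mode --");
    fetch_cycles(ST, 4);
    puts("-- SMT mode --");
    fetch_cycles(SMT, 4);
    return 0;
}
```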
Recommended publications
  • Ray Tracing on the Cell Processor
    Carsten Benthin, Ingo Wald, Michael Scherbaum, Heiko Friedrich (inTrace Realtime Ray Tracing GmbH; SCI Institute, University of Utah; Saarland University). Abstract: Over the last three decades, higher CPU performance has been achieved almost exclusively by raising the CPU's clock rate. Today, the resulting power consumption and heat dissipation threaten to end this trend, and CPU designers are looking for alternative ways of providing more compute power. In particular, they are looking towards three concepts: a streaming compute model, vector-like SIMD units, and multi-core architectures. One particular example of such an architecture is the Cell Broadband Engine Architecture (CBEA), a multi-core processor that offers a raw compute power of up to 200 GFlops per 3.2 GHz chip. The Cell bears a huge potential for compute-intensive applications like ray tracing, but also requires addressing the challenges caused by this processor's unconventional architecture. From the introduction: at least for specialized tasks such as triangle rasterization, these concepts have been proven as very powerful, and have made modern GPUs as fast as they are; for example, a Nvidia 7800 GTX offers 313 GFlops [16], 35 times more than a 2.2 GHz AMD Opteron CPU. Today, there seems to be a convergence between CPUs and GPUs, with GPUs (already using the streaming compute model, SIMD, and multi-core) becoming increasingly programmable, and CPUs getting equipped with more and more cores and streaming functionalities. Most commodity CPUs already offer 2-8 cores [1, 9, 10, 25], and desktop PCs with more cores can be built by stacking and interconnecting smaller multi-core systems.
  • IBM System p5 Quad-Core Module Based on POWER5+ Technology: Technical Overview and Introduction
    Giuliano Anselmi, Bernard Filhol, SahngShin Kim, Gregor Linzmeier, Ondrej Plachy, Scott Vetter (IBM Redpaper, © IBM Corp. 2006). The quad-core module (QCM) is based on the well-known POWER5 dual-core module (DCM) technology. The dual-core POWER5 processor and the dual-core POWER5+ processor are packaged with the L3 cache chip into a cost-effective DCM package. The QCM is a package that enables entry-level or midrange IBM System p5 servers to achieve additional processing density without increasing the footprint. Figure 1 shows the DCM and QCM physical views and the basic internal architecture: the POWER5+ dual-core module holds one dual-core chip plus one 36 MB L3 cache chip, while the POWER5+ quad-core module holds two dual-core chips plus two 36 MB L3 cache chips.
  • Implementing PowerPC Linux on System i Platform
    Yessong Johng, Erwin Earley, Rico Franke, Vlatko Kosturjak (IBM Redbooks, SG24-6388-01, Second Edition, February 2007; applies to i5/OS V5R4, SLES10, and RHEL4). Covers planning and configuring Linux servers on the IBM System i platform, a Linux distribution installation guide, and tips for running Linux servers on System i. Contents include an introduction to Linux on the System i platform (System i concepts and terminology, the Hardware Management Console, the Virtual Partition Manager), differences between POWER5-based System i and previous System i models, Linux enhancements on POWER5/POWER5+, and configuration planning.
  • From Blue Gene to Cell: Power.org, Moscow, JSCC Technical Day, November 30, 2005
    Dr. Luigi Brochard, IBM Distinguished Engineer, Deep Computing Architect (IBM eServer pSeries presentation). Technology trends: with frequency increases limited by power, dual core is a way to double peak performance per chip (and per cycle), but at the expense of frequency (around 20 percent down); another way is to increase flops per cycle. IBM innovations: FMA with POWER in 1990 (2 flops/cycle/chip), double FMA with POWER2 in 1992 (4 flops/cycle/chip), dual core with POWER4 in 2001 (8 flops/cycle/chip), quad-core modules with POWER5 in October 2005 (16 flops/cycle/module); in PowerPC, VMX with the PPC 970FX in 2003 (8 flops/cycle/core, 32-bit only) and dual VMX plus FMA with the PPC 970MP in 1Q06; Blue Gene's low-frequency system-on-a-chip with tight integration of thousands of CPUs; and Cell's 8 SIMD units plus a PPC 970-class core on a chip (64 flops/cycle/chip). As needs diversify, systems are heterogeneous and distributed, and grid technologies are essential to cooperative, standards-based environments: IBM sponsors the Globus Alliance, contributes to the open-source Globus Toolkit, is a founding member of the Globus Consortium, and extends its products with multiplatform, multicluster GPFS global file systems and multiplatform metaschedulers.
  • IBM PowerPC 970 (a.k.a. G5)
    David Benham and Yu-Chung Chen (UIC, Department of Computer Science, CS 466). PPC 970FX overview: 64-bit RISC; 58 million transistors; 512 KB of L2 cache and 96 KB of L1 cache; 90 nm process with a die size of 65 mm²; native 32-bit compatibility; maximum clock speed of 2.7 GHz; SIMD instruction set (AltiVec); 42 watts at 1.8 GHz (1.3 volts); peak data bandwidth of 6.4 GB per second. The PowerPC line is a product of the AIM alliance (Apple, IBM, Motorola) formed in 1991: PPC 601 (G1, 1993), PPC 603 (G2, 1995), PPC 750 (G3, 1997), PPC 7400 (G4, 1999), PPC 970 (G5, 2002); the alliance dissolved in 2005. Core details: a 16-stage (integer) to 25-stage (vector) pipeline; a large number of in-flight instructions, with a theoretical limit of 215; L1 split into a 64 KB I-cache and 32 KB D-cache; 10 execution units (2 load/store, 2 fixed-point register-register, 2 floating-point, 1 branch, 1 condition register, 1 vector permute, 1 vector ALU); 32 64-bit general-purpose registers, 32 64-bit floating-point registers, and 32 128-bit vector registers. Benchmarks: SPEC CPU2000 on an IBM eServer BladeCenter JS20 with a 2.2 GHz PPC 970 (SPECint2000 base 986, peak 1040; SPECfp2000 base 1178, peak 1241) versus a Dell PowerEdge 1750 with a 3.06 GHz Xeon (SPECint2000 base 1031, peak 1067; SPECfp2000 base 1030, peak 1044), plus BLAST (bioinformatics), Amber/jac (structural biology), and a CFD lab code.
  • IBM Power Systems Performance Report Apr 13, 2021
    IBM Power Performance Report, Power7 to Power10 (September 8, 2021). Table of contents: introduction to performance of IBM UNIX, IBM i, and Linux operating system servers; SPEC CPU benchmark performance (Linux multi-user SPEC CPU2017 on Power10 and Power9; AIX and Linux multi-user SPEC CPU2006 on Power7, Power7+, and Power8); AIX multi-user performance (rPerf) on Power7 through Power10, including non-default processor power mode settings and Capacity Upgrade on Demand relative performance guidelines; CPW benchmark performance; SPECjbb2015 benchmark performance on Power8 and Power9; and AIX SAP Standard Application Benchmark performance (SAP Sales and Distribution two-tier).
  • IBM's POWER5 Micro Processor Design and Methodology
    Ron Kalla, IBM Systems Group (presented at UT CS352, April 2005). Outline: POWER5 overview, design process, and power. POWER server roadmap: POWER4 (180 nm, 2001), POWER4+ (130 nm, 2002-3), POWER5 (130 nm, 2004), POWER5+ (90 nm, 2005), and POWER6 (65 nm, 2006, ultra-high-frequency cores), adding simultaneous multithreading, sub-processor partitioning, dynamic firmware updates, dynamic LPARs, enhanced scalability and parallelism, an enhanced memory subsystem, and autonomic computing enhancements along the way. POWER5 technology: 130 nm lithography, Cu, SOI; 389 mm²; 276 M transistors; dual processor core; 8-way superscalar; simultaneous multithreaded (SMT) core with up to 2 virtual processors per real processor.
  • Microprocessor Report: POWER5 Tops on Bandwidth
    Kevin Krewell, Microprocessor Report (www.MPRonline.com, 12/22/03-02): "POWER5 Tops on Bandwidth — IBM's Design Is Still Elegant, But Itanium Provides Competition." On large multiprocessing systems, the dominant attributes needed to assure good incremental processor performance often include memory coherency issues, aggregate memory bandwidth, and I/O performance. The Power4 processor, now shipping from IBM, nicely balances integration with performance: an eight-issue superscalar core, 12.8 GB/s of memory bandwidth, 1.5 MB of L2 cache, and 128 MB of external L3 cache. The recently introduced Power5 steps up integration and performance by integrating the distributed switch fabric between memory controller and core/caches (see MPR 10/14/03-01, "IBM Raises Curtain on Power5"). The Power5 has an on-die memory controller that will support both DDR and DDR2 SDRAM memory, and each processor core is now multithreaded (see MPR 9/08/03-02, "IBM Previews Power5"). Its buses are eight bytes wide, offering more than 4 GB/s of bandwidth per bus; the fast buses enable what IBM terms aggressive cache-to-cache transfers, and this tightly coupled multiprocessing lets processing threads operate in parallel over the processor array. With eight modules, Power5 supports up to 128-way multithreaded processing, and the bus structure and distributed switch fabric also allow IBM to create a 64-way (processor cores) configuration. IBM has packed four processor dies (eight processor cores) and four L3 cache chips on one 95 × 95 mm MCM, seen in Figure 1.
  • Introduction to the Cell Multiprocessor
    J. A. Kahle, M. N. Day, H. P. Hofstee, C. R. Johns, T. R. Maeurer, D. Shippy. This paper provides an introductory overview of the Cell multiprocessor. Cell represents a revolutionary extension of conventional microprocessor architecture and organization. The paper discusses the history of the project, the program objectives and challenges, the design concept, the architecture and programming models, and the implementation. Initial discussion on the collaborative effort to develop Cell began with support from the CEOs of Sony and IBM: Sony as a content provider and IBM as a leading-edge technology and server company. Collaboration was initiated among SCEI (Sony Computer Entertainment Incorporated), IBM for microprocessor development, and Toshiba as a development and high-volume manufacturing technology partner, leading to high-level architectural discussions among the three companies during the summer of 2000. During a critical meeting in Tokyo, it was determined that traditional architectural organizations would not deliver the computational power that SCEI sought, pointing beyond conventional processors in order to provide the required computational density and power efficiency. After several months of architectural discussion and contract negotiations, the STI (SCEI–Toshiba–IBM) Design Center was formally opened in Austin, Texas, on March 9, 2001, representing a joint design investment of about $400,000,000; separate joint collaborations were also set in place for process technology development. A number of key elements drove the design's success, first among them a holistic design approach encompassing processor architecture, hardware implementation, system structures, and software programming models.
  • IBM Power Roadmap
    Jim McInnes (jimm@ca.ibm.com), Compiler Optimization, IBM Canada Toronto Software Lab (© 2007 IBM). POWER6 processor and systems. The author is a technical leader in the compiler optimization team, a focal point to the hardware development team, and a member of the Power ISA Architecture Board; for each new microarchitecture he helps direct the design toward helpful features and designs and delivers compiler optimizations to enable hardware exploitation. POWER5 chip overview: high-frequency dual-core chip; 8 execution units (2 LS, 2 FP, 2 FX, 1 BR, 1 CR); 1.9 MB on-chip shared L2 (point of coherency, 3 slices); on-chip L3 directory and controller; on-chip memory controller; 130 nm lithography, Cu, SOI; 276 M transistors, 389 mm²; 2,313 signal and 3,057 power/ground I/Os. POWER6 chip overview: ultra-high-frequency dual-core chip; 8 execution units (2 LS, 2 FP, 2 FX, 1 BR, 1 VMX); 2 × 4 MB on-chip L2 (point of coherency, 4 quads); on-chip L3 directory and controller; two on-chip memory controllers.
  • POWER8: The First OpenPOWER Processor
    Dr. Michael Gschwind, Senior Technical Staff Member & Senior Manager, IBM Power Systems (OpenPOWER Summit, #OpenPOWERSummit). OpenPOWER is about choice in large-scale data centers: the choice to differentiate (build workload-optimized solutions from best-of-breed open-ecosystem components), to innovate (collaborative innovation and open interfaces instead of technology scaling), and to grow (delivered system performance and new capabilities). Power is optimized for server workloads; POWER8 was optimized to simplify application porting and includes CAPI, the Coherent Accelerator Processor Interface, building on a long history of IBM workload acceleration. POWER8 processor: 12 cores (SMT8), 96 threads per chip; 2× internal data flows/queues; 64 KB data and 32 KB instruction caches; 512 KB SRAM L2 per core; 96 MB shared eDRAM L3; up to 128 MB off-chip eDRAM L4; accelerators for crypto and memory expansion, transactional memory, VMM assist, data move/VM mobility, and CAPI. POWER8 core: up to eight hardware threads per core (SMT8); 8-wide dispatch and 10-wide issue; 16 execution pipes (2 FXU, 2 LSU, 2 LU, 4 FPU, 2 VMX, 1 crypto, 1 DFU, 1 CR, 1 BR); larger issue queues (4 × 16-entry); larger global completion and load/store reorder queues; improved branch prediction, unaligned storage access, and data prefetch. The architecture adds high-performance little-endian support as the foundation for a new ecosystem, plus instruction fusion for organic application growth.
  • Power8 Quser Mspl Nov 2015 Handout
    Mark Olson (olsonm@us.ibm.com), IBM Power Systems, "Power Systems Hardware: Today and Tomorrow," November 2015. Covers the POWER8 chip and the processor technology roadmap: POWER4 (180 nm, 2001), POWER5 (130 nm, 2004), POWER6 (65 nm, 2007), POWER7 (45 nm, 2010), POWER8 (22 nm, 2014), with POWER9 and later generations to follow. Processor chip comparisons:

                  POWER5     POWER6     POWER7        POWER7+       POWER8
    Year          2004       2007       2010          2012          2014
    Technology    130nm SOI  65nm SOI   45nm SOI      32nm SOI      22nm SOI
                                        + eDRAM       + eDRAM       + eDRAM
    Cores         2          2          8             8             12
    Threads/core  SMT2       SMT2       SMT4          SMT4          SMT8
    On-chip cache 1.9MB L2   8MB L2     2+32MB L2+L3  2+80MB L2+L3  6+96MB L2+L3
    Off-chip      36MB L3    32MB L3    none          none          128MB L4
    Sust. mem. BW 15GB/s     30GB/s     100GB/s       100GB/s       230GB/s
    Peak I/O      6GB/s      20GB/s     40GB/s        40GB/s        96GB/s

    Corresponding die statistics run from 245 mm² and 276 M transistors (POWER5+, 90 nm, 1.9 GHz) through 341 mm² and 790 M (POWER6, 4-5 GHz) and 567 mm² (POWER7/POWER7+) to 650 mm² and 4.2 B transistors (POWER8, up to 4.1 GHz), with per-core L2 growing from a 1.9 MB shared L2 (POWER5+) to 512 KB per core plus on-chip L3 (POWER8).