Evaluation of Architectural Paradigms for Addressing the Processor

Total Pages: 16

File Type: PDF, Size: 1020 KB

Identifying Performance Bottlenecks on Modern Microarchitectures using an Adaptable Probe

Gorden Griem*, Leonid Oliker*, John Shalf*, and Katherine Yelick*+
*Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720
+Computer Science Division, University of California, 387 Soda Hall #1776, Berkeley, CA 94720
{ggriem, loliker, jshalf, kayelick}@lbl.gov

Abstract

The gap between peak and delivered performance for scientific applications running on microprocessor-based systems has grown considerably in recent years. The inability to achieve the desired performance even on a single processor is often attributed to an inadequate memory system, but without identification or quantification of a specific bottleneck. In this work, we use an adaptable synthetic benchmark to isolate application characteristics that cause a significant drop in performance, giving application programmers and architects information about possible optimizations. Our adaptable probe, called sqmat, uses only four parameters to capture key characteristics of scientific workloads: working-set size, computational intensity, indirection, and irregularity. This paper describes the implementation of sqmat and uses its tunable parameters to evaluate four leading 64-bit microprocessors that are popular building blocks for current high performance systems: Intel Itanium2, AMD Opteron, IBM Power3, and IBM Power4.

1. INTRODUCTION

There is a growing gap between the peak speed of microprocessor-based systems and the delivered performance for scientific computing applications. This gap has raised the importance of developing better benchmarking methods to improve performance understanding and prediction, while identifying hardware and application features that work well or poorly together.

Benchmarks are typically designed with two competing interests in mind – capturing real application workloads and identifying specific architectural bottlenecks that inhibit performance. Benchmarking suites like the NAS Parallel Benchmarks [7] and the Standard Performance Evaluation Corporation (SPEC) [1] emphasize the first goal of representing real applications, but they are typically too large to run on simulated architectures and are too complex to employ for identification of specific architectural bottlenecks. Likewise, the complexity of these benchmarks can often end up measuring the quality of the compiler's optimizations as much as it does the underlying hardware architecture. At the other extreme, microbenchmarks such as STREAM [6] are used to measure the performance of a specific feature of a given computer architecture. Such synthetic benchmarks are often easier to optimize so as to minimize the dependence on the maturity of the compiler technology. However, the simplicity and narrow focus of these codes often makes it quite difficult to relate them to real application codes. Indeed, it is rare that such probes offer any predictive value for the performance of full-fledged scientific applications. Benchmarks often present a narrow view of a broad, multi-dimensional parameter space of machine characteristics. We therefore differentiate a "probe" from a "microbenchmark" or synthetic benchmark on the basis that the latter typically offers a single-valued result in order to rank processor performance consecutively – a few points of reference in a multidimensional space. A probe, by contrast, is used to explore a continuous, multidimensional parameter space. The probe's parameterization helps the researcher uncover the peaks and valleys in a continuum of performance characteristics and explore the ambiguities of computer architectural comparisons that cannot be captured by a single-valued ranking.

In this paper, we introduce the sqmat probe [9], which attempts to bridge the gap between these competing requirements. It maintains the simplicity of a microbenchmark while offering four distinct parameters to capture different types of application workloads: working-set size (parameter "N"), computational intensity (parameter "M"), indirection (parameter "I"), and irregularity (parameter "S").

By varying the parameters of sqmat, one can capture the memory system behavior of a more diverse set of algorithms, as shown in Table 1. With a high computational intensity (M), the benchmark matches the characteristics of dense linear solvers that can be tiled into matrix-matrix operations (the so-called "BLAS-3" operations). For example, PARATEC [10] is a material science application that performs ab-initio quantum-mechanical total energy calculations using pseudopotentials and a plane wave basis set. This code relies on BLAS-3 libraries with direct memory addressing, thus having a high computational intensity, little indirection, and low irregularity. However, not all dense linear algebra problems can be tiled in this manner; instead they are organized as dense matrix-vector (BLAS-2) or vector-vector (BLAS-1) operations, which require fewer operations on each element. This behavior is captured in sqmat by reducing the computational intensity, possibly in combination with a reduced working-set size (N).

Indirection, sometimes called scatter/gather style memory access, occurs in sparse linear algebra, particle methods, and grid-based applications with irregular domain boundaries. Most are characterized by noncontiguous memory access, thus placing additional stress on memory systems that rely on large cache lines to mask memory latency. The amount of irregularity varies enormously in practice. For example, sparse matrices that arise in Finite Element applications often contain small dense sub-blocks, which cause a string of consecutive indexes in an indirect access of a sparse matrix-vector product (SPMV). Table 1 shows an example of SPMV where one-third of the data are irregularly accessed; in general, the irregularity (S) would depend on the sparse matrix structure. Another example of algorithmic irregularity can be found in GTC, a magnetic fusion code that solves the gyro-averaged Vlasov-Poisson system of equations using the particle-in-cell (PIC) approach [11]. PIC is a sparse method for calculating particle motion and is characterized by relatively low computational intensity, array indirection, and high irregularity. In both cases, the stream of memory accesses may contain sequences of contiguous memory accesses broken up by random-access jumps.

Table 1: Mapping Sqmat parameters onto algorithms

                 M   N   CI (orig:sqmat)   S   % irregular
    DAXPY        1   1   0.5:0.5           -   0%
    DGEMM        1   4   3.5:3.5           -   0%
    MADCAP [12]  2   4   7.5:7.0           -   0%
    SPMV         1   4   3.5:3.5           3   33%

Figure 1: Example of Sqmat indirection for S=4 (an eight-entry pointer array addresses entries 1-4 and 73-76 of the floating-point memory array: contiguous runs broken by an irregular jump).

In this paper, we describe the implementation of the sqmat probe and focus on how its four parameters enable us to evaluate the behavior of four microprocessors that are popular building blocks for current high performance machines. The processors are compared on the basis of the delivered percentage of peak performance rather than absolute performance, so as to limit the bias inherent in comparisons between different generations of microprocessor implementations. We evaluate these processors and isolate architectural features responsible for performance bottlenecks, giving application developers valuable hints on where to optimize their codes. Future work will focus on correlating sqmat parameters across a spectrum of scientific applications.

2. SQMAT OVERVIEW

The Sqmat benchmark is based on matrix multiplication and is therefore related to the Linpack benchmark and to linear algebra solvers in general. The Linpack benchmark is used to rank the machines of the Top500 [5] supercomputer list, although the benchmark reflects only a narrow class of large […] the results are written back to memory. We use a Java program to generate optimally hand-unrolled C code [9], greatly reducing the influence of the C compiler's code generation and thereby making sure that the hardware architecture rather than the compiler is benchmarked. The innermost loop is unrolled enough to ensure that all available floating-point registers are occupied by operands during each cycle of the loop. The unrolling is not so extreme as to cause processor stalls, either due to the increased number of instructions or additional branch-prediction penalties. If enough registers are available on the target machine, several matrices are squared at the same time. Since squaring the matrix cannot be done in situ, an input and an output matrix are needed for the computation; thus a total of 2·N² registers are required per matrix.

For direct access, each double-precision floating-point value has to be loaded and stored, creating 8 bytes of memory-load and 8 bytes of memory-store traffic. For indirect access, the value and the pointer have to be loaded. As we always use 64-bit pointers in 64-bit mode, each entry then creates 16 bytes of memory-load and 8 bytes of memory-store traffic.

To allow a comparison between the different architectures, we introduce the algorithmic peak performance (AP) metric. The AP is defined as the performance that could be achieved on the underlying hardware for a given algorithm if all the floating-point units were optimally used. The AP is always equal to or less than the machine peak performance (MP). For example, some architectures support floating-point multiply-add instructions (FMA) and only achieve their peak rated flop rate when this instruction is used. However, since a scalar multiply can […]
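The generated kernels themselves are not part of this excerpt; what follows is a minimal C sketch of the Sqmat computation as described above, under stated assumptions: the names (NN, MM, sqmat_one) are illustrative, and plain loop nests stand in for the register-blocked, hand-unrolled code that the Java generator emits. It shows how the parameters act: each N×N matrix is loaded either directly or through an index array (indirection), squared M times in a working buffer, and written back.

    /* Minimal sketch of the Sqmat kernel (illustrative only; the real probe
     * uses Java-generated, fully unrolled C, and the names NN, MM and
     * sqmat_one are assumptions, not taken from [9]). */
    #include <stddef.h>

    #define NN 4   /* working-set size N: matrices are NN x NN             */
    #define MM 1   /* computational intensity M: squarings per load/store  */

    /* Square one NN x NN matrix MM times.  'idx' is NULL for direct
     * (unit-stride) access, or a 64-bit index array for indirect
     * (scatter/gather) access; the irregularity parameter S governs how
     * that index array is filled in. */
    void sqmat_one(const double *mem, double *out_mem,
                   const long *idx, size_t base)
    {
        double in[NN * NN], out[NN * NN];

        /* Load: 8 bytes per value, plus 8 bytes per 64-bit index when idx
         * is used, giving the 8- versus 16-byte load traffic quoted above. */
        for (int i = 0; i < NN * NN; i++)
            in[i] = idx ? mem[idx[base + i]] : mem[base + i];

        /* Squaring cannot be done in place, hence the two register-resident
         * matrices (2*N*N registers per matrix) in the unrolled version. */
        for (int m = 0; m < MM; m++) {
            for (int i = 0; i < NN; i++)
                for (int j = 0; j < NN; j++) {
                    double s = 0.0;
                    for (int k = 0; k < NN; k++)
                        s += in[i * NN + k] * in[k * NN + j];
                    out[i * NN + j] = s;
                }
            for (int i = 0; i < NN * NN; i++)  /* result feeds next squaring */
                in[i] = out[i];
        }

        /* Store: 8 bytes per value in both access modes. */
        for (int i = 0; i < NN * NN; i++)
            out_mem[idx ? idx[base + i] : base + i] = out[i];
    }

Counting operations in this sketch reproduces the Sqmat side of the CI column in Table 1: one squaring performs N³ multiplies and N²(N−1) adds, i.e. N²(2N−1) flops, against 2N² words of load/store traffic, so the computational intensity is M(2N−1)/2 flops per word: 0.5 for N=1, M=1 (the DAXPY row), 3.5 for N=4, M=1 (the DGEMM and SPMV rows), and 7.0 for N=4, M=2 (the MADCAP row). The multiply/add mix also illustrates why algorithmic peak can fall below machine peak on FMA-based hardware: only N²(N−1) of the multiplies can be fused with an add, and the remaining N² multiplies leave the add side of the FMA pipeline idle, which is exactly the kind of mismatch the AP metric is meant to expose.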
Recommended publications
  • Power4 Focuses on Memory Bandwidth: IBM Confronts IA-64, Says ISA Not Important
    MICROPROCESSOR REPORT: THE INSIDERS' GUIDE TO MICROPROCESSOR HARDWARE, Volume 13, Number 13, October 6, 1999.
    Power4 Focuses on Memory Bandwidth: IBM Confronts IA-64, Says ISA Not Important. By Keith Diefendorff.
    Not content to wrap sheet metal around Intel microprocessors for its future server business, IBM is developing a processor it hopes will fend off the IA-64 juggernaut. Speaking at this week's Microprocessor Forum, chief architect Jim Kahle described IBM's monster 170-million-transistor Power4 chip, which boasts two 64-bit 1-GHz five-issue superscalar cores, a triple-level cache hierarchy, a 10-GByte/s main-memory interface, and a 45-GByte/s multiprocessor interface, as Figure 1 shows. Kahle said that IBM will see first silicon on Power4 in 1Q00, and systems will begin shipping in 2H01.
    The company has decided to make a last-gasp effort to retain control of its high-end server silicon by throwing its considerable financial and technical weight behind Power4. After investing this much effort in Power4, if IBM fails to deliver a server processor with compelling advantages over the best IA-64 processors, it will be left with little alternative but to capitulate. If Power4 fails, it will also be a clear indication to Sun, Compaq, and others that are bucking IA-64 that the days of proprietary CPUs are numbered. But IBM intends to resist mightily, and, based on what the company has disclosed about Power4 so far, it may just succeed.
    No Holds Barred
    With Power4, IBM is targeting the high-reliability servers that will power future e-businesses.
    Looking for Parallelism in All the Right Places
  • Power Architecture® ISA 2.06 Stride N Prefetch Engines to Boost Application's Performance
    Power Architecture® ISA 2.06 Stride N Prefetch Engines to Boost Application's Performance. History of the IBM POWER architecture: POWER stands for Performance Optimization with Enhanced RISC, and the Power architecture is synonymous with performance. Introduced by IBM in 1990, POWER1 was a superscalar design that implemented register renaming and out-of-order execution. In POWER2, an additional FP unit and larger caches were added to boost performance. In 1996 IBM released the successor of POWER2, called P2SC (POWER2 Super Chip), a single-chip implementation of POWER2; P2SC powered the 30-node IBM Deep Blue supercomputer that beat world chess champion Garry Kasparov in 1997. POWER3, the first 64-bit SMP, featured a data prefetch engine, a non-blocking interleaved data cache, dual floating-point execution units, and many other goodies. POWER3 also unified the PowerPC and POWER instruction sets and was used in IBM's RS/6000 servers. The POWER3-II reimplemented POWER3 using copper interconnects, delivering double the performance at about the same price. POWER4, launched in 2001, was the first gigahertz dual-core processor and was awarded the Microprocessor Technology Award in recognition of its innovations and technology exploitation. POWER5 added simultaneous multithreading (SMT) to further increase application performance. In 2004, IBM and 15 other companies founded Power.org, which released Power ISA v2.03 in September 2006, Power ISA v2.04 in June 2007, and Power ISA v2.05 with many advanced features such as VMX, virtualization, variable-length encoding, hypervisor functionality, logical partitioning, virtual page handling, and decimal floating point, further strengthening the architecture's leadership in the marketplace; POWER5+, Cell, POWER6, PA6T, and Titan are various compliant cores.
  • POWER8: The First OpenPOWER Processor
    POWER8: The first OpenPOWER processor. Dr. Michael Gschwind, Senior Technical Staff Member & Senior Manager, IBM Power Systems. #OpenPOWERSummit
    OpenPOWER is about choice in large-scale data centers: the choice to differentiate (build workload-optimized solutions; use best-of-breed components from an open ecosystem), the choice to innovate (collaborative innovation in an open ecosystem, with open interfaces), and the choice to grow (delivered system performance; new capabilities instead of technology scaling).
    Why Power and why now? Power is optimized for server workloads; POWER8 was optimized to simplify application porting; and POWER8 includes CAPI, the Coherent Accelerator Processor Interface, building on a long history of IBM workload acceleration.
    POWER8 processor: 12 cores (SMT8), 96 threads per chip; 2X internal data flows/queues; 64K data cache and 32K instruction cache per core. Caches: 512 KB SRAM L2 per core; 96 MB eDRAM shared L3; up to 128 MB eDRAM L4 (off-chip). Accelerators: crypto and memory expansion; transactional memory; VMM assist; data move / VM mobility; Coherent Accelerator Processor Interface (CAPI).
    POWER8 core: up to eight hardware threads per core (SMT8); 8 dispatch, 10 issue, 16 execution pipes (2 FXU, 2 LSU, 2 LU, 4 FPU, 2 VMX, 1 Crypto, 1 DFU, 1 CR, 1 BR); larger issue queues (4 x 16-entry); larger global completion and load/store reorder queues; improved branch prediction; improved unaligned storage access; improved data prefetch.
    POWER8 architecture: high-performance little-endian (LE) support as the foundation for a new ecosystem; organic application growth; instruction fusion.
  • Computer Architectures: An Overview
    Computer Architectures: An Overview. Contents: Microarchitecture; x86; PowerPC; IBM POWER; MIPS architecture; SPARC; ARM architecture; DEC Alpha; AlphaStation; AlphaServer; Very long instruction word; Instruction-level parallelism; Explicitly parallel instruction computing.
    Microarchitecture: In computer engineering, microarchitecture (sometimes abbreviated to µarch or uarch), also called computer organization, is the way a given instruction set architecture (ISA) is implemented on a processor. A given ISA may be implemented with different microarchitectures.[1] Implementations might vary due to different goals of a given design or due to shifts in technology.[2] Computer architecture is the combination of microarchitecture and instruction set design. Relation to instruction set architecture: the ISA is roughly the same as the programming model of a processor as seen by an assembly language programmer or compiler writer, and includes the execution model, processor registers, and address and data formats, among other things. [Figure: the Intel Core microarchitecture.] The microarchitecture includes the constituent parts of the processor and how these interconnect and interoperate to implement the ISA. The microarchitecture of a machine is usually represented as (more or less detailed) diagrams that describe the interconnections of the various microarchitectural elements of the machine, which may be everything from single gates and registers to complete arithmetic logic units (ALUs) and even larger elements.
  • PowerPC 620 Case Study
    Case Studies: The PowerPC 620 and Intel P6 (Modern Processor Design: Fundamentals of Superscalar Processors). Case study: the PowerPC 620.
    IBM/Motorola/Apple Alliance: begun in 1991 with a joint design center (Somerset) in Austin; ambitious objective to unseat Intel on the desktop; delays, conflicts, politics… it hasn't happened, and the alliance is largely dissolved today. PowerPC 601: quick design based on the RSC, compatible with POWER and PowerPC. PowerPC 603: low-power implementation designed for small uniprocessor systems; 5 FUs (branch, integer, system, load/store, FP). PowerPC 604: 4-wide machine; 6 FUs, each with a 2-entry RS. PowerPC 620: first 64-bit machine, also 4-wide; same 6 FUs as the 604 (next slide, also chapter 5 in the textbook). PowerPC G3, G4: newer derivatives of the PowerPC 603 (3-issue, in-order); added AltiVec multimedia extensions.
    PowerPC 620 case study: a first-generation out-of-order processor developed as part of the Apple-IBM-Motorola alliance; aggressive goals and targets; interesting microarchitectural features; hopelessly delayed, but it led to future, successful designs. A joint IBM/Apple/Motorola design: aggressively out-of-order, weak memory ordering, 64 bits. PowerPC 620 pipeline: Fetch stage (4-wide, simple BTAC predictor); Instruction buffer (holds up to 8 instructions, decouples fetch from dispatch stalls); Dispatch stage; Reservation stations (6: BRU, LSU, XSU0, XSU1, MC-FXU, FPU); Execute stage(s); Completion buffer (16); Complete stage; Writeback stage.
  • A História da Família PowerPC (The History of the PowerPC Family)
    A História da família PowerPC (The History of the PowerPC Family). Flavio Augusto Wada de Oliveira Preto, Instituto de Computação, Unicamp. fl[email protected]
    ABSTRACT: This article offers a historical tour of the POWER architecture, from its origin to the present day. Through this tour we can analyze how the technologies that emerged over the architecture's four decades of existence were incorporated, and thus see how trends were followed and used up to the present day, as well as consider how future trends in computer architecture will unfold. The article also presents computer systems that commercially employ POWER processors, in particular video game consoles, given that the three best-selling consoles in the world currently use a POWER chip and, despite the common architecture, have great […]
    […] main goal was to reach the mark of one instruction per cycle and 300 calls per minute. The IBM 801 went against the market trend by drastically reducing the number of instructions in search of a small, simple set, called RISC (reduced instruction set computer). This instruction set eliminated redundant instructions that could be executed with a combination of other instructions. With this new, reduced set, the IBM 801 had half the circuitry of its contemporaries.
  • The POWER4 Processor Introduction and Tuning Guide
    Front cover: The POWER4 Processor Introduction and Tuning Guide. Comprehensive explanation of POWER4 performance; includes code examples and performance measurements; how to get the most from the compiler. Steve Behling, Ron Bell, Peter Farrell, Holger Holthoff, Frank O'Connell, Will Weir. ibm.com/redbooks. International Technical Support Organization, November 2001, SG24-7041-00.
    First Edition (November 2001). This edition applies to AIX 5L for POWER Version 5.1 (program number 5765-E61), XL Fortran Version 7.1.1 (5765-C10 and 5765-C11), and subsequent releases running on an IBM eServer pSeries POWER4-based server. Unless otherwise noted, all performance values mentioned in this document were measured on a 1.1 GHz machine, then normalized to 1.3 GHz. Note: this book is based on a pre-GA version of a product and may not apply when the product becomes generally available.
  • POWER4 System Microarchitecture
    POWER4 system microarchitecture, by J. M. Tendler, J. S. Dodson, J. S. Fields, Jr., H. Le, and B. Sinharoy. The IBM POWER4 is a new microprocessor organized in a system structure that includes new technology to form systems. The name POWER4 as used in this context refers not only to a chip, but also to the structure used to interconnect chips to form systems. In this paper we describe the processor microarchitecture as well as the interconnection architecture employed to form systems up to a 32-way symmetric multiprocessor.
    Introduction: IBM announced the RISC System/6000* (RS/6000*, the predecessor of today's IBM eServer pSeries*) family of processors in 1990. The initial models ranged in frequency from 20 MHz to 30 MHz [1] and achieved performance levels exceeding those of many of their contemporaries […] commonplace. The RS64 and its follow-on microprocessors, the RS64-II [6], RS64-III [7], and RS64-IV [8], were optimized for commercial applications. The RS64 initially appeared in systems operating at 125 MHz. Most recently, the RS64-IV has been shipping in systems operating at up to 750 MHz. The POWER3 and its follow-on, the POWER3-II [9], were optimized for technical applications. Initially introduced at 200 MHz, most recent systems using the POWER3-II have been operating at 450 MHz. The RS64 microprocessor and its follow-ons were also used in AS/400* systems (the predecessor to today's eServer iSeries*). POWER4 was designed to address both commercial and technical requirements. It implements and extends in a compatible manner the 64-bit PowerPC Architecture [10]. First used in pSeries systems, it will be staged into the iSeries at a later date.
  • Low-Level Optimizations in the PowerPC/Linux Kernels
    Low-Level Optimizations in the PowerPC/Linux Kernels. Dr. Paul Mackerras, Senior Technical Staff Member, IBM Linux Technology Center, OzLabs, Canberra, Australia. [email protected] [email protected]
    Outline: introduction (PowerPC® architecture and implementations; optimization techniques: profiling and benchmarking), instruction cache coherence, memory copying, PTE management, conclusions.
    PowerPC architecture: PowerPC = "POWER Performance Computing", where POWER = "Performance Optimization with Enhanced RISC". PowerPC is a specification for an Instruction Set Architecture: it specifies registers, instructions, encodings, etc. It is a RISC load/store architecture with 32 general-purpose registers, 2 data addressing modes, fixed-length 32-bit instructions, and branches without delay slots, and it is designed for efficient superscalar implementation. It is a 64-bit architecture with a 32-bit subset (32-bit mode for 32-bit processes on 64-bit implementations).
    Caches: instruction and data caches may be separate, and instructions are provided for cache management (dcbst: data cache block store; dcbf/dcbi: data cache block flush/invalidate; icbi: instruction cache block invalidate). The instruction cache is not required to snoop, so hardware does not maintain coherence with memory or the D-cache. Data cache coherence with memory (DMA/other CPUs) is maintained by hardware on desktop/server systems and managed by software on embedded systems. (A typical use of these cache-management instructions is sketched in the code example at the end of this list.)
    Memory management: the architecture specifies a hashed page table structure, implemented in desktop/server CPUs; 4 kB pages; POWER4 also has 16 MB
  • iLORE: Discovering a Lineage of Microprocessors
    iLORE: Discovering a Lineage of Microprocessors. Samuel Lewis Furman. Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements for the degree of Master of Science in Computer Science & Applications. Kirk Cameron (Chair), Godmar Back, Margaret Ellis. May 24, 2021, Blacksburg, Virginia. Keywords: computer history, systems, computer architecture, microprocessors. Copyright 2021, Samuel Lewis Furman.
    (ABSTRACT) Researchers, benchmarking organizations, and hardware manufacturers maintain repositories of computer component and performance information. However, this data is split across many isolated sources and is stored in a form that is not conducive to analysis. A centralized repository of said data would arm stakeholders across industry and academia with a tool to more quantitatively understand the history of computing. We propose iLORE, a data model designed to represent intricate relationships between computer system benchmarks and computer components. We detail the methods we used to implement and populate the iLORE data model using data harvested from publicly available sources. Finally, we demonstrate the validity and utility of our iLORE implementation through an analysis of the characteristics and lineage of commercial microprocessors. We encourage the research community to interact with our data and visualizations at csgenome.org.
    (GENERAL AUDIENCE ABSTRACT) Researchers, benchmarking organizations, and hardware manufacturers maintain repositories of computer component and performance information. However, this data is split across many isolated sources and is stored in a form that is not conducive to analysis. A centralized repository of said data would arm stakeholders across industry and academia with a tool to more quantitatively understand the history of computing.
  • IBM POWER8 CPU Architecture
    POWER8. Jeff Stuecheli, IBM Power Systems, IBM Systems & Technology Group Development, 2013.
    POWER processor generations:
                               POWER5      POWER6      POWER7            POWER7+
                               2004        2007        2010              2012
        Technology             130nm SOI   65nm SOI    45nm SOI, eDRAM   32nm SOI, eDRAM
        Compute: cores         2           2           8                 8
                 threads       SMT2        SMT2        SMT4              SMT4
        Caching: on-chip       1.9MB       8MB         2 + 32MB          2 + 80MB
                 off-chip      36MB        32MB        None              None
        Bandwidth: sust. mem.  15GB/s      30GB/s      100GB/s           100GB/s
                   peak I/O    3GB/s       10GB/s      20GB/s            20GB/s
    POWER8 (today's topic): leadership performance (increase core throughput at single-thread, SMT2, SMT4, and SMT8 levels; a large step in per-socket performance; dynamic code optimization; hardware-accelerated virtual memory management), system innovation (higher-capacity cache hierarchy and highly threaded processor; enhanced memory bandwidth, capacity, and expansion; flexible SMT; enabling more robust multi-socket scaling), and open system innovation (CAPI; memory interface; open system software).
    POWER8 chip: Technology: 22nm SOI, eDRAM, 15 metal layers, 650mm². Cores: 12 cores (SMT8); 8 dispatch, 10 issue, 16 execution pipes; 2X internal data flows/queues; enhanced prefetching; 64K data cache. Caches: 512 KB SRAM L2 per core; 96 MB eDRAM shared L3; up to 128 MB eDRAM L4 (off-chip). Memory: up to 230 GB/s
  • Understanding IBM pSeries Performance and Sizing
    Understanding IBM eServer pSeries Performance and Sizing. Comprehend IBM RS/6000 and IBM eServer pSeries hardware architectures; get an overview of current industry benchmarks; understand how to size your system. Nigel Trickett, Tatsuhiko Nakagawa, Ravi Mani, Diana Gfroerer. ibm.com/redbooks. SG24-4810-01, International Technical Support Organization, February 2001.
    Second Edition (February 2001). This edition applies to IBM RS/6000 and IBM eServer pSeries as of December 2000, and Version 4.3.3 of the AIX operating system. This document was updated on January 24, 2003.
    Contents: Preface (The team that wrote this redbook; Comments welcome). Chapter 1, Introduction. Chapter 2, Background: 2.1 Performance of processors; 2.2 Hardware architectures (2.2.1 RISC/CISC concepts; 2.2.2 Superscalar architecture: pipeline and parallelism; 2.2.3 Memory management
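As a companion to the "Low-Level Optimizations in the PowerPC/Linux Kernels" entry above, here is a minimal, hypothetical C sketch of the standard dcbst/sync/icbi/isync sequence used to make newly written instructions visible on PowerPC parts whose instruction cache does not snoop. It is not code from that presentation; the 128-byte cache-block size is an assumption that varies between implementations, and real kernels query the block size at run time.

    /* Hypothetical sketch (PowerPC only): make code written to
     * [start, start + len) visible to the instruction cache, using the
     * cache-management instructions listed in the slides. */
    #include <stddef.h>
    #include <stdint.h>

    #define CACHE_BLOCK 128u   /* assumption: block size differs per CPU */

    void flush_icache_range(void *start, size_t len)
    {
        uintptr_t first = (uintptr_t)start & ~(uintptr_t)(CACHE_BLOCK - 1);
        uintptr_t end   = (uintptr_t)start + len;

        /* Push dirty data-cache blocks holding the new code out to memory. */
        for (uintptr_t a = first; a < end; a += CACHE_BLOCK)
            __asm__ volatile("dcbst 0,%0" : : "r"(a) : "memory");
        __asm__ volatile("sync");                /* order the block stores  */

        /* Invalidate the corresponding (non-snooping) I-cache blocks. */
        for (uintptr_t a = first; a < end; a += CACHE_BLOCK)
            __asm__ volatile("icbi 0,%0" : : "r"(a) : "memory");
        __asm__ volatile("sync; isync");         /* drop prefetched instrs  */
    }

A routine of this shape is what a kernel or JIT has to run after writing instructions to memory; on desktop and server PowerPC systems only the instruction-cache side needs this treatment, since, as that excerpt notes, data-cache coherence with memory is maintained by hardware there.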