Vector IRAM: a Microprocessor Architecture for Media Processing


Vector IRAM: A Microprocessor Architecture for Media Processing
Christoforos E. Kozyrakis, [email protected]
CS252 Graduate Computer Architecture
February 10, 2000

Outline
• Motivation for IRAM
  – technology trends
  – design trends
  – application trends
• Vector IRAM
  – instruction set
  – prototype architecture
  – performance

Processor-DRAM Gap (latency)
[Figure: processor vs. DRAM performance, 1980-2000, log scale. CPU performance ("Moore's Law") improves ~60% per year while DRAM performance improves ~7% per year, so the processor-memory performance gap grows ~50% per year (1.60/1.07 ≈ 1.5).]

Processor-DRAM Tax
[Chart: logic vs. on-chip memory transistors, in millions]
                           Logic    Memory
  Intel PIII Xeon            8       15
  MIPS R12000                3        4.2
  HP PA-8500                 4      126
  Sun Ultra-2                2        1.8
  PowerPC G4                 4.5      6
  IBM Power3                 7        8
  AMD Athlon                11       11
  Alpha 21264                6        9.2

Power Consumption
[Figure: scatter plot of performance (SPECfp95, 0-60) vs. power (0-80 W) for the Alpha 21264, AMD Athlon, IBM Power3, PowerPC G4, Sun Ultra-2, HP PA-8500, MIPS R12000, and Intel PIII Xeon.]

Other Design Challenges
• Interconnect scaling problems
  – multiple cycles to go across the chip
  – difficult to achieve single-cycle result forwarding
  – need to add extra pipeline stages at the cost of power, complexity, branch and load-use latency
• Design complexity of high-end CPUs
  – 4 to 5 years from scratch to chips for new superscalar architectures
  – >100 engineers
  – >50% of resources to design verification

Complexity vs. Performance Gains
                            R5000      R10000         R10K/R5K
  Clock Rate                200 MHz    195 MHz        1.0x
  On-Chip Caches            32K/32K    32K/32K        1.0x
  Instructions/Cycle        1 (+ FP)   4              4.0x
  Pipe Stages               5          5-7            1.2x
  Model                     In-order   Out-of-order   ---
  Die Size (mm2)            84         298            3.5x
    w/o cache, TLB          32         205            6.3x
  Development (man-years)   60         300            5.0x
  SPECint_base95            5.7        8.8            1.6x

Future microprocessor applications
• Multimedia applications
  – image/video processing, voice/pattern recognition, 3D graphics, animation, digital music, encryption, etc.
  – narrow data types, streaming data, real-time requirements
• Mobile and embedded environments
  – notebooks, PDAs, digital cameras, cellular phones, pagers, game consoles, cars, etc.
  – small devices, limited chip count, limited power/energy budget
• Significantly different environment from the desktop/workstation model

Requirements on microprocessors (1)
• High performance for multimedia:
  – real-time performance guarantees
  – support for continuous media data types
  – exploit fine-grain parallelism
  – exploit coarse-grain parallelism
  – exploit high instruction reference locality
  – code density
  – high memory bandwidth

Average vs. real-time performance
[Figure: histogram of the fraction of inputs achieving each performance level, from worst case to best case, for three designs A, B, and C.]
Which one is the best?
  Statistical ⇒ average performance ⇒ C
  Real time ⇒ worst-case performance ⇒ A

Requirements on microprocessors (2)
• Low power and energy consumption
  – energy efficiency for long battery life
  – power efficiency for system cost reduction (cooling system, packaging, etc.)
• Design scalability
  – performance scalability
  – physical design scalability
    • design complexity, verification complexity
  – immunity to interconnect scaling problems
    • locality of interconnect, tolerance to latency
• System-on-a-chip (SoC)
  – highly integrated system
  – low system chip count
The IRAM vision statement
[Diagram: a conventional system (processor with L1/L2 caches and separate DRAM chips reached over memory and I/O buses) compared with an IRAM system (processor, DRAM, and I/O integrated on one die).]
Microprocessor & DRAM on a single chip:
  – on-chip memory latency 5-10X, bandwidth 50-100X
  – improve energy efficiency 2X-4X (no off-chip bus)
  – serial I/O 5-10X vs. buses
  – smaller board area/volume
  – adjustable memory size/width

Vector IRAM
• Vector processing
  – high performance for media processing
  – low power/energy for processor control
  – modularity, low complexity
  – scalability
  – well-understood software development
• Embedded DRAM
  – high bandwidth for vector processing
  – low power/energy for memory accesses
  – modularity, scalability
  – small system size

IRAM ISA summary
• Full vector instruction set with
  – 32 vector registers, 32 vector flag registers
  – support for multiple data types (64b, 32b, 16b, 8b)
  – support for strided and indexed memory accesses
  – support for auto-increment addressing
  – support for DSP operations (multiply-add, saturation, etc.)
  – support for conditional execution
  – support for software speculation
  – support for fast reductions and butterfly permutations
  – support for virtual memory
  – restartable arithmetic (FP & integer) exceptions
• Implemented as a coprocessor extension to the MIPS64 ISA (coprocessor 2)

Vector architectural state
[Diagram: the vector state viewed as $vlr virtual processors VP0 ... VP($vlr-1). Each virtual processor holds one element of the 32 general-purpose vector registers vr0-vr31 (64b elements, element width controlled by $vpw) and of the 32 flag registers vf0-vf31 (1b elements); there are also 32 64b scalar registers vs0-vs31 and control registers vcr0, vcr1, ..., vcr31.]

Fixed-point Multiply-add
[Diagram: the half-width (n/2-bit) multiplier inputs taken from x and y are multiplied; the n-bit product is shifted right by a and rounded; the result is added to z with saturation to produce the n-bit output w.]
• Multiply-halves & shift instruction provides support for any fixed-point format
• Precision is equal to the datatype width; the multiplier's inputs have half the width
• Uniform, simple support for all datatypes
(A C sketch of this operation follows the design overview slide below.)

VIRAM-1 prototype
[Figure: the VIRAM-1 prototype chip.]

Design Overview
• 64b MIPS scalar core
  – coprocessor interface
  – 16KB I/D caches
• Vector unit
  – 8KByte vector register file
  – support for 64b, 32b, and 16b data types
  – 2 arithmetic (1 FP), 2 flag processing, 1 load-store units
  – 4 64-bit datapaths per unit
  – DRAM latency included in vector pipeline
  – 4 addresses/cycle for strided/indexed accesses
  – 2-level TLB
• Memory system
  – 8 2MByte eDRAM banks
  – single sub-bank per bank
  – 256-bit synchronous interface, separate I/O signals
  – 20ns cycle time, 6.6ns column access
  – crossbar interconnect for 12.8 GB/sec per direction
  – no caches
• Network interface
  – user-level message passing
  – dedicated DMA engines
  – 4 100MByte/s links
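Referring back to the Fixed-point Multiply-add slide: the sketch below is a minimal C model of the operation for one 32b element. The half-width multiplier inputs, the right shift by a with rounding, and the saturating add follow the block diagram, but the choice of operand halves, the rounding rule, and the saturation bounds are assumptions for illustration, not the VIRAM-1 definition.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative model of the fixed-point multiply-add for one 32b
     * element: multiply half-width (16b) operands, shift the product
     * right by 'a' with round-to-nearest, then add the accumulator z
     * with saturation to the 32b range.  Which halves are multiplied
     * and the exact rounding/saturation rules are assumptions. */
    int32_t fx_madd32(int32_t x, int32_t y, int32_t z, unsigned a)
    {
        int32_t xh = (int16_t)x;                 /* assumed: low 16b halves  */
        int32_t yh = (int16_t)y;
        int64_t prod = (int64_t)xh * yh;         /* fits in 32b; kept wide so
                                                    rounding cannot overflow */
        int64_t bias = a ? ((int64_t)1 << (a - 1)) : 0;
        int64_t sum  = ((prod + bias) >> a) + z; /* shift, round, accumulate */
        if (sum > INT32_MAX) sum = INT32_MAX;    /* saturate to 32b range    */
        if (sum < INT32_MIN) sum = INT32_MIN;
        return (int32_t)sum;
    }

    int main(void)
    {
        /* Q15 halves with a = 15: 0.5 * 0.5 + 0.25 = 0.5, i.e. 16384 */
        printf("%d\n", fx_madd32(0x4000, 0x4000, 0x2000, 15));
        return 0;
    }

Keeping the multiplier inputs at half the element width keeps the product inside the element width, which is what allows one datapath design to serve every supported datatype uniformly, as the slide's last bullet notes.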
Vector Unit Pipeline Structure
• Single-issue, in-order pipeline
  – each instruction can specify up to 128 operations and occupy a functional unit for 8 cycles
• DRAM latency is included in the execution pipeline (delayed pipeline)
  – deep pipeline design, but no caches are needed to avoid stalls
  – worst-case DRAM latency does not cause pipeline stalls
• Address decoupling buffer
  – buffers memory addresses in the presence of conflicts (indexed/strided accesses)
  – memory conflicts do not stall the pipeline

Non-Delayed Pipeline
[Pipeline diagram: scalar stages F D X M W; the vector load pipeline (VLOAD) is A, T, the DRAM access (latency >= 20ns), then VW; the vector arithmetic pipeline (VALU) is VR X1 X2 ... XN VW; the vector store pipeline (VSTORE) is A, T, VR, then the DRAM access. In the vld / vadd / vst sequence, the vadd stalls on a long load→ALU RAW hazard.]
Load→ALU exposes the full (long) DRAM latency.

Tolerating Memory Latency — Delayed Pipeline
[Pipeline diagram: same stages, but the arithmetic pipeline is padded with delay stages (DELAY VR X1 ... XN VW) so that it lines up with the end of the load pipeline (A, T, DRAM access with latency > 20ns, VW); stores remain A, T, VR. In the vld / vadd / vst sequence, the load→ALU RAW hazard now spans only a few stages.]
Load→ALU sees only the (short) functional-unit latency.

Clustered VLSI Design
[Floorplan diagram: the vector unit is built from four identical 64b lanes plus a shared control block. Each lane contains a crossbar interface at its top and bottom edges, integer datapath 0, its slice of the vector registers, the flag registers and flag datapath, the FP datapath, and integer datapath 1; the four 64b lane interfaces together form the 256b memory port.]

VIRAM-1 Floorplan
[Floorplan: DRAM banks 0, 2, 4, and 6 across the top and banks 1, 3, 5, and 7 across the bottom, with the four vector lanes in the middle, flanked by the MIPS scalar core, network interface, control, and I/O blocks.]

Prototype Summary
• Technology:
  – 0.18um eDRAM CMOS process (IBM)
  – 6 layers of copper interconnect
  – 1.2V and 1.8V power supplies
• Memory: 16 MBytes
• Clock frequency: 200MHz
• Power: 2 W for vector unit and memory
• Transistor count: ~140 million
• Peak performance:
  – GOPS w/ multiply-add: 3.2 (64b), 6.4 (32b), 12.8 (16b)
  – GOPS w/o multiply-add: 1.6 (64b), 3.2 (32b), 6.4 (16b)
  – GFLOPS: 1.6 (32b)

Kernels Performance
  Kernel                 Peak Perf.    Sustained Perf.   % of Peak
  Image Composition      6.4 GOPS      6.40 GOPS         100.0%
  iDCT                   6.4 GOPS      1.97 GOPS          30.7%
  Color Conversion       3.2 GOPS      3.07 GOPS          96.0%
  Image Convolution      3.2 GOPS      3.16 GOPS          98.7%
  Integer MV Multiply    3.2 GOPS      2.77 GOPS          86.5%
  Integer VM Multiply    3.2 GOPS      3.00 GOPS          93.7%
  FP MV Multiply         1.6 GFLOPS    1.40 GFLOPS        87.5%
  FP VM Multiply         1.6 GFLOPS    1.59 GFLOPS        99.6%
  AVERAGE                                                 86.6%
• Note: simulations did not include the memory optimizations (address decoupling, small-stride optimizations, address hashing) or the fixed-point multiply-add integer datapaths.
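A back-of-the-envelope check of the peak figures quoted above, assuming the design-overview parameters (2 arithmetic units with 4 64-bit datapaths each, 1 of them FP-capable, a 200 MHz clock, and 8 DRAM banks each delivering 256 bits per 20 ns cycle); this is an interpretation of how the numbers compose, not taken from the slides themselves:

  64b integer: 2 units x 4 datapaths x 200 MHz = 1.6 GOPS, or 3.2 GOPS if a multiply-add counts as 2 operations
  32b integer: 2 elements per 64b datapath -> 3.2 / 6.4 GOPS
  16b integer: 4 elements per 64b datapath -> 6.4 / 12.8 GOPS
  32b FP:      1 FP unit x 4 datapaths x 2 elements x 200 MHz = 1.6 GFLOPS
  Memory:      8 banks x 32 bytes per 20 ns bank cycle = 12.8 GB/s, the crossbar's quoted per-direction bandwidth

These reproduce the Prototype Summary and Design Overview numbers above.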