Organization of the Motorola 88110 Superscalar RISC Microprocessor


Motorola's second-generation RISC microprocessor employs advanced techniques for exploiting instruction-level parallelism, including superscalar instruction issue, out-of-order instruction completion, speculative execution, dynamic instruction rescheduling, and two parallel, high-bandwidth, on-chip caches. Designed to serve as the central processor in low-cost personal computers and workstations, the 88110 supports demanding graphics and digital signal processing applications.

Keith Diefendorff and Michael Allen, Motorola

Motorola designers conceived of the 88000 RISC (reduced instruction-set computer) architecture to simplify construction of microprocessors capable of exploiting high degrees of instruction-level parallelism without sacrificing clock speed. The designers held architectural complexity to a minimum to eliminate pipeline bottlenecks and remove limitations on concurrent instruction execution.

The 88100/200 is the first implementation of the 88000 architecture. It is a three-chip set, requiring one CPU (88100) chip and two (or more) cache (88200) chips. The CPU's simple scalar design uses multiple concurrent execution units with out-of-order instruction completion to approach a throughput of one instruction per clock cycle.

The second-generation, single-chip 88110 RISC microprocessor employs superscalar instruction issue and out-of-order instruction execution techniques to achieve a throughput greater than one instruction per clock cycle.

Overview

In designing the 88110, we aimed at a general-purpose microprocessor, suitable primarily for use as the central processor in low-cost personal computers and workstation systems. Thus, our design objective was good performance at a given cost, rather than ultimate performance at any cost. We recognized that the personal computer environment is moving toward highly interactive software, user-oriented interfaces, voice and image processing, and advanced graphics and video, all of which would place extremely high demands on integer, floating-point, and graphics processing capabilities. At the same time, we realized the 88110 would have to meet these performance demands while operating with the inexpensive DRAM systems typically found in low-cost personal computers.

To achieve the performance goals set for the 88110, we needed to obtain more parallelism than was achieved in earlier microprocessors. To this end, we decided to use a superscalar microarchitecture to exploit additional instruction-level parallelism. Superscalar machines are distinguished by their ability to dispatch multiple instructions each clock cycle from a conventional linear instruction stream. This approach has shown good speedup on general-purpose applications and was a good match to available CMOS technology. (We believe Agerwala and Cocke coined the superscalar term.¹)
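Dispatching several instructions per cycle from a linear stream requires the hardware to check, at issue time, that the candidate instructions are independent. The C sketch below is a minimal illustration of that pairing decision, assuming a simple two-source, one-destination instruction format; the structure fields, function name, and pairing rules are invented for the example and do not describe the 88110's actual dispatch logic (which, for instance, duplicates some execution units).

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified instruction record; the fields are assumptions for this sketch. */
    typedef struct {
        uint8_t dest;   /* destination register                 */
        uint8_t src1;   /* first source register                */
        uint8_t src2;   /* second source register               */
        uint8_t unit;   /* execution unit this instruction uses */
    } Instr;

    /*
     * Can the second instruction issue in the same clock cycle as the first?
     * Two things block pairing in this toy model: a read-after-write dependence
     * (the second instruction needs the first one's result) and a structural
     * conflict (both instructions want the same, single-copy execution unit).
     */
    bool can_dual_issue(const Instr *first, const Instr *second)
    {
        bool raw_hazard    = (second->src1 == first->dest) ||
                             (second->src2 == first->dest);
        bool unit_conflict = (second->unit == first->unit);
        return !raw_hazard && !unit_conflict;
    }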
We selected the superscalar approach over other fine-grain parallelism approaches, such as the vector machine and the VLIW (very long instruction word) approach, because it appeared to be more effective for our intended application. With a limited transistor budget, spending transistors on vector hardware would have meant sacrificing scalar performance and would have yielded a machine that suffered from the Amdahl's Law phenomenon on general-purpose applications.

The VLIW approach³ would have introduced severe software compatibility restrictions by exposing hardware parallelism to the object code program and thereby limiting future implementation flexibility. Also, the VLIW speedup, while substantial on code with abundant parallelism (such as scientific applications), is less significant on general-purpose applications. This limited speedup is due in part to the code expansion: the inefficient use of opcode bits to control unused execution units and the aggressive loop unrolling required to schedule the available execution unit parallelism effectively.

The superscalar approach appeared to be a better match to CMOS technology than a superpipelined approach. Superscalar designs rely primarily on spatial parallelism (multiple operations running concurrently on separate hardware), achieved by duplicating hardware resources such as execution units and register file ports. Superpipelined designs, on the other hand, emphasize temporal parallelism (overlapping multiple operations on a common piece of hardware), achieved through more deeply pipelined execution units with faster clock cycles. As a result, superscalar machines generally require more transistors, whereas superpipelined designs require faster transistors and more careful circuit design to minimize the effects of clock skew. Some literature indicates that superscalar and superpipelined machines of the same degree would perform roughly the same. We felt that CMOS technology generally favors replicating circuitry over increasing clock cycle rates, since CMOS circuit density historically has increased at a much faster rate than circuit speed.

The 88110 microarchitecture, illustrated in Figure 1, employs a symmetrical superscalar instruction dispatch unit, which dispatches two instructions each clock cycle into an array of 10 concurrent execution units. The design implements fully interlocked pipelines and a precise exception model, but it allows out-of-order instruction completion, some out-of-order instruction issue, and branch prediction with speculative execution past branches.

[Figure 1. Block diagram of the 88110.]

We optimized each execution unit for low latency: The branch unit uses a branch target instruction cache to reduce branch latency. The integer and graphics units are one-cycle units; the floating-point adder and multiplier are three-cycle, fully pipelined, IEEE extended-precision units. The load/store unit provides fast access to the cache on a hit, but it is also highly buffered to increase tolerance to long memory latency on a miss and to allow dynamic reordering of loads and stores for runtime overlapping of tight loops.

The on-chip caches are organized in a Harvard arrangement, giving the processor simultaneous access to instructions and data. Each 8-Kbyte, two-way, set-associative cache provides 64 bits each clock cycle to its respective unit. The write-back data cache is nonblocking for some types of accesses, and it follows a four-state MESI (modified, exclusive, shared, invalid) protocol for coherence with other caches in multiprocessor systems. It also supports selective cache bypass and software prefetching from user mode. Two independent, 40-entry, fully associative address translation look-aside buffers (TLBs) support a demand-paged virtual memory environment. A common, 64-bit, external bus services cache misses. The demultiplexed, pipelined bus supports burst mode, split transactions, and bus snooping.
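The data cache's four-state MESI protocol keeps each line's state consistent with copies held by other caches. The following C sketch is a generic, textbook rendering of the state transitions driven by local accesses and by snooped bus traffic; it is offered only to unpack the acronym and does not describe the 88110's actual bus or coherence signaling.

    #include <stdbool.h>

    /* Textbook MESI cache-line states; a generic illustration, not the
     * 88110's actual coherence logic. */
    typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } MesiState;

    typedef enum {
        LOCAL_READ,   /* this processor loads from the line               */
        LOCAL_WRITE,  /* this processor stores to the line                */
        SNOOP_READ,   /* another processor's read is observed on the bus  */
        SNOOP_WRITE   /* another processor's write/invalidate is observed */
    } CacheEvent;

    MesiState mesi_next(MesiState state, CacheEvent event, bool others_share_line)
    {
        switch (event) {
        case LOCAL_READ:
            if (state == INVALID)              /* read miss: fill the line     */
                return others_share_line ? SHARED : EXCLUSIVE;
            return state;                      /* read hit: state unchanged    */
        case LOCAL_WRITE:
            return MODIFIED;                   /* the writer ends up with the
                                                  only valid, dirty copy       */
        case SNOOP_READ:
            if (state == MODIFIED || state == EXCLUSIVE)
                return SHARED;                 /* MODIFIED supplies or writes
                                                  back its data before sharing */
            return state;
        case SNOOP_WRITE:
            return INVALID;                    /* another writer takes ownership */
        }
        return state;
    }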
We designed the 88110 especially for the three basic system configurations shown in Figure 2:

• single processors tightly coupled to low-cost DRAMs;
• dual-processor systems, also coupled to inexpensive DRAMs, either in a symmetrical multiprocessing arrangement or with one of the 88110s dedicated to a particular function such as graphics or digital signal processing (DSP); and
• medium-scale shared-memory multiprocessor systems, with each processor using a local secondary static RAM (SRAM) cache, which we call L2.

[Figure 2. Target system configurations: single (a), dual (b), and multiprocessors (c).]

• [...] returning a 64-bit quotient.
• Additional information returned from the integer compare instruction to improve string-handling capability.
• Addition of static branch prediction to the branch opcodes, providing a mechanism by which the compiler gives the hardware a hint as to the direction a given conditional branch is likely to go. We estimate that the compiler potentially can statically predict more than 85 percent of the dynamically executed conditional branches correctly. We also believe that runtime branch profiling can further improve this rate in specific cases. (A simple static heuristic of this kind is sketched below.)
• An option that allows 16-bit immediate address offsets and literal constants to be treated as signed numbers (the 88100 treats them only as unsigned). The sign-extension sketch below illustrates the difference.

Floating-point architecture extensions. Our enhancements of the floating-point architecture were more significant. Anticipating heavy use of the processor for graphics and DSP and greater use of floating-point data in many general-purpose PC applications, we added the following:

• An extended floating-point register file to provide register name space for floating-point variables and frequently accessed constants beyond that provided in the 88100 architecture. The extended register file contains thirty-two 80-bit registers. Each register can hold one floating-point number of any precision: single, double, or double-extended. For compatibility with existing code, single- and double-precision floating-point numbers continue to be supported in the general register file as well. The compiler can use the additional register name space to improve code schedules and reduce memory references. This feature alone results in a speedup of more than 15 percent on the SPEC (Systems Performance Evaluation Cooperative) floating-point benchmarks. The speedup [...]
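The static branch prediction extension above lets the compiler encode its guess about a conditional branch's direction into the opcode. One common static heuristic, shown here purely as an illustration and not as a description of Motorola's compilers, is to predict backward branches (which usually close loops) as taken and forward branches as not taken, letting profile data override the guess when it is available.

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Toy static branch predictor.  The single bit it returns is the kind of
     * hint a compiler could encode into a branch opcode; the heuristic and
     * the profile override are illustrative assumptions, not the 88110
     * definition.
     */
    bool predict_taken(int32_t displacement, bool have_profile, bool profile_taken)
    {
        if (have_profile)            /* measured behavior beats any static guess */
            return profile_taken;
        return displacement < 0;     /* backward branch: probably a loop, taken  */
    }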
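The option of treating 16-bit immediates as signed rather than unsigned comes down to sign extension versus zero extension when the immediate is widened to 32 bits. The short C sketch below shows the two interpretations of the same bit pattern; the function names are invented for the example.

    #include <stdint.h>

    /* 88100-style interpretation: the 16-bit immediate is zero-extended. */
    uint32_t zero_extend16(uint16_t imm)
    {
        return (uint32_t)imm;             /* 0xFFFF widens to 0x0000FFFF (65535) */
    }

    /* 88110 option: the same 16 bits may instead be sign-extended. */
    int32_t sign_extend16(uint16_t imm)
    {
        return (int32_t)(int16_t)imm;     /* 0xFFFF widens to 0xFFFFFFFF (-1)    */
    }

Signed offsets make it easier for a compiler to address locations at negative displacements from a base pointer, such as fields below a frame pointer, which unsigned-only immediates cannot express directly.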