Introduction to the MIPS64® Architecture Comes As Part of a Multi-Volume Set

Total Pages: 16

File Type: PDF, Size: 1020 KB

MIPS® Architecture For Programmers Volume I-A: Introduction to the MIPS64® Architecture
Document Number: MD00083
Revision 6.01, August 20, 2014
Public. This publication contains proprietary information which is subject to change without notice and is supplied 'as is', without any warranty of any kind.

Contents

Chapter 1: About This Book .... 12
  1.1: Typographical Conventions .... 13
    1.1.1: Italic Text .... 13
    1.1.2: Bold Text .... 13
    1.1.3: Courier Text .... 13
    1.1.4: Colored Text .... 13
  1.2: UNPREDICTABLE and UNDEFINED .... 13
    1.2.1: UNPREDICTABLE .... 13
    1.2.2: UNDEFINED .... 14
    1.2.3: UNSTABLE .... 14
  1.3: Special Symbols in Pseudocode Notation .... 15
  1.4: Notation for Register Field Accessibility .... 18
  1.5: For More Information .... 20

Chapter 2: Overview of the MIPS® Architecture .... 21
  2.1: Historical Perspective .... 21
  2.2: Components of the MIPS® Architecture .... 22
    2.2.1: MIPS Instruction Set Architecture (ISA) .... 22
    2.2.2: MIPS Privileged Resource Architecture (PRA) .... 22
    2.2.3: MIPS Modules and Application Specific Extensions (ASEs) .... 23
    2.2.4: MIPS User Defined Instructions (UDIs) .... 23
  2.3: Evolution of the Architecture .... 23
    2.3.1: MIPS I through MIPS V Architectures .... 24
    2.3.2: MIPS64 Architecture Release 2 .... 25
    2.3.3: MIPS64 Architecture Releases 2.5+ .... 26
    2.3.4: MIPS64 Release 3 Architecture (MIPSr3™) .... 26
    2.3.5: MIPS64 Architecture Release 5 .... 27
    2.3.6: MIPS64 Architecture Release 6 .... 28
  2.4: Compliance and Subsetting .... 30
    2.4.1: Subsetting of Non-Privileged Architecture .... 30
    2.4.2: Subsetting of Privileged Architecture .... 32

Chapter 3: Modules and Application Specific Extensions .... 35
  3.1: Description of Optional Components .... 35
  3.2: Application Specific Instructions .... 36
    3.2.1: MIPS16e™ Application Specific Extension .... 37
    3.2.2: MDMX™ Application Specific Extension .... 37
    3.2.3: MIPS-3D® Application Specific Extension .... 37
    3.2.4: SmartMIPS® Application Specific Extension .... 37
    3.2.5: MIPS® DSP Module .... 37
    3.2.6: MIPS® MT Module .... 37
    3.2.7: MIPS® MCU Application Specific Extension .... 37
    3.2.8: MIPS® Virtualization Module .... 38
    3.2.9: MIPS® SIMD Architecture Module .... 38

Chapter 4: CPU Programming Model .... 39
  4.1: CPU Data Formats .... 39
  4.2: Coprocessors (CP0-CP3) .... 39
  4.3: CPU Registers .... 40
    4.3.1: CPU General-Purpose Registers .... 40
    4.3.2: CPU Special-Purpose Registers .... 40
  4.4: Byte Ordering and Endianness .... 43
    4.4.1: Big-Endian Order .... 43
    4.4.2: Little-Endian Order .... 43
    4.4.3: MIPS Bit Endianness .... 43
  4.5: Memory Alignment .... 44
    4.5.1: Addressing Alignment Constraints .... 44
    4.5.2: Unaligned Load and Store Instructions (Removed in Release 6) .... 45
  4.6: Memory Access Types .... 45
    4.6.1: Uncached Memory Access .... 46
    4.6.2: Cached Memory Access .... 46
    4.6.3: Uncached Accelerated Memory Access .... 46
  4.7: Implementation-Specific Access Types .... 47
  4.8: Cacheability and Coherency Attributes and Access Types .... 47
  4.9: Mixing Access Types .... 48
  4.10: Instruction Fetch ....
Recommended publications
  • Memory Consistency
    Lecture 9: Memory Consistency. Parallel Computing, Stanford CS149, Winter 2019.
    Midterm: Feb 12, open notes; practice midterm available.
    Shared Memory Behavior
    ▪ Intuition says loads should return the latest value written
      - What is "latest"?
      - Coherence: concerns only one memory location
      - Consistency: apparent ordering for all locations, i.e., the order in which memory operations performed by one thread become visible to other threads
    ▪ Affects
      - Programmability: how programmers reason about program behavior, and the allowed behavior of multithreaded programs executing with shared memory
      - Performance: limits the HW/SW optimizations that can be used, e.g., reordering memory operations to hide latency
    Today: what you should know
    ▪ Understand the motivation for relaxed consistency models
    ▪ Understand the implications of relaxing W→R ordering
    Today: who should care
    ▪ Anyone who:
      - Wants to implement a synchronization library
      - Will ever work a job in kernel (or driver) development
      - Seeks to implement lock-free data structures (topic of a later lecture)
      - Does any of the above on ARM processors (for reasons to be described later)
    Memory coherence vs. memory consistency
    ▪ Memory coherence defines requirements for the observed behavior of reads and writes to the same memory location
      - All processors must agree on the order of reads/writes to X
      - In other words: it is possible to put all operations involving X on a timeline such that the observations of all processors are consistent with that timeline
    ▪ Memory consistency defines the behavior of reads and writes to different locations (as observed by other processors)
      - Coherence only guarantees that writes to address X will eventually propagate to other processors
      - Consistency deals with when writes to X propagate to other processors, relative to reads and writes to other addresses
    [Figure: observed chronology of operations on address X: P0 write: 5; P1 read (5); P2 write: 10; P2 write: 11; P1 read (11)]
    Coherence vs.
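The W→R relaxation discussed in the excerpt above is easiest to see in the classic store-buffering pattern. Below is a minimal C++ sketch (ours, not from the lecture) of a two-thread experiment: on hardware that relaxes write-to-read ordering, the run can occasionally observe r1 == 0 && r2 == 0, an outcome sequential consistency forbids.

```cpp
// Store-buffering litmus test: each thread writes one flag, then reads the other.
// Under sequential consistency, at least one thread must see the other's write,
// so (r1, r2) == (0, 0) is impossible; with relaxed W->R ordering it can occur.
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1 = 0, r2 = 0;

void thread0() {
    x.store(1, std::memory_order_relaxed);   // W: write x
    r1 = y.load(std::memory_order_relaxed);  // R: read y (may be reordered before the write)
}

void thread1() {
    y.store(1, std::memory_order_relaxed);   // W: write y
    r2 = x.load(std::memory_order_relaxed);  // R: read x
}

int main() {
    for (int trial = 0; trial < 100000; ++trial) {
        x = 0; y = 0; r1 = 0; r2 = 0;
        std::thread t0(thread0), t1(thread1);
        t0.join(); t1.join();
        if (r1 == 0 && r2 == 0)
            std::printf("trial %d: observed r1 == 0 && r2 == 0 (W->R reordering)\n", trial);
    }
    return 0;
}
```

Replacing the relaxed operations with std::memory_order_seq_cst (or inserting a full fence between the store and the load in each thread) eliminates the (0, 0) outcome, which is exactly the cost/benefit trade-off relaxed models expose.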
  • Memory Ordering: a Value-Based Approach
    Memory Ordering: A Value-Based Approach
    Harold W. Cain (Computer Sciences Dept., Univ. of Wisconsin-Madison, [email protected]) and Mikko H. Lipasti (Dept. of Elec. and Comp. Engr., Univ. of Wisconsin-Madison, [email protected])
    Abstract: Conventional out-of-order processors employ a multi-ported, fully-associative load queue to guarantee correct memory reference order both within a single thread of execution and across threads in a multiprocessor system. As improvements in process technology and pipelining lead to higher clock frequencies, scaling this complex structure to accommodate a larger number of in-flight loads becomes difficult if not impossible. Furthermore, each access to this complex structure consumes excessive amounts of energy. In this paper, we solve the associative load queue scalability problem by completely eliminating the associative load queue. Instead, data dependences and memory consistency constraints are enforced by simply re-executing load
    From the introduction: the structures (issue queues, load/store queues, etc.) used to find independent operations and correctly execute them out of program order are often constrained by clock cycle time. In order to decrease clock cycle time, the size of these conventional structures must usually decrease, also decreasing IPC. Conversely, IPC may be increased by increasing their size, but this also increases their access time and may degrade clock frequency. There has been much recent research on mitigating this negative feedback loop by scaling structures in ways that are amenable to high clock frequencies without negatively affecting IPC. Much of this work has focused on the instruction issue queue, physical register file, and bypass paths, but very little has focused on the load queue or store queue [1][18][21].
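The mechanism sketched in the abstract above, value-based replay, can be modeled in a few lines: instead of searching an associative load queue, the processor re-executes loads in program order near commit and compares each value against what the load originally returned. The C++ fragment below is an illustrative model of that check only; names such as ReplayQueueEntry and read_memory are ours, not the paper's.

```cpp
// Illustrative model of value-based replay: an ordering violation is detected by
// re-executing a committing load and comparing against the value it first returned.
#include <cstdint>
#include <functional>
#include <vector>

struct ReplayQueueEntry {        // hypothetical name; one entry per in-flight load
    uint64_t address;            // effective address of the load
    uint64_t original_value;     // value the load returned when it first executed
};

// read_memory stands in for a cache access at commit time (assumption for the sketch).
bool needs_squash(const std::vector<ReplayQueueEntry>& replay_queue,
                  const std::function<uint64_t(uint64_t)>& read_memory) {
    for (const auto& e : replay_queue) {
        // Re-execute the load in program order at commit.
        if (read_memory(e.address) != e.original_value) {
            // The location changed after the load executed out of order
            // (another thread's store, or an older local store): squash and replay.
            return true;
        }
    }
    return false;  // all replayed loads saw the same value; commit proceeds
}
```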
  • MIPS Architecture
    Spring 2011, Prof. Hyesoon Kim. MIPS Architecture
    • MIPS (Microprocessor without Interlocked Pipeline Stages)
    • MIPS Computer Systems Inc.; developed from Stanford
    • MIPS architecture usage
      - 1990s: R2000, R3000, R4000, Motorola 68000 family
      - PlayStation, PlayStation 2, Sony PSP handheld, Nintendo 64 console
      - Android
      - Shift to SoC
      http://en.wikipedia.org/wiki/MIPS_architecture
    • MIPS R4000 CPU core
      - Floating point and vector floating point co-processors
      - 3D-CG extended instruction sets
      - Graphics: 3D curved surfaces and other 3D functionality; hardware clipping, compressed texture handling
      - R4300 (embedded version): Nintendo 64
      http://www.digitaltrends.com/gaming/sony-announces-playstation-portable-specs/
    • Google TV (not yet out at the time): an Android-based software service that lets users switch between their TV content and Web applications such as Netflix and Amazon Video on Demand
      - Google TV: search capabilities
      - High stream data? Internet accesses?
      - Multi-threading, SMP design
      - High-end graphics processors
      - Several CODECs: hardware vs. software
      - Displaying the frame buffer, e.g., 1080p resolution: 1920 (H) x 1080 (V), color depth 4 bytes/pixel; 4 * 1920 * 1080 ~= 8.3 MB; 8.3 MB * 60 Hz ~= 498 MB/sec
    • Started from 32-bit, later 64-bit
    • microMIPS: 16-bit compressed version (similar to ARM Thumb)
    • SIMD additions: 64-bit floating point
    • User Defined Instructions (UDIs), coprocessors
    • Allows self-modifying code
    • Allows unaligned accesses
      http://www.spiritus-temporis.com/mips-architecture/
    • 32 64-bit general-purpose registers (GPRs)
    • A pair of special-purpose registers to hold the results of integer multiply, divide, and multiply-accumulate operations (HI and LO)
      - HI: multiply and divide register, higher result
      - LO: multiply and divide register, lower result
    • A special-purpose program counter (PC)
    • A MIPS64 processor always produces a 64-bit result
    • 32 floating-point registers (FPRs).
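The frame-buffer arithmetic at the end of the excerpt above is easy to reproduce. The short C++ sketch below (ours, using the lecture's 1080p numbers as inputs) computes the per-frame size and the bandwidth needed to refresh the display at 60 Hz.

```cpp
// Frame-buffer bandwidth for a 1080p display at 4 bytes/pixel and 60 Hz,
// reproducing the lecture's back-of-the-envelope numbers (~8.3 MB/frame, ~498 MB/s).
#include <cstdio>

int main() {
    const double width  = 1920.0;   // horizontal pixels
    const double height = 1080.0;   // vertical pixels
    const double bytes_per_pixel = 4.0;
    const double refresh_hz = 60.0;

    const double bytes_per_frame = width * height * bytes_per_pixel;      // 8,294,400 bytes
    const double mb_per_frame    = bytes_per_frame / 1e6;                 // ~8.3 MB per frame
    const double mb_per_second   = bytes_per_frame * refresh_hz / 1e6;    // ~497.7 MB/s

    std::printf("frame size : %.0f bytes (%.1f MB)\n", bytes_per_frame, mb_per_frame);
    std::printf("bandwidth  : %.1f MB/s at %.0f Hz\n", mb_per_second, refresh_hz);
    return 0;
}
```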
  • Superh RISC Engine SH-1/SH-2
    SuperH RISC Engine SH-1/SH-2 Programming Manual September 3, 1996 Hitachi America Ltd. Notice When using this document, keep the following in mind: 1. This document may, wholly or partially, be subject to change without notice. 2. All rights are reserved: No one is permitted to reproduce or duplicate, in any form, the whole or part of this document without Hitachi’s permission. 3. Hitachi will not be held responsible for any damage to the user that may result from accidents or any other reasons during operation of the user’s unit according to this document. 4. Circuitry and other examples described herein are meant merely to indicate the characteristics and performance of Hitachi’s semiconductor products. Hitachi assumes no responsibility for any intellectual property claims or other problems that may result from applications based on the examples described herein. 5. No license is granted by implication or otherwise under any patents or other rights of any third party or Hitachi, Ltd. 6. MEDICAL APPLICATIONS: Hitachi’s products are not authorized for use in MEDICAL APPLICATIONS without the written consent of the appropriate officer of Hitachi’s sales company. Such use includes, but is not limited to, use in life support systems. Buyers of Hitachi’s products are requested to notify the relevant Hitachi sales offices when planning to use the products in MEDICAL APPLICATIONS. Introduction The SuperH RISC engine family incorporates a RISC (Reduced Instruction Set Computer) type CPU. A basic instruction can be executed in one clock cycle, realizing high performance operation. A built-in multiplier can execute multiplication and addition as quickly as DSP.
  • The 32-Bit PA-RISC Run-Time Architecture Document
    The 32-bit PA-RISC Run-time Architecture Document, HP-UX 10.20, Version 3.0. (c) Copyright 1985-1997 HEWLETT-PACKARD COMPANY. The information contained in this document is subject to change without notice. HEWLETT-PACKARD MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THIS MATERIAL, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. Hewlett-Packard shall not be liable for errors contained herein or for incidental or consequential damages in connection with furnishing, performance, or use of this material. Hewlett-Packard assumes no responsibility for the use or reliability of its software on equipment that is not furnished by Hewlett-Packard. This document contains proprietary information which is protected by copyright. All rights are reserved. No part of this document may be photocopied, reproduced, or translated to another language without the prior written consent of Hewlett-Packard Company. CSO/STG/STD/CLO, Hewlett-Packard Company, 11000 Wolfe Road, Cupertino, California 95014. By The Run-time Architecture Team.
    CHAPTER 1: Introduction. This document describes the runtime architecture for PA-RISC systems running either the HP-UX or the MPE/iX operating system. Other operating systems running on PA-RISC may also use this runtime architecture or a variant of it. The runtime architecture defines all the conventions and formats necessary to compile, link, and execute a program on one of these operating systems. Its purpose is to ensure that object modules produced by many different compilers can be linked together into a single application, and to specify the interfaces between compilers and linker, and between linker and operating system.
  • Design of the RISC-V Instruction Set Architecture
    Design of the RISC-V Instruction Set Architecture Andrew Waterman Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-2016-1 http://www.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-1.html January 3, 2016 Copyright © 2016, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission. Design of the RISC-V Instruction Set Architecture by Andrew Shell Waterman A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley Committee in charge: Professor David Patterson, Chair Professor Krste Asanovi´c Associate Professor Per-Olof Persson Spring 2016 Design of the RISC-V Instruction Set Architecture Copyright 2016 by Andrew Shell Waterman 1 Abstract Design of the RISC-V Instruction Set Architecture by Andrew Shell Waterman Doctor of Philosophy in Computer Science University of California, Berkeley Professor David Patterson, Chair The hardware-software interface, embodied in the instruction set architecture (ISA), is arguably the most important interface in a computer system. Yet, in contrast to nearly all other interfaces in a modern computer system, all commercially popular ISAs are proprietary.
  • Connect User Guide Armv8-A Memory Systems
    ARMv8-A Memory Systems, Connect User Guide (Version 0.1 / Version 1.0). Copyright © 2016 ARM Limited or its affiliates. All rights reserved. ARM 100941_0100_en.
    Revision Information: the following revisions have been made to this User Guide. Date: 28 February 2017; Issue: 0100; Confidentiality: Non-Confidential; Change: First release.
    Proprietary Notice: Words and logos marked with ® or ™ are registered trademarks or trademarks of ARM® in the EU and other countries, except as otherwise stated below in this proprietary notice. Other brands and names mentioned herein may be the trademarks of their respective owners. Neither the whole nor any part of the information contained in, or the product described in, this document may be adapted or reproduced in any material form except with the prior written permission of the copyright holder. The product described in this document is subject to continuous developments and improvements. All particulars of the product and its use contained in this document are given by ARM in good faith. However, all warranties implied or expressed, including but not limited to implied warranties of merchantability, or fitness for purpose, are excluded. This document is intended only to assist the reader in the use of the product. ARM shall not be liable for any loss or damage arising from the use of any information in this document, or any error or omission in such information, or any incorrect use of the product. Where the term ARM is used it means "ARM or any of its subsidiaries as appropriate".
    Confidentiality Status: This document is Confidential.
  • PA-RISC 1.1 Architecture and Instruction Set Reference Manual
    PA-RISC 1.1 Architecture and Instruction Set Reference Manual HP Part Number: 09740-90039 Printed in U.S.A. February 1994 Third Edition Notice The information contained in this document is subject to change without notice. HEWLETT-PACKARD MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THIS MATERIAL, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. Hewlett-Packard shall not be liable for errors contained herein or for incidental or consequential damages in connection with furnishing, performance, or use of this material. Hewlett-Packard assumes no responsibility for the use or reliability of its software on equipment that is not furnished by Hewlett-Packard. This document contains proprietary information which is protected by copyright. All rights are reserved. No part of this document may be photocopied, reproduced, or translated to another language without the prior written consent of Hewlett-Packard Company. Copyright © 1986 – 1994 by HEWLETT-PACKARD COMPANY Printing History The printing date will change when a new edition is printed. The manual part number will change when extensive changes are made. First Edition . November 1990 Second Edition. September 1992 Third Edition . February 1994 Contents Contents . iii Preface. ix 1 Overview . 1-1 Introduction. 1-1 System Features . 1-2 PA-RISC 1.1 Enhancements . 1-2 System Organization . 1-4 2 System Organization . 2-1 Introduction. 2-1 Memory and I/O Addressing . 2-2 Byte Ordering (Big Endian/Little Endian) . 2-3 Levels of PA-RISC. 2-5 Data Types . 2-5 Processing Resources. 2-7 3 Addressing and Access Control.
  • PIPELINING Basics
    PIPELINING basics
    • A pipelined architecture for MIPS
    • Hurdles in pipelining
    • Simple solutions to pipelining hurdles
    • Advanced pipelining
    • Conclusive remarks
    MIPS pipelined architecture
    • The simplified MIPS architecture can be realized by having each instruction execute in a single clock cycle, approximately as long as the 5 clocks required to complete the 5 phases. Why would this approach be inconvenient?
    • We already know one reason: the longer cycle would waste time in all instructions that take less to execute (fewer than 5 clocks).
    • There is another relevant reason: by breaking the execution of each instruction down into more phases (clock cycles), it is possible to (partially) overlap the execution of several instructions.
    • At each clock cycle, while one section of the datapath takes care of an instruction, another section can be used to execute another instruction.
    • If we start a new instruction at each new clock cycle, each of the 5 phases of the multi-cycle MIPS architecture becomes a stage in the pipeline, and the pattern of execution of a sequence of instructions looks like this (Hennessy-Patterson, Fig. A.1):

      instr. / cycle   1    2    3    4    5    6    7    8    9
      instr. i         IF   ID   EX   MEM  WB
      instr. i+1            IF   ID   EX   MEM  WB
      instr. i+2                 IF   ID   EX   MEM  WB
      instr. i+3                      IF   ID   EX   MEM  WB
      instr. i+4                           IF   ID   EX   MEM  WB

    • Pipelining only works if one does not attempt to execute at the same time two different operations that use the same datapath resource:
      - for instance, if the datapath has a single ALU, it cannot concurrently compute the effective address of a load and the subtraction of the operands of another instruction
    • Using reduced (simple) instructions (namely RISC) makes it fairly easy to determine at any given time which datapath resources are free and which are busy.
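To make the overlap concrete, here is a small, self-contained C++ sketch (ours, not the lecture's) that prints which of the five stages each instruction occupies at every cycle, assuming one instruction issues per cycle and no hazards; its output reproduces the staggered IF/ID/EX/MEM/WB pattern shown above.

```cpp
// Prints the ideal 5-stage pipeline diagram: instruction k enters IF at cycle k+1
// and occupies one stage per cycle, so 5 instructions finish in 9 cycles.
#include <cstdio>

int main() {
    const char* stages[] = {"IF", "ID", "EX", "MEM", "WB"};
    const int num_stages = 5;
    const int num_instructions = 5;
    const int total_cycles = num_instructions + num_stages - 1;  // 9 cycles

    std::printf("%-14s", "instr/cycle");
    for (int c = 1; c <= total_cycles; ++c) std::printf("%-5d", c);
    std::printf("\n");

    for (int i = 0; i < num_instructions; ++i) {
        std::printf("instr. i+%-5d", i);
        for (int c = 1; c <= total_cycles; ++c) {
            int stage = c - 1 - i;               // stage index occupied at cycle c
            if (stage >= 0 && stage < num_stages)
                std::printf("%-5s", stages[stage]);
            else
                std::printf("%-5s", "");         // instruction not in the pipe this cycle
        }
        std::printf("\n");
    }
    return 0;
}
```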
  • Recent Advances in Memory Consistency Models for Hardware
    Selected for Proc. of the IEEE, special issue on distributed shared memory (not the final version).
    Recent Advances in Memory Consistency Models for Hardware Shared-Memory Systems
    Sarita V. Adve, Vijay S. Pai, and Parthasarathy Ranganathan, Department of Electrical and Computer Engineering, Rice University, Houston, Texas. rsim@ece.rice.edu
    Abstract: The memory consistency model of a shared-memory system determines the order in which memory operations will appear to execute to the programmer. The memory consistency model for a system typically involves a tradeoff between performance and programmability. This paper provides an overview of recent advances in hardware optimizations, compiler optimizations, and programming environments relevant to memory consistency models of hardware distributed shared-memory systems. We discuss recent hardware and compiler optimizations that exploit the observation that it is sufficient to only appear as if the ordering rules of the consistency model are obeyed. These optimizations substantially
    From the introduction: a memory consistency model is a specification that determines what values the programmer can expect a read to return. Uniprocessors provide a simple memory consistency model to the programmer that ensures that memory operations will appear to execute one at a time and in the order specified by the program, or program order. Thus a read in a uniprocessor returns the value of the last write to the same location, where "last" is uniquely defined by the program order. Uniprocessor hardware and compilers, however, do not necessarily execute memory operations one
  • Fence Scoping
    Fence Scoping
    Changhui Lin (CSE Department, University of California, Riverside, [email protected]), Vijay Nagarajan (School of Informatics, University of Edinburgh, UK, [email protected]), Rajiv Gupta (CSE Department, University of California, Riverside, [email protected])
    Abstract: We observe that fence instructions used by programmers are usually only intended to order memory accesses within a limited scope. Based on this observation, we propose the concept of fence scope, which defines the scope within which a fence enforces the order of memory accesses, called scoped fence (S-Fence). S-Fence is a customizable fence, which enables programmers to express ordering demands by specifying the scope of fences when they only want to order part of memory accesses. At runtime, hardware uses the scope information conveyed by programmers to execute fence instructions in a manner that imposes fewer memory ordering constraints than a traditional fence, and hence improves program performance. Our experimental results show that the benefit of S-Fence hinges on the characteristics of applications and hardware parameters. A group of lock-free algorithms achieve peak speedups ranging from 1.13x to 1.34x, while full applications achieve speedups ranging from 1.04x to 1.23x.
    From the introduction: [lock-free algorithms] avoid these problems by eschewing mutual exclusion, and improve scalability and robustness while still ensuring safety. For example, non-blocking work-stealing [10] is a popular approach for balancing load in parallel programs. Cilk [16] builds on work-stealing to support load balancing of fully strict computations. X10 [11] extends the work-stealing approach in Cilk to support terminally strict computations. Other commonly used concurrent algorithms include the non-blocking concurrent queue [33], which is implemented in the Java class ConcurrentLinkedQueue, the Lamport queue [28], Harris's set [20], etc.
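The kind of limited scope the paper describes can be illustrated with an ordinary C++ fence. The sketch below is a simplified Lamport-style single-producer/single-consumer queue (one of the algorithms the abstract cites), written by us rather than taken from the paper, and it uses standard std::atomic_thread_fence rather than S-Fence, which needs hardware support not modeled here. The point is that the fences only need to order accesses to this queue's own fields; S-Fence lets the programmer say exactly that, so the hardware does not have to stall unrelated outstanding memory operations.

```cpp
// Lamport-style SPSC queue: the explicit fences order only the queue's own
// buffer/head/tail accesses, i.e., a naturally "scoped" ordering requirement.
#include <atomic>
#include <cstddef>

template <typename T, size_t N>
class SpscQueue {
    T buffer_[N];
    std::atomic<size_t> head_{0};  // written by the producer only
    std::atomic<size_t> tail_{0};  // written by the consumer only
public:
    bool enqueue(const T& item) {                       // producer thread
        size_t h = head_.load(std::memory_order_relaxed);
        size_t t = tail_.load(std::memory_order_acquire);     // see slots the consumer released
        if (h - t == N) return false;                          // full
        buffer_[h % N] = item;                                 // write payload first
        std::atomic_thread_fence(std::memory_order_release);   // order payload before publish
        head_.store(h + 1, std::memory_order_relaxed);          // publish the new element
        return true;
    }
    bool dequeue(T& out) {                              // consumer thread
        size_t t = tail_.load(std::memory_order_relaxed);
        size_t h = head_.load(std::memory_order_relaxed);
        if (t == h) return false;                              // empty
        std::atomic_thread_fence(std::memory_order_acquire);   // order publish before payload read
        out = buffer_[t % N];
        tail_.store(t + 1, std::memory_order_release);          // release the slot after reading
        return true;
    }
};
```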
  • Digital Design & Computer Arch. Lecture 17: Branch Prediction II
    Digital Design & Computer Arch. Lecture 17: Branch Prediction II. Prof. Onur Mutlu, ETH Zürich, Spring 2020, 24 April 2020.
    Required Readings (this week)
    • Smith and Sohi, "The Microarchitecture of Superscalar Processors," Proceedings of the IEEE, 1995
    • H&H Chapters 7.8 and 7.9
    • McFarling, "Combining Branch Predictors," DEC WRL Technical Report, 1993
    Recall: How to Handle Control Dependences
    • Critical to keep the pipeline full with the correct sequence of dynamic instructions.
    • Potential solutions if the instruction is a control-flow instruction:
      - Stall the pipeline until we know the next fetch address
      - Guess the next fetch address (branch prediction)
      - Employ delayed branching (branch delay slot)
      - Do something else (fine-grained multithreading)
      - Eliminate control-flow instructions (predicated execution)
      - Fetch from both possible paths, if you know the addresses of both (multipath execution)
    Recall: Fetch Stage with BTB and Direction Prediction
    [Figure: the direction predictor ("taken?") and the cache of target addresses (BTB: Branch Target Buffer), both indexed by the address of the current branch, select the next fetch address; on a miss the next fetch address is PC + instruction size.]
    Recall: More Sophisticated Direction Prediction
    • Compile time (static)
      - Always not taken
      - Always taken
      - BTFN (backward taken, forward not taken)
      - Profile based (likely direction)
      - Program analysis based (likely direction)
    • Run time (dynamic)
      - Last time prediction (single bit)
      - Two-bit counter based prediction
      - Two-level prediction (global vs. local)
      - Hybrid
      - Advanced algorithms
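As a concrete example of the "two-bit counter based prediction" item above, here is a minimal C++ sketch (ours, not from the lecture) of a table of 2-bit saturating counters indexed by the low bits of the branch PC: predict taken in the two upper states, and move one state toward the actual outcome after each branch resolves.

```cpp
// Bimodal branch predictor: one 2-bit saturating counter per table entry.
// States 0-1 predict not-taken, states 2-3 predict taken; each resolved branch
// nudges its counter one step toward the actual outcome.
#include <cstdint>
#include <vector>

class TwoBitPredictor {
    std::vector<uint8_t> counters_;   // each entry holds a value in [0, 3]
    uint64_t index_mask_;
public:
    explicit TwoBitPredictor(size_t table_size = 4096)
        : counters_(table_size, 1),              // start in "weakly not-taken"
          index_mask_(table_size - 1) {}         // table_size must be a power of two

    bool predict(uint64_t branch_pc) const {
        return counters_[branch_pc & index_mask_] >= 2;   // 2 or 3 => predict taken
    }

    void update(uint64_t branch_pc, bool actually_taken) {
        uint8_t& c = counters_[branch_pc & index_mask_];
        if (actually_taken  && c < 3) ++c;       // saturate at "strongly taken"
        if (!actually_taken && c > 0) --c;       // saturate at "strongly not-taken"
    }
};
```

The hysteresis is the point: a loop branch that is taken many times and then falls through once stays in a "taken" state, so the single mispredict does not flip the next prediction, which is the advantage over last-time (single-bit) prediction.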