Reduced Instruction Set Computers

Total pages: 16

File type: PDF, size: 1020 KB

Reduced instruction set computers aim for both simplicity in hardware and synergy between architectures and compilers. Optimizing compilers are used to compile programming languages down to instructions that are as unencumbered as microinstructions in a large virtual address space, and to make the instruction cycle time as fast as possible.

DAVID A. PATTERSON

As circuit technologies reduce the relative cost of processing and memory, instruction sets that are too complex become a distinct liability to performance. The designers of reduced instruction set computers (RISCs) strive for both simplicity in hardware and synergy between architecture and compilers, in order to streamline processing as much as possible. Early experience indicates that RISCs can in fact run much faster than more conventionally designed machines.

BACKGROUND

The IBM System/360, first introduced in 1964, was the real beginning of modern computer architecture. Although computers in the System/360 "family" provided a different level of performance for a different price, all ran identical software. The System/360 originated the distinction between computer architecture (the abstract structure of a computer that a machine-language programmer needs to know to write programs) and the hardware implementation of that structure. Before the System/360, architectural trade-offs were determined by the effect on price and performance of a single implementation; henceforth, architectural trade-offs became more esoteric. The consequences of single implementations would no longer be sufficient to settle an argument about instruction set design.

Microprogramming was the primary technological innovation behind this marketing concept. Microprogramming relied on a small control memory and was an elegant way of building the processor control unit for a large instruction set. Each word of control memory is called a microinstruction, and the contents are essentially an interpreter, programmed in microinstructions. The main memories of these computers were magnetic core memories; the small control memories were usually 10 times faster than core.
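To make the phrase "essentially an interpreter" concrete, here is a minimal C sketch of a microprogrammed control unit: each machine opcode selects a microprogram, a short sequence of primitive micro-operations held in a control memory, and a small loop interprets those micro-operations against the datapath. The opcode, micro-operations, and three-word memory are invented purely for illustration and do not correspond to any machine discussed in the article.

```c
#include <stdint.h>
#include <stdio.h>

/* Primitive micro-operations the datapath can perform. */
enum uop { UFETCH_B, UFETCH_C, UADD, USTORE_A, UEND };

/* Control memory: each machine opcode indexes a microprogram that
 * interprets it as a sequence of micro-operations. */
static const enum uop control_memory[][8] = {
    /* opcode 0: a "rich" memory-to-memory instruction, A <- B + C */
    { UFETCH_B, UFETCH_C, UADD, USTORE_A, UEND },
};

static int32_t mem[3] = { 0, 7, 5 };   /* main memory: A, B, C */

static void execute(int opcode) {
    int32_t t1 = 0, t2 = 0;            /* internal datapath latches */
    for (const enum uop *up = control_memory[opcode]; *up != UEND; up++) {
        switch (*up) {                 /* the microinstruction interpreter */
        case UFETCH_B: t1 = mem[1]; break;
        case UFETCH_C: t2 = mem[2]; break;
        case UADD:     t1 = t1 + t2; break;
        case USTORE_A: mem[0] = t1;  break;
        default:       break;
        }
    }
}

int main(void) {
    execute(0);                        /* run the single machine instruction */
    printf("A = %d\n", (int)mem[0]);   /* prints A = 12 */
    return 0;
}
```

In this scheme a richer instruction set costs little more than a longer control memory, which is why fast, cheap control-store ROM encouraged ever larger microprograms.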
Minicomputer manufacturers tend to follow the lead of mainframe manufacturers, especially when the mainframe manufacturer is IBM, and so microprogramming caught on quickly. The rapid growth of semiconductor memories also speeded this trend. In the early 1970s, for example, 8192 bits of read-only memory (ROM) took up no more space than 8 bits of register. Eventually, minicomputers using core main memory and semiconductor control memory became standard in the minicomputer industry.

With the continuing growth of semiconductor memory, a much richer and more complicated instruction set could be implemented. The architecture research community argued for richer instruction sets. Let us review some of the arguments they advanced at that time:

1. Richer instruction sets would simplify compilers. As the story was told, compilers were very hard to build, and compilers for machines with registers were the hardest of all. Compilers for architectures with execution models based either on stacks or memory-to-memory operations were much simpler and more reliable.

2. Richer instruction sets would alleviate the software crisis. At a time when software costs were rising as fast as hardware costs were dropping, it seemed appropriate to move as much function to the hardware as possible. The idea was to create machine instructions that resembled programming language statements, so as to close the "semantic gap" between programming languages and machine languages.

3. Richer instruction sets would improve architecture quality. After IBM differentiated architecture from implementation, the research community looked for ways to measure the quality of an architecture, as opposed to the speed at which implementations could run programs. The only architectural metrics then widely recognized were program size, the number of bits of instructions, and the bits of data fetched from memory during program execution (see Figure 1).

Memory efficiency was such a dominating concern in these metrics because main memory (magnetic core memory) was so slow and expensive. These metrics are partially responsible for the prevailing belief in the 1970s that execution speed was proportional to program size. It even became fashionable to examine long lists of instruction executions to see if a pair or triple of old instructions could be replaced by a single, more powerful instruction. The belief that larger programs were invariably slower programs inspired the invention of many exotic instruction formats that reduced program size.

FIGURE 1. The statement A ← B + C translated into assembly language for three execution models:

• Register-to-register: Load rB,B; Load rC,C; Add rA,rB,rC; Store rA,A (I = 104b; D = 96b; M = 200b)
• Memory-to-register: Load B; Add C; Store A (I = 72b; D = 96b; M = 168b)
• Memory-to-memory: Add B,C,A (I = 56b; D = 96b; M = 152b)

In this example, the three data words are 32 bits each and the address field is 16 bits. Metrics were selected by research architects for deciding which architecture is best; they selected the total size of executed instructions (I), the total size of executed data (D), and the total memory traffic, that is, the sum of I and D (M). These metrics suggest that a memory-to-memory architecture is the "best" architecture, and a register-to-register architecture the "worst." This study led one research architect in 1978 to suggest that future architectures should not include registers.
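The byte counts in Figure 1 follow from simple arithmetic over the field widths. The short C sketch below recomputes I, D, and M for the three execution models under the figure's stated assumptions (8-bit opcodes, 4-bit register fields, 16-bit address fields, 32-bit data words); it merely restates the figure and is not part of the original article.

```c
#include <stdio.h>

int main(void) {
    const int op = 8, reg = 4, addr = 16, word = 32;  /* field widths in bits */
    const int d = 3 * word;            /* A, B, and C each cross the memory bus once */

    /* Register-to-register: Load rB,B; Load rC,C; Add rA,rB,rC; Store rA,A */
    int i_rr = 2 * (op + reg + addr)   /* two loads          */
             + (op + 3 * reg)          /* three-register add */
             + (op + reg + addr);      /* one store          */

    /* Memory-to-register: Load B; Add C; Store A (one address per instruction) */
    int i_mr = 3 * (op + addr);

    /* Memory-to-memory: Add B,C,A (three addresses in one instruction) */
    int i_mm = op + 3 * addr;

    printf("reg-reg: I=%db D=%db M=%db\n", i_rr, d, i_rr + d);  /* 104, 96, 200 */
    printf("mem-reg: I=%db D=%db M=%db\n", i_mr, d, i_mr + d);  /*  72, 96, 168 */
    printf("mem-mem: I=%db D=%db M=%db\n", i_mm, d, i_mm + d);  /*  56, 96, 152 */
    return 0;
}
```

By these metrics alone the memory-to-memory form wins, which is exactly the conclusion the article attributes to the research architects of the time.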
The rapid rise of the integrated circuit, along with arguments from the architecture research community in the 1970s, led to certain design principles that guided computer architecture:

• The memory technology used for microprograms was growing rapidly, so large microprograms would add little or nothing to the cost of the machine.
• Since microinstructions were much faster than normal machine instructions, moving software functions to microcode made for faster computers and more reliable functions.
• Since execution speed was proportional to program size, architectural techniques that led to smaller programs also led to faster computers.
• Registers were old fashioned and made it hard to build compilers; stacks or memory-to-memory architectures were superior execution models. As one architecture researcher put it in 1978, "One's eyebrows should rise whenever a future architecture is developed with a register-oriented instruction set."¹

¹ Myers, G.J. The case against stack-oriented instruction sets. Comput. Archit. News 6, 3 (Aug. 1977), 7-10.

Computers that exemplify these design principles are the IBM 370/168, the DEC VAX-11/780, the Xerox Dorado, and the Intel iAPX-432. Table I shows some of the characteristics of these machines.

TABLE I. Four Implementations of Modern Architectures

                          IBM 370/168        VAX-11/780         Dorado     iAPX-432
Year                      1973               1978               1978       1982
Number of instructions    208                303                270        222
Control memory size       420 Kb             480 Kb             138 Kb     64 Kb
Instruction sizes (bits)  16-48              16-456             8-24       6-321
Technology                ECL MSI            TTL MSI            ECL MSI    NMOS VLSI
Execution model           reg-mem, mem-mem,  reg-mem, mem-mem,  stack      stack,
                          reg-reg            reg-reg                       mem-mem
Cache size                64 Kb              64 Kb              64 Kb      0

These four implementations, designed in the 1970s, all used microprogramming. The emphasis on memory efficiency at that time led to the varying-sized instruction formats of the VAX and the 432. Note how much larger the control memories were than the cache memories.

Although computer architects were reaching a consensus on design principles, the implementation world was changing around them:

• Semiconductor memory was replacing core, which meant that main memories would no longer be 10 times slower than control memories.
• Since it was virtually impossible to remove all mistakes in 400,000 bits of microcode, control store ROMs were becoming control store RAMs.
• Caches had been invented; studies showed that the locality of programs meant that small, fast buffers could make a substantial improvement in the implementation speed of an architecture. As Table I shows, caches were included in nearly every machine, though control memories were much larger than cache memories.
• Compilers were subsetting architectures: simple compilers found it difficult to generate the complex new functions that were included to help close the "semantic gap." Optimizing compilers subsetted architectures because they removed so many of the unknowns at compile time that they rarely needed the powerful instructions at run time.

[...]ing microprograms, since microprogramming was the most tedious form of machine-language programming. Many researchers, including myself, built compilers and debuggers for microprogramming. This was a formidable assignment, for virtually no inefficiencies could be tolerated in microcode. These demands led to the invention of new programming languages for microprogramming and new compiler techniques.

Unfortunately for me and several other researchers, there were three more impediments that kept writable control stores (WCSs) from being very popular. (Although a few machines offer WCS as an option today, it is unlikely that more than one in a thousand programmers take this option.) These impediments were:

• Virtual memory complications. Once computers made the transition from physical memory to virtual memory, microprogrammers incurred the added difficulty of making sure that any routine could start over if any memory operand caused a virtual memory fault.
• Limited address space. The most difficult programming situation occurs when a program must be forced to fit in too small a memory. With control memories of 4096 words or less, some unfortunate WCS developers spent more time squeezing space
Recommended publications
  • Configurable RISC-V Softcore Processor for FPGA Implementation
    Configurable RISC-V softcore processor for FPGA implementation. João Filipe Monteiro Rodrigues, Instituto Superior Técnico, Universidade de Lisboa. Abstract: Over the past years, the processor market has been dominated by proprietary architectures that implement instruction sets that require licensing and the payment of fees to receive permission so they can be used. ARM is an example of one of those companies that sells its microarchitectures to manufacturers so they can implement them into their own products, and it does not allow the use of its instruction set (ISA) in other implementations without licensing. The RISC-V instruction set appeared proposing hardware and software development without costs, through the creation of an open-source ISA. This way, it is possible that any project that implements the RISC-V ISA can be made available open-source or even implemented in commercial products. However, the RISC-V solutions that have been developed do not present the needed requirements so they can be included in projects, especially research projects, because they offer poor documentation and their performances are not suitable. [...] and development of several programming tools. The RISC-V Foundation controls the RISC-V evolution, and its members are responsible for promoting the adoption of RISC-V and participating in the development of the new ISA. In the list of members are big companies like Google, NVIDIA, Western Digital, Samsung, or Qualcomm. The main goal of this work is the development of a RISC-V softcore processor to be implemented in an FPGA, using a non-RISC-V core as the base of this architecture. The proposed solution is focused on solving the problems and limitations identified in the other RISC-V cores that were analyzed in this thesis, especially in terms of adaptability and flexibility, allowing future modifications according to the
  • Computer Architecture Research with RISC-V
    Computer Architecture Research with RISC-V. Krste Asanovic, UC Berkeley, RISC-V Foundation, & SiFive Inc. [email protected] www.riscv.org CARRV, Boston, MA, October 14, 2017. Only Two Big Mistakes Possible when Picking Research ISA: § Design your own § Use someone else's. Promise of using commercially popular ISAs for research: § Ported applications/workloads to study § Standard software stacks (compilers, OS) § Real commercial hardware to experiment with § Real commercial hardware to validate models with § Existing implementations to study / modify § Industry is more interested in your results. Types of projects and standard ISAs used by me or my group in last 30 years: § Experiments on real hardware platforms: - Transputer arrays, SPARC workstations, MIPS workstations, POWER workstations, ARMv7 handhelds, x86 desktops/servers § Research chips built around modified MIPS ISA: - T0, IRAM, STC1, Scale, Maven § FPGA prototypes/simulations using various ISAs: - RAMP Blue (modified Microblaze), RAMP Gold/DIABLO (SPARC v8) § Experiments using software architectural simulators: - SimpleScalar (PISA), SMTsim (Alpha), Simics (SPARC, x86), Bochs (x86), MARSS (x86), Gem5 (SPARC), PIN (Itanium, x86), … § And of course, other groups used some others too. Realities of using standard ISAs: § Everything only works if you don't change anything - Stock binary applications - Stock libraries - Stock compiler - Stock OS - Stock hardware implementation § Add a new instruction, get a new non-standard ISA! - Need source code for the apps and recompile - Impossible for most real interesting applications
  • A Developer's Guide to the POWER Architecture
    http://www.ibm.com/developerworks/linux/library/l-powarch/ A developer's guide to the POWER architecture: POWER programming by the book. Brett Olsson, Processor architect, IBM; Anthony Marsala, Software engineer, IBM. Summary: POWER® processors are found in everything from supercomputers to game consoles and from servers to cell phones -- and they all share a common architecture. This introduction to the PowerPC application-level programming model will give you an overview of the instruction set, important registers, and other details necessary for developing reliable, high performing POWER applications and maintaining code compatibility among processors. Date: 30 Mar 2004. Level: Intermediate. Also available in: Japanese. The POWER architecture and the application-level programming model are common across all branches of the POWER architecture family tree. For detailed information, see the product user's manuals available in the IBM® POWER Web site technical library (see Resources for a link). The POWER architecture is a Reduced Instruction Set Computer (RISC) architecture, with over two hundred defined instructions. POWER is RISC in that most instructions execute in a single cycle and typically perform a single operation (such as loading storage to a register, or storing a register to memory). The POWER architecture is broken up into three levels, or "books." By segmenting the architecture in this way, code compatibility can be maintained across implementations while leaving room for implementations to choose levels of complexity for price/performance trade-offs. The levels are: Book I.
  • RISC-V Geneology
    RISC-V Geneology Tony Chen David A. Patterson Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-2016-6 http://www.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-6.html January 24, 2016 Copyright © 2016, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission. Introduction RISC-V is an open instruction set designed along RISC principles developed originally at UC Berkeley1 and is now set to become an open industry standard under the governance of the RISC-V Foundation (www.riscv.org). Since the instruction set architecture (ISA) is unrestricted, organizations can share implementations as well as open source compilers and operating systems. Designed for use in custom systems on a chip, RISC-V consists of a base set of instructions called RV32I along with optional extensions for multiply and divide (RV32M), atomic operations (RV32A), single-precision floating point (RV32F), and double-precision floating point (RV32D). The base and these four extensions are collectively called RV32G. This report discusses the historical precedents of RV32G. We look at 18 prior instruction set architectures, chosen primarily from earlier UC Berkeley RISC architectures and major proprietary RISC instruction sets. Among the 122 instructions in RV32G: ● 6 instructions do not have precedents among the selected instruction sets, ● 98 instructions of the 116 with precedents appear in at least three different instruction sets.
  • I.T.S.O. PowerPC An Inside View
    SG24-4299-00 PowerPC An Inside View IBM SG24-4299-00 PowerPC An Inside View Take Note! Before using this information and the product it supports, be sure to read the general information under “Special Notices” on page xiii. First Edition (September 1995) This edition applies to the IBM PC PowerPC hardware and software products currently announced at the date of publication. Order publications through your IBM representative or the IBM branch office serving your locality. Publications are not stocked at the address given below. An ITSO Technical Bulletin Evaluation Form for reader′s feedback appears facing Chapter 1. If the form has been removed, comments may be addressed to: IBM Corporation, International Technical Support Organization Dept. JLPC Building 014 Internal Zip 5220 1000 NW 51st Street Boca Raton, Florida 33431-1328 When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you. Copyright International Business Machines Corporation 1995. All rights reserved. Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp. Abstract This document provides technical details on the PowerPC technology. It focuses on the features and advantages of the PowerPC Architecture and includes an historical overview of the development of the reduced instruction set computer (RISC) technology. It also describes in detail the IBM Power Series product family based on PowerPC technology, including IBM Personal Computer Power Series 830 and 850 and IBM ThinkPad Power Series 820 and 850.
  • Design of the RISC-V Instruction Set Architecture
    Design of the RISC-V Instruction Set Architecture Andrew Waterman Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-2016-1 http://www.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-1.html January 3, 2016 Copyright © 2016, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission. Design of the RISC-V Instruction Set Architecture by Andrew Shell Waterman A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley Committee in charge: Professor David Patterson, Chair Professor Krste Asanovi´c Associate Professor Per-Olof Persson Spring 2016 Design of the RISC-V Instruction Set Architecture Copyright 2016 by Andrew Shell Waterman 1 Abstract Design of the RISC-V Instruction Set Architecture by Andrew Shell Waterman Doctor of Philosophy in Computer Science University of California, Berkeley Professor David Patterson, Chair The hardware-software interface, embodied in the instruction set architecture (ISA), is arguably the most important interface in a computer system. Yet, in contrast to nearly all other interfaces in a modern computer system, all commercially popular ISAs are proprietary.
  • RISC Computers
    Reduced Instruction Set Computers Prof. Vojin G. Oklobdzija University of California Keywords: IBM 801; RISC; computer architecture; Load/Store architecture; instruction sets; pipelining; super-scalar machines; super-pipeline machines; optimizing compiler; Branch and Execute; Delayed Branch; Definitions Main features of RISC architecture Analysis of RISC and what makes RISC What brings performance to RISC Going beyond one instruction per cycle Issues in super-scalar machines 1. Architecture The term Computer Architecture was first defined in the paper by Amdahl, Blaauw and Brooks of IBM Corporation announcing IBM System/360 computer family on April 7, 1964 [1]. On that day IBM Corporation introduced, in the words of IBM spokesman, "the most important product announcement that this corporation has made in its history". Computer architecture was defined as the attributes of a computer seen by the machine language programmer as described in the Principles of Operation. IBM referred to the Principles of Operation as a definition of the machine which enables machine language programmer to write functionally correct, time independent programs that would run across a number of implementations of that particular architecture. The architecture specification covers: all functions of the machine that are observable by the program [2]. On the other hand Principles of Operation. are used to define the functions that the implementation should provide. In order to be functionally correct it is necessary that the implementation conforms to the Principles of Operation. Principles of Operation document defines computer architecture which includes: • Instruction set • Instruction format • Operation codes • Addressing modes • All registers and memory locations that may be directly manipulated or tested by a machine language program • Formats for data representation Machine Implementation was defined as the actual system organization and hardware structure encompassing the major functional units, data paths, and control.
  • Computer Architectures an Overview
    Computer Architectures An Overview. PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information. PDF generated at: Sat, 25 Feb 2012 22:35:32 UTC. Contents: Microarchitecture 1, x86 7, PowerPC 23, IBM POWER 33, MIPS architecture 39, SPARC 57, ARM architecture 65, DEC Alpha 80, AlphaStation 92, AlphaServer 95, Very long instruction word 103, Instruction-level parallelism 107, Explicitly parallel instruction computing 108, Article Sources and Contributors 111, Image Sources, Licenses and Contributors 113, Article Licenses 114. Microarchitecture: In computer engineering, microarchitecture (sometimes abbreviated to µarch or uarch), also called computer organization, is the way a given instruction set architecture (ISA) is implemented on a processor. A given ISA may be implemented with different microarchitectures.[1] Implementations might vary due to different goals of a given design or due to shifts in technology.[2] Computer architecture is the combination of microarchitecture and instruction set design. Relation to instruction set architecture: The ISA is roughly the same as the programming model of a processor as seen by an assembly language programmer or compiler writer. The ISA includes the execution model, processor registers, address and data formats among other things. The microarchitecture includes the constituent parts of the processor and how these interconnect and interoperate to implement the ISA. The microarchitecture of a machine is usually represented as (more or less detailed) diagrams that describe the interconnections of the various microarchitectural elements of the machine, which may be everything from single gates and registers, to complete arithmetic logic units (ALUs) and even larger elements.
  • Dynamicsilicon Gilder Publishing, LLC
    Written by Nick Tredennick. Published by Gilder Publishing, LLC. DynamicSilicon, Vol. 2, No. 9, The Investor's Guide to Breakthrough Micro Devices, September 2002. Lessons From the PC. The worldwide market for personal computers has grown to 135 million units annually. Personal computers represent half of the worldwide revenue for semiconductors. In July of this year, PC makers shipped their billionth PC. I trace the story of the personal computer (PC) from its beginning. The lessons from the PC apply to contemporary products such as switches, routers, network processors, microprocessors, and cell phones. The story doesn't repeat exactly because semiconductor-process advances change the rules. PC beginnings: Intel introduced the first commercial microprocessor in 1971. The first microprocessors were designed solely as cost-effective substitutes for numerous chips in bills of material. But it wasn't long before microprocessors became central processing units in small computer systems. The first advertisement for a microprocessor-based computer appeared in March 1974. Soon, companies such as Scelbi Computer Consulting, MITS, and IMSAI offered kit computers. Apple Computer incorporated in January 1977 and introduced the Apple II computer in April. The Apple II came fully assembled, which, together with the invention of the spreadsheet, changed the personal computer from a kit hobby to a personal business machine. In 1981, IBM legitimized personal computers by introducing the IBM Personal Computer. Once endorsed by IBM, many businesses bought personal computers. Even though it came out in August, IBM sold 15,000 units that year. Apple had a four-year head start. When IBM debuted its personal computer, the Apple II dominated the market.
  • Oral History of David (Dave) Ditzel
    Oral History of David (Dave) Ditzel Interviewed by: Kevin Krewell Recorded: July 31, 2015 Mountain View, California CHM Reference number: X7554.2016 © 2015 Computer History Museum Oral History of David (Dave) Ditzel Kevin Krewell: Hi, I'd like to introduce myself. I'm Kevin Krewell. I'm a member of the Semiconductor SIG at the Computer History Museum. Today is July 31st, 2015. We are at the Computer History Museum, and we're about to interview Dave Ditzel, who's probably best known as the founder of Transmeta. But, also an early researcher in RISC processor design at UC Berkeley. He's also worked at ATT Bell Labs, and at Sun Microsystems. Those are probably his most well-known attributes, or his well-known jobs. Dave Ditzel: And even at Intel. That was a surprise to people. Krewell: And Intel, but that's probably less well known. Most people were surprised when-- Ditzel: I wasn't allowed to talk about what I was doing there. Krewell: --I don't know if you still can. Ditzel: A little bit. Krewell: So, let's start off with a little background on Dave, and then we'll work into his history, fascinating history actually. But we're going to start off just a little bit about family background. Why don't just give us a little growing up, and where you born and raised, and how you started life. Ditzel: Generally, I grew up in the Midwest, Missouri and Iowa. My father was a chemical engineer, well trained, university-educated parents, encouraged me to read.
  • ISA Supplement
    Lecture 04: ISA Principles Supplements. CSE 564 Computer Architecture, Summer 2017, Department of Computer Science and Engineering. Yonghong Yan, [email protected], www.secs.oakland.edu/~yan. Contents: 1. Introduction 2. Classifying Instruction Set Architectures 3. Memory Addressing 4. Type and Size of Operands 5. Operations in the Instruction Set 6. Instructions for Control Flow 7. Encoding an Instruction Set 8. Crosscutting Issues: The Role of Compilers 9. RISC-V ISA • Supplements. Lecture 03 Supplements: • MIPS ISA • RISC vs CISC • Compiler compilation stages • ISA Historical – Appendix L • Comparison of ISA – Appendix K. Putting it all together: the MIPS Architecture (a simple 64-bit load-store architecture) • Use general-purpose registers with a load-store architecture • Support these addressing modes: displacement (with address offset of 12-16 bits), immediate (size 8-16 bits), and register indirect. • Support these data sizes and types: 8-, 16-, and 64-bit integers and 64-bit IEEE 754 floating-point numbers. • Support these simple instructions: load, store, add, subtract, move register-register, and shift. • Compare equal, compare not equal, compare less, branch, jump, call, and return. • Use fixed instruction encoding if interested in performance, and use variable instruction encoding if interested in code size. MIPS emphasized: • A simple load-store instruction set • Design for pipelining efficiency • Efficiency as a compiler target. Instruction layout for MIPS. The load and store
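    As a small aside on the "fixed instruction encoding" guideline above, the following C sketch packs a MIPS R-type (register-register) instruction into its fixed 32-bit format: a 6-bit opcode, three 5-bit register fields, a 5-bit shift amount, and a 6-bit function code. It is an illustrative sketch added here for context and is not taken from the lecture slides.

```c
#include <stdint.h>
#include <stdio.h>

/* Pack a MIPS R-type instruction: op | rs | rt | rd | shamt | funct */
static uint32_t rtype(uint32_t op, uint32_t rs, uint32_t rt,
                      uint32_t rd, uint32_t shamt, uint32_t funct) {
    return (op << 26) | (rs << 21) | (rt << 16) |
           (rd << 11) | (shamt << 6) | funct;
}

int main(void) {
    /* add $t0, $t1, $t2  ($8 = $9 + $10): opcode 0, funct 0x20 */
    uint32_t insn = rtype(0, 9, 10, 8, 0, 0x20);
    printf("add $t0, $t1, $t2 encodes as 0x%08X\n", (unsigned)insn);  /* 0x012A4020 */
    return 0;
}
```

    Because every instruction is exactly 32 bits with fields in fixed positions, the decoder can extract source registers with simple shifts and masks, which is the pipelining-efficiency point the slides make.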
  • RISC Architecture
    REDUCED INSTRUCTION SET COMPUTERS Prof. Vojin G. Oklobdzija Integration Berkeley, CA 94708 Keywords: IBM 801; RISC; computer architecture; Load/Store Architecture; instruction sets; pipelining; super-scalar machines; super-pipeline machines; optimizing compiler; Branch and Execute; Delayed Branch; Cache; Harvard Architecture; Delayed Load; Super-Scalar; Super-Pipelined. Fall 1999 1. ARCHITECTURE The term Computer Architecture was first defined in the paper by Amdahl, Blaauw and Brooks of International Business Machines (IBM) Corporation announcing IBM System/360 computer family on April 7, 1964 [1,17]. On that day IBM Corporation introduced, in the words of IBM spokesman, "the most important product announcement that this corporation has made in its history". Computer architecture was defined as the attributes of a computer seen by the machine language programmer as described in the Principles of Operation. IBM referred to the Principles of Operation as a definition of the machine which enables machine language programmer to write functionally correct, time independent programs that would run across a number of implementations of that particular architecture. The architecture specification covers: all functions of the machine that are observable by the program [2]. On the other hand Principles of Operation. are used to define the functions that the implementation should provide. In order to be functionally correct it is necessary that the implementation conforms to the Principles of Operation. Principles of Operation document defines computer architecture which includes: • Instruction set • Instruction format • Operation codes • Addressing modes • All registers and memory locations that may be directly manipulated or tested by a machine language program • Formats for data representation Machine Implementation was defined as the actual system organization and hardware structure encompassing the major functional units, data paths, and control.