Eight Key Ideas in Computer Architecture, from Eight Decades of Innovation


Slide 1 — Eight Key Ideas in Computer Architecture, from Eight Decades of Innovation (1940s through 2020s)
Behrooz Parhami, University of California, Santa Barbara, Mar. 2021

Slide 2 — About This Presentation
This slide show was first developed as a keynote talk for remote delivery at CSICC-2016, the Computer Society of Iran Computer Conference, held in Tehran on March 8-10, 2016. The talk was presented at a special session on March 9, 11:30 AM to 12:30 PM Tehran time (12:00-1:00 AM PST). An updated version of the talk was prepared and used within B. Parhami's suite of lectures for the IEEE Computer Society's Distinguished Visitors Program, 2021-2023. All rights reserved for the author. ©2021 Behrooz Parhami.
Edition history: First edition released Mar. 2016, revised July 2017; Second edition released Mar. 2021.

Slide 3 — My Personal Academic Journey
[Timeline figure, 1968-2023: fifty years in academia.]

Slide 4
Some of the material in this talk comes from, or will appear in updated versions of, my two computer architecture textbooks.

Slide 5 — Eight Key Ideas in Computer Architecture, from Eight Decades of Innovation
Computer architecture became an established discipline when the stored-program concept was incorporated into bare-bones computers of the 1940s. Since then, the field has seen multiple minor and major innovations in each decade. I will present my pick of the most important innovation in each of the eight decades, from the 1940s to the 2010s, and show how these ideas, when connected to each other and allowed to interact and cross-fertilize, produced the phenomenal growth of computer performance, now approaching the exa-op/s (billion billion operations per second) level, as well as ultra-low-energy and single-chip systems. I will also offer predictions for what to expect in the 2020s and beyond.

Slide 6 — Background: Analytical Engine (1820s-1930s)
[Figure: the Difference Engine; punched cards holding the program (instructions) and the data (variable values).]

Slide 7 — Difference Engine: Fixed Program
Example: 2nd-degree polynomial evaluation of f(x) = x^2 + x + 41 using the difference columns D(1) and D(2). [Photo: Babbage's Difference Engine 2.] Babbage used a 7th-degree f(x). (A short sketch of this finite-difference scheme follows the key-ideas list below.)

Slide 8 — Analytical Engine: Programmable
Ada Lovelace, the world's first programmer. [Figure: a sample program.]

Slide 9 — The Programs of Charles Babbage
IEEE Annals of the History of Computing, January-March 2021 (article by Raul Rojas).

Slide 10 — Electromechanical and Plug-Programmable Computing Machines
[Photos: Turing's Colossus, a punched-card device, Zuse's Z3, and ENIAC.]

Slide 11 — The Eight Key Ideas
2010s: Specialization
2000s: GPUs
1990s: FPGAs
1980s: Pipelining
1970s: Cache memory
1960s: Parallel processing
1950s: Microprogramming
1940s: Stored program
Come back in 2040 for my top-10 list!
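
As promised under slide 7, here is a brief illustrative sketch (mine, not the talk's) of the fixed-program scheme the Difference Engine mechanizes: tabulating f(x) = x^2 + x + 41 using nothing but repeated additions of differences after the initial setup.

    # Illustrative sketch (not from the talk): tabulating f(x) = x^2 + x + 41 the way
    # a 2nd-order difference engine does, using only additions inside the loop.
    def f(x):
        return x * x + x + 41

    value = f(0)                    # f(0) = 41
    d1 = f(1) - f(0)                # first difference D(1), initially 2
    d2 = f(2) - 2 * f(1) + f(0)     # second difference D(2) = 2, constant for a quadratic

    for x in range(8):
        print(x, value)             # prints x, f(x) -- no multiplication in the loop
        value += d1                 # crank 1: add the first difference to the value column
        d1 += d2                    # crank 2: add the constant second difference to D(1)

Babbage's Difference Engine 2 carried the same idea to seven difference columns, which is why the slide notes a 7th-degree f(x).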
Slide 12 — 1940s: Stored Program
Exactly who came up with the stored-program concept is unclear. Legally, John Vincent Atanasoff is designated as inventor, but many others deserve to share the credit. [Photos: Babbage, Turing, Atanasoff, Eckert, Mauchly, von Neumann.]

Slide 13 — First Stored-Program Computer
Manchester Small-Scale Experimental Machine: ran a stored program on June 21, 1948 (its successor, the Manchester Mark 1, was operational in April 1949).
EDSAC (Cambridge University; Wilkes et al.): fully operational on May 6, 1949.
EDVAC (Moore School, University of Pennsylvania; von Neumann et al.): conceived in 1945 but not delivered until August 1949.
BINAC (Binary Automatic Computer; Eckert & Mauchly): delivered on August 22, 1949, but did not function correctly.
Source: Wikipedia

Slide 14 — von Neumann vs. Harvard Architecture
von Neumann architecture (unified memory for code and data): programs can be modified like data; more efficient use of memory space.
Harvard architecture (separate memories for code and data): better protection of programs; higher aggregate memory bandwidth; memory can be optimized for each access type.

Slide 15 — 1950s: Microprogramming
Traditional control unit design (multicycle): specify which control signals are to be asserted in each cycle and synthesize the resulting state machine. This gives rise to random logic, which is error-prone and inflexible; hardware bugs are hard to fix after deployment, and the design must be redone from scratch for each system. [Figure: multicycle control state machine (States 0-8) for a MIPS-like data path, listing the control signals, such as ALUSrcX, ALUSrcY, MemRead, MemWrite, RegWrite, and PCWrite, asserted in each state.]
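
The next two slides describe the microprogrammed alternative. As a rough sketch of that idea (mine, not the slides'; the routine labels, opcode values, and Python encoding are purely illustrative, though the signal names follow the state diagram above): each microinstruction bundles one cycle's control-signal settings with a sequencing field, a micro-PC steps through a microprogram memory, and a dispatch table performs the multiway branch on the decoded opcode.

    # Illustrative sketch of microprogrammed control (routine labels, opcode values,
    # and this Python encoding are invented; signal names follow the state diagram).
    MICROCODE = {
        # label: list of (control-signal settings, sequencing action), one per cycle
        "fetch": [({"MemRead": 1, "IRWrite": 1, "ALUSrcX": 0, "ALUSrcY": 0,
                    "ALUFunc": "+", "PCSrc": 3, "PCWrite": 1}, "next"),
                  ({"ALUSrcX": 0, "ALUSrcY": 3, "ALUFunc": "+"}, "dispatch")],
        "lw":    [({"ALUSrcX": 1, "ALUSrcY": 2, "ALUFunc": "+"}, "next"),
                  ({"InstData": 1, "MemRead": 1}, "next"),
                  ({"RegDst": 0, "RegInSrc": 0, "RegWrite": 1}, "fetch")],
        "sw":    [({"ALUSrcX": 1, "ALUSrcY": 2, "ALUFunc": "+"}, "next"),
                  ({"InstData": 1, "MemWrite": 1}, "fetch")],
    }
    DISPATCH = {0x23: "lw", 0x2B: "sw"}      # opcode -> microroutine (MIPS-style values)

    def drive_datapath(signals):
        print(signals)                       # stand-in for asserting real control lines

    def run_control(opcode, cycles=8):
        label, index = "fetch", 0            # the micro-PC: (routine, offset)
        for _ in range(cycles):
            signals, seq = MICROCODE[label][index]
            drive_datapath(signals)          # one microinstruction per clock cycle
            if seq == "next":
                index += 1                   # sequential microinstruction
            elif seq == "dispatch":
                label, index = DISPATCH[opcode], 0   # multiway branch on the opcode
            else:
                label, index = seq, 0        # jump to a labeled microroutine

    run_control(0x23)                        # trace the lw microroutine

A control bug is then fixed by editing the control store rather than re-synthesizing random logic, which addresses the drawbacks listed on slide 15.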
Slide 16 — The Birth of Microprogramming
The control state machine resembles a program (a microprogram) comprised of instructions (microinstructions) plus sequencing; every microinstruction contains a branch field. Maurice V. Wilkes (1913-2010). [Figure: the microinstruction format, with fields grouped into PC control (JumpAddr, PCSrc, PCWrite), cache control (InstData, MemRead, MemWrite, IRWrite), register control (RegInSrc, RegDst, RegWrite), ALU inputs (ALUSrcX, ALUSrcY), ALU function (FnType, LogicFn, AddSub), and sequence control, shown alongside the multicycle control state machine of the previous slide.]

Slide 17 — Microprogramming Implementation
Each microinstruction controls the data path for one clock cycle. [Figure: a micro-PC addresses a microprogram memory (or PLA); the next microaddress comes from an incrementer, from one of two dispatch tables indexed by the op field of the instruction register (a multiway branch), or from the start of the fetch routine; the microinstruction register drives the control signals to the data path and the sequence control.]

Slide 18 — 1960s: Parallel Processing
Associative (content-addressed) memories and other forms of parallelism (compute-I/O overlap, functional parallelism) had been in existence since the 1940s. [Figure: SRAM, binary CAM, and ternary CAM cells.] A highly parallel machine proposed by Daniel Slotnick in 1964 later morphed into ILLIAC IV in 1968 (operational in 1975). Michael J. Flynn devised his now-famous 4-way taxonomy (SISD, SIMD, MISD, MIMD) in 1966, and Amdahl formulated his speed-up law and rules for system balance in 1967.
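
Slide 18 cites Amdahl's 1967 speed-up law without stating it. For reference (my notation, not the slides'), with a fraction f of the work inherently sequential and p processors:

    % Amdahl's law (notation chosen here for illustration)
    \[
      S(p) \;=\; \frac{1}{\,f + (1-f)/p\,} \;\le\; \frac{1}{f}
    \]

For example, f = 0.05 and p = 64 give S = 1/(0.05 + 0.95/64) ≈ 15.4, far short of the 64x ideal, which is why Amdahl's companion rules for system balance matter.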
The ILLIAC IV Concept: SIMD Parallelism
A common control unit fetches and decodes instructions, broadcasting the control signals to all processing elements (PEs). Each PE executes or ignores the instruction based on local, data-dependent conditions. The interprocessor routing network is only partially shown in the figure. [Figure: the B6700 host and control unit feeding data or instructions to Procs. 0-63, each with its own memory module; synchronized disks (Bands 0-51) serve as the main memory.]
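
To make the "execute or ignore" mechanism concrete, here is a small illustrative sketch (mine, not the talk's; the operation names and values are invented): one broadcast instruction stream, 64 PEs, and a per-PE mode bit, set by a data-dependent test, that decides whether each PE commits the result.

    # Illustrative sketch of ILLIAC-IV-style SIMD predication (operation names and
    # values invented): one broadcast instruction, 64 PEs, per-PE enable (mode) bits.
    NUM_PES = 64

    class PE:
        def __init__(self, pe_id):
            self.acc = float(pe_id)    # each PE's local accumulator (sample data)
            self.mode = True           # mode bit: PE enabled by default

    def broadcast(pes, op, operand):
        """The control unit broadcasts one instruction; every PE sees it, but only
        PEs whose mode bit is set commit the result of a computational op."""
        for pe in pes:
            if op == "set_mode_if_less":       # data-dependent test updates the mask
                pe.mode = pe.acc < operand
            elif op == "add" and pe.mode:      # disabled PEs simply ignore the add
                pe.acc += operand

    pes = [PE(i) for i in range(NUM_PES)]
    broadcast(pes, "set_mode_if_less", 32.0)   # only PEs holding values < 32 stay on
    broadcast(pes, "add", 100.0)               # executed by enabled PEs, ignored by rest
    print(pes[0].acc, pes[40].acc)             # -> 100.0 40.0

Modern GPUs, the 2000s entry in the key-ideas list, handle branch divergence with essentially the same per-lane masking idea, one example of the cross-fertilization the abstract mentions.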