Computer Architecture Compendium for INF2270

Total pages: 16 | File type: PDF | Size: 1020 kB

UNIVERSITY OF OSLO, Department of Informatics
Philipp Häfliger and Dag Langmyhr, Spring 2010

Contents

Contents 1
List of Figures 5
List of Tables 7
1 Introduction 9

I Basics of computer architecture 11
2 Introduction to Digital Electronics 13
3 Binary Numbers 15
  3.1 Unsigned Binary Numbers 15
  3.2 Signed Binary Numbers 15
    3.2.1 Sign and Magnitude 15
    3.2.2 Two's Complement 16
  3.3 Addition and Subtraction 16
  3.4 Multiplication and Division 18
  3.5 Extending an n-bit binary to n+k bits 19
4 Boolean Algebra 21
  4.1 Karnaugh maps 24
    4.1.1 Karnaugh maps with 5 and 6 bit variables 26
    4.1.2 Karnaugh map simplification based on zeros 27
5 Combinational Logic Circuits 29
  5.1 Standard Combinational Circuit Blocks 30
    5.1.1 Encoder 31
    5.1.2 Decoder 32
    5.1.3 Multiplexer 33
    5.1.4 Demultiplexer 35
    5.1.5 Adders 35
6 Sequential Logic Circuits 39
  6.1 Flip-Flops 39
    6.1.1 Asynchronous Latches 39
    6.1.2 Synchronous Flip-Flops 42
  6.2 Finite State Machines 45
    6.2.1 State Transition Graphs 45
  6.3 Registers 48
  6.4 Standard Sequential Logic Circuits 48
    6.4.1 Counters 48
    6.4.2 Shift Registers 50
7 Von Neumann Architecture 53
  7.1 Data Path and Memory Bus 55
  7.2 Arithmetic and Logic Unit (ALU) 55
  7.3 Memory 56
    7.3.1 Static Random Access Memory (SRAM) 60
    7.3.2 Dynamic Random Access Memory (DRAM) 60
  7.4 Control Unit (CU) 62
    7.4.1 Register Transfer Language 62
    7.4.2 Execution of Instructions 62
    7.4.3 Microarchitecture 65
    7.4.4 Complex and reduced instruction sets (CISC/RISC) 66
  7.5 Input/Output 66
8 Optimizing Hardware Performance 69
  8.1 Memory Hierarchy 69
    8.1.1 Cache 69
    8.1.2 Virtual Memory 74
  8.2 Pipelining 75
    8.2.1 Pipelining Hazards 78
    8.2.2 Conclusion 81
  8.3 Superscalar CPU 81
    8.3.1 Brief Historical Detour into Supercomputing 81
    8.3.2 Superscalar Principle 83

II Low-level programming 85
9 Introduction to low-level programming 87
10 Programming in C 89
  10.1 Data 89
    10.1.1 Integer data 89
    10.1.2 Texts 89
    10.1.3 Floating-point data 90
  10.2 Statements 90
  10.3 Expressions 90
11 Character encodings 93
  11.1 ASCII 93
  11.2 Latin-1 93
  11.3 Latin-9 93
  11.4 Unicode 93
    11.4.1 UTF-8 93
12 Assembly programming 97
  12.1 Assembler notation 97
    12.1.1 Instruction lines 97
    12.1.2 Specification lines 97
    12.1.3 Comments 98
    12.1.4 Alternative notation 98
  12.2 The assembler 98
    12.2.1 Assembling under Linux 98
    12.2.2 Assembling under Windows 99
  12.3 Registers 99
  12.4 Instruction set 100
Index 105

List of Figures

1.1 Abstraction levels in a computer 9
2.1 CMOSFET schematic symbols 14
4.1 Boolean operators, truth tables and logic gates 23
4.2 3D Karnaugh map 26
5.1 Example combinational logic circuit 30
5.2 Encoder symbol 31
5.3 Implementation of a 3-bit encoder 31
5.4 Decoder symbol 33
5.5 3-bit decoder implementation 33
5.6 Multiplexer symbol 34
5.8 Demultiplexer symbol 35
5.9 3-bit demultiplexer implementation 35
5.10 Schematics/circuit diagram of a 1-bit half adder 36
5.11 Full adder schematics 37
6.1 Gated D-latch/transparent latch 40
6.3 Clock signal 42
6.5 T-flip-flop symbol 43
6.6 D-flip-flop symbol 44
6.7 State transition graph for traffic light 45
6.8 Moore/Mealy finite state machine 46
6.9 Traffic light controller schematics 47
6.10 Register symbol 48
6.11 State transition graph of a 3-bit counter 48
6.12 3-bit counter Karnaugh maps 48
6.13 3-bit synchronous counter 49
6.14 3-bit ripple counter 50
6.15 Shift register 51
7.1 Von Neumann architecture 54
7.2 1-bit ALU schematics 55
7.3 1-bit ALU symbol 56
7.4 n-bit ALU schematics example 57
7.5 n-bit ALU symbol 57
7.6 Static random access memory principle 59
7.7 DRAM principle 61
7.8 Hardwired and microprogrammed CU 65
7.9 Simple I/O block diagram 67
7.10 I/O controller principle 67
8.1 Memory hierarchy 70
8.2 Associative cache 71
8.3 Directly mapped cache 71
8.4 Set associative cache 72
8.5 Look-through architecture 73
8.6 Look-aside architecture 74
8.7 The principle of virtual memory 75
8.8 Virtual memory paging 75
8.10 4-stage pipeline simplified block diagram 76
8.11 4-stage pipeline execution 77
8.12 Data hazard illustration 79
8.13 Control hazard illustration 80
8.14 Cray-1 82
8.15 Principle of superscalar execution 83
12.1 The most important x86/x87 registers 99

List of Tables

4.1 Basic Boolean functions 22
4.2 Boolean function table of rules 22
4.3 Exercise to verify De Morgan 22
5.1 Truth table of a 3-bit encoder 31
5.2 Complete truth table of a 3-bit priority encoder 32
5.3 3-bit decoder truth table 32
5.5 3-bit demultiplexer truth table 34
5.6 Truth table for a 1-bit half adder 36
5.7 Full adder truth table 36
6.1 SR-latch characteristic table, full 41
6.2 SR-latch characteristic table, abbreviated 41
6.3 JK-flip-flop characteristic table, full 43
6.4 JK-flip-flop characteristic table, abbreviated 43
6.5 T-flip-flop characteristic table, full 44
6.6 T-flip-flop characteristic table, abbreviated 44
6.7 D-flip-flop characteristic table, full 45
6.8 D-flip-flop characteristic table, abbreviated 45
6.9 Traffic light controller characteristic table 47
6.10 State transition table of a 3-bit counter 49
6.11 Shift register state transition table 51
7.1 1-bit ALU truth table 56
7.2 RAM characteristic table 58
7.3 Comparison of SRAM and DRAM 62
7.4 RTL grammar 62
7.7 Pros and cons of hardwired and microprogrammed CU 66
8.1 Memory hierarchy summary table 70
10.1 Integer data types in C 90
10.2 Floating-point data types in C 90
10.3 The statements in C 91
10.4 The expression operators in C 92
11.1 The ISO 8859-1 (Latin-1) encoding 94
11.2 The difference between Latin-1 and Latin-9 95
11.3 UTF-8 representation of Unicode characters 95
12.1 The major differences between AT&T and Intel assembler notation 98
12.2 A subset of the x86 instructions (part 1) 101
12.3 A subset of the x86 instructions (part 2) 102
12.4 A subset of the x87 floating-point instructions 103
Recommended publications
  • Lecture 7: Synchronous Sequential Logic
    Systems I: Computer Organization and Architecture, Lecture 7: Synchronous Sequential Logic

    • The digital circuits we have looked at so far are combinational: their outputs depend only on the current inputs. In practice, most systems contain components with memory elements, which require sequential logic.
    • A sequential circuit contains both a combinational circuit and memory elements, whose outputs also serve as inputs to the combinational circuit.
    • The binary data stored in the memory elements define the state of the sequential circuit and help determine the conditions for changing that state.

    [Block diagram: inputs and memory-element outputs feed the combinational circuit; its outputs feed both the external outputs and the memory elements.]

    Sequential circuits are classified by the timing of their signals:
    – Synchronous sequential circuits have behavior that is defined from knowledge of their signals at discrete instants of time.
    – The behavior of asynchronous sequential circuits depends on the order in which the input signals change, and it can be affected at any instant of time.

    Synchronous sequential circuits use signals that affect the memory elements only at discrete time instants. Synchronization is achieved by a timing device called a master-clock generator, which produces a periodic train of clock pulses.

    Basic flip-flop circuit using NOR gates, with inputs S (set) and R (reset) and outputs Q and Q':

        S R | Q  Q'
        1 0 | 1  0
        0 0 | 1  0   (after S = 1, R = 0: set state)
        0 1 | 0  1
        0 0 | 0  1   (after S = 0, R = 1: clear state)
        1 1 | 0  0   (both outputs forced to 0)
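The cross-coupled NOR behaviour in the table above can be sanity-checked with a tiny simulation. This is a sketch, not part of the original lecture: the two gates are re-evaluated until the feedback loop settles.

```python
def sr_latch_nor(s, r, q=0, qn=1):
    """Iterate the two cross-coupled NOR gates until the outputs settle."""
    for _ in range(4):  # a few rounds suffice for this 2-gate loop
        q_new = int(not (r or qn))   # Q  = NOR(R, Q')
        qn_new = int(not (s or q))   # Q' = NOR(S, Q)
        if (q_new, qn_new) == (q, qn):
            break
        q, qn = q_new, qn_new
    return q, qn

assert sr_latch_nor(s=1, r=0) == (1, 0)             # set
assert sr_latch_nor(s=0, r=1, q=1, qn=0) == (0, 1)  # reset
assert sr_latch_nor(s=0, r=0, q=1, qn=0) == (1, 0)  # hold previous state
```

With S = R = 0 the latch holds whatever state it was left in, which is exactly the memory behaviour the characteristic table describes.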
  • Chapter 3: Combinational Logic Design
    Chapter 3: Combinational Logic Design

    Introduction
    • We have learned all the prerequisite material: truth tables and Boolean expressions describe functions; expressions can be converted into hardware circuits; Boolean algebra and K-maps help simplify expressions and circuits.
    • Now let us put these foundations to good use to analyze and design some larger circuits.
    • Logic circuits for digital systems may be combinational or sequential.
    • A combinational circuit consists of logic gates whose outputs at any time are determined solely by the current input values; it has no memory elements.
    • A sequential circuit consists of logic gates whose outputs are determined by the current input values as well as past input values; it has memory elements.
    • Each input and output variable is binary, so n inputs give 2^n possible input combinations, with one output value for each combination. A truth table or m Boolean functions can be used to specify the input-output relation.

    Design hierarchy
    • A single very large-scale integrated (VLSI) processor circuit contains several tens of millions of gates. No complex circuit can be designed simply by interconnecting gates one at a time.
    • A divide-and-conquer approach is used to deal with the complexity: break the circuit into pieces (blocks); define the functions and interfaces of each block such that the circuit formed by interconnecting the blocks obeys the original specification; if a block is still too large and complex to be designed as a single entity, break it into smaller blocks.
    • A hierarchy reduces the complexity required to represent the schematic diagram of a circuit. In any hierarchy, the leaves consist of predefined blocks, some of which may be primitives.
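The "2^n possible binary input combinations" point is easy to make concrete. Here is a minimal truth-table generator; the 3-input majority function is an assumed example, not one from the chapter:

```python
from itertools import product

def truth_table(f, n):
    """Enumerate all 2**n input combinations and record f's output for each."""
    return [(bits, f(*bits)) for bits in product((0, 1), repeat=n)]

# Example function: 3-input majority, 1 when two or more inputs are 1.
majority = lambda a, b, c: int(a + b + c >= 2)

table = truth_table(majority, 3)
assert len(table) == 2 ** 3              # one row per input combination
assert table[0] == ((0, 0, 0), 0)
assert table[-1] == ((1, 1, 1), 1)
```

Any combinational circuit with n inputs is fully specified by such a table, which is why the truth table can serve as the interface contract for a block in the design hierarchy.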
  • Combinational Logic; Hierarchical Design and Analysis
    Combinational Logic; Hierarchical Design and Analysis. Tom Kelliher, CS 240, Feb. 10, 2012

    Administrivia: collect assignment; read 3.3. From last time: IC technology. Outline: combinational logic; hierarchical design; design analysis. Coming up: a design example.

    Combinational logic
    1. Definition: logic circuits in which the output(s) depend solely upon the current inputs.
    2. No feedback or memory.
    3. Sequential circuits: outputs depend upon current inputs and previous inputs; memory or registers.
    4. Example: a BCD to 7-segment decoder, mapping four BCD inputs (D0, D1, D2, D3) to seven segment outputs (S0 through S6).

    Hierarchical design
    1. Transistor counts:

       Processor                   | Year | Transistor count
       TI SN7400                   | 1966 | 16
       Intel 4004                  | 1971 | 2,300
       Intel 8085                  | 1976 | 6,500
       Intel 8088                  | 1979 | 29,000
       Intel 80386                 | 1985 | 275,000
       Intel Pentium               | 1993 | 3,100,000
       Intel Pentium 4             | 2000 | 42,000,000
       AMD Athlon 64               | 2003 | 105,900,000
       Intel Core 2 Duo            | 2006 | 291,000,000
       Intel Core 2 Quad           | 2006 | 582,000,000
       NVIDIA G80                  | 2006 | 681,000,000
       Intel Dual-Core Itanium 2   | 2006 | 1,700,000,000
       Intel Atom                  | 2008 | 42,000,000
       Six-Core Xeon 7400          | 2008 | 1,900,000,000
       AMD RV770                   | 2008 | 956,000,000
       NVIDIA GT200                | 2008 | 1,400,000,000
       Eight-Core Xeon Nehalem-EX  | 2010 | 2,300,000,000
       10-Core Xeon Westmere-EX    | 2011 | 2,600,000,000
       AMD Cayman                  | 2010 | 2,640,000,000
       NVIDIA GF100                | 2010 | 3,000,000,000
       Altera Stratix V            | 2011 | 3,800,000,000

    2. Divide and conquer: CPU ⇒ integer unit ⇒ adder ⇒ binary full adder ⇒ NAND gates.
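The BCD to 7-segment decoder example can be sketched in software. The segment patterns below are an assumption based on the common a-to-g segment layout, since the slides do not list them:

```python
# Segments a-g, 1 = lit. One row per BCD digit; actual displays may differ
# in segment polarity (common anode vs. common cathode), so this is a sketch.
SEGMENTS = {
    0: "1111110", 1: "0110000", 2: "1101101", 3: "1111001", 4: "0110011",
    5: "1011011", 6: "1011111", 7: "1110000", 8: "1111111", 9: "1111011",
}

def bcd_to_7seg(d3, d2, d1, d0):
    """Decode a 4-bit BCD input (MSB first) to the 7 segment outputs a..g."""
    value = d3 * 8 + d2 * 4 + d1 * 2 + d0
    if value > 9:
        raise ValueError("not a BCD digit")
    return tuple(int(s) for s in SEGMENTS[value])

assert bcd_to_7seg(0, 1, 1, 1) == (1, 1, 1, 0, 0, 0, 0)  # digit 7: a, b, c lit
assert bcd_to_7seg(1, 0, 0, 0) == (1, 1, 1, 1, 1, 1, 1)  # digit 8: all lit
```

In hardware the same mapping would be realized as seven Boolean functions of the four inputs, one per segment, each simplifiable with a K-map.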
  • The Basics of Logic Design
    Appendix C: The Basics of Logic Design

    "I always loved that word, Boolean." Claude Shannon, IEEE Spectrum, April 1992. (Shannon's master's thesis showed that the algebra invented by George Boole in the 1800s could represent the workings of electrical switches.)

    C.1 Introduction C-3
    C.2 Gates, Truth Tables, and Logic Equations C-4
    C.3 Combinational Logic C-9
    C.4 Using a Hardware Description Language C-20
    C.5 Constructing a Basic Arithmetic Logic Unit C-26
    C.6 Faster Addition: Carry Lookahead C-38
    C.7 Clocks C-48
    C.8 Memory Elements: Flip-Flops, Latches, and Registers C-50
    C.9 Memory Elements: SRAMs and DRAMs C-58
    C.10 Finite-State Machines C-67
    C.11 Timing Methodologies C-72
    C.12 Field Programmable Devices C-78
    C.13 Concluding Remarks C-79
    C.14 Exercises C-80

    C.1 Introduction. This appendix provides a brief discussion of the basics of logic design. It does not replace a course in logic design, nor will it enable you to design significant working logic systems. If you have little or no exposure to logic design, however, this appendix will provide sufficient background to understand all the material in this book. In addition, if you are looking to understand some of the motivation behind how computers are implemented, this material will serve as a useful introduction. If your curiosity is aroused but not sated by this appendix, the references at the end provide several additional sources of information.
  • Combinational Logic
    MEC520 Digital Engineering: Combinational Logic. Jee-Hwan Ryu, School of Mechanical Engineering, Korea University of Technology and Education

    Combinational circuits: outputs are determined from the present inputs alone. They consist of input/output variables and logic gates, with binary signals passing to and from registers.
    Sequential circuits: outputs are determined from the present inputs and the state of the storage elements. The state of the storage elements is a function of previous inputs, so the outputs depend on both present and past inputs.

    Analysis procedure, to determine the function from a given circuit diagram:
    1. Make sure the circuit is combinational, not sequential: no feedback and no memory elements.
    2. Obtain the output Boolean functions or the truth table.

    Obtaining the Boolean functions from a logic diagram:
    1. Label all gate outputs with arbitrary symbols.
    2. Write the output function at each level.
    3. Substitute until the final outputs are expressed in terms of the input variables.

    Obtaining the truth table from a logic diagram:
    1. Enumerate the input variables as binary numbers.
    2. Determine the output value at each gate.
    3. Assemble the truth table.

    Design procedure, to design a combinational circuit:
    1. Determine the required number of inputs and outputs from the specification.
    2. Assign a symbol to each input/output.
    3. Derive the truth table from the
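The analysis procedure above (label every gate output, then evaluate level by level) can be sketched as code. The three-gate circuit here, F = (A AND B) OR (NOT C), is a hypothetical example, not one from the slides:

```python
# A small combinational netlist as (gate, inputs) pairs, keyed by the
# arbitrary labels given to each gate output, listed in topological order.
GATES = {
    "t1": ("AND", ("A", "B")),
    "t2": ("NOT", ("C",)),
    "F":  ("OR",  ("t1", "t2")),
}

def evaluate(netlist, inputs):
    """Evaluate each labeled gate output level by level."""
    ops = {"AND": lambda xs: all(xs),
           "OR":  lambda xs: any(xs),
           "NOT": lambda xs: not xs[0]}
    values = dict(inputs)
    for name, (op, srcs) in netlist.items():  # assumes topological order
        values[name] = int(ops[op]([values[s] for s in srcs]))
    return values

assert evaluate(GATES, {"A": 1, "B": 1, "C": 1})["F"] == 1  # A AND B
assert evaluate(GATES, {"A": 0, "B": 1, "C": 0})["F"] == 1  # NOT C
assert evaluate(GATES, {"A": 0, "B": 1, "C": 1})["F"] == 0
```

Running `evaluate` over all input combinations yields exactly the truth table that step 3 of the truth-table procedure asks for.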
  • Performance Evaluation of a Signal Processing Algorithm with General-Purpose Computing on a Graphics Processing Unit
    Degree project in technology, first cycle, 15 credits. Stockholm, Sweden 2019. Performance Evaluation of a Signal Processing Algorithm with General-Purpose Computing on a Graphics Processing Unit. Filip Appelgren and Måns Ekelund, KTH Royal Institute of Technology, School of Electrical Engineering and Computer Science, June 5, 2019

    Abstract: Graphics Processing Units (GPUs) are increasingly being used for general-purpose programming instead of their traditional graphical tasks, because their raw computational power in some cases gives them an advantage over the traditionally used Central Processing Unit (CPU). This thesis therefore sets out to identify the performance of a GPU in a correlation algorithm, and which parameters have the greatest effect on GPU performance. The method for determining performance was quantitative, using a clock library in C++ to measure the running time of the algorithm as the problem size increased. The initial problem size was set to 2^8 and increased exponentially to 2^21. The results show that smaller sample sizes perform better on the serial CPU implementation, but the parallel GPU implementations start outperforming the CPU between problem sizes of 2^9 and 2^10. It became apparent that GPUs benefit from larger problem sizes, mainly because of the memory overhead costs involved in allocating and transferring data. Further, the algorithm under evaluation is not well suited to a parallelized implementation due to a high amount of branching, which can lead to warp divergence and drastically lower performance.
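The measurement method described, timing the algorithm as the problem size grows exponentially, can be sketched in Python. The thesis used a C++ clock library; here `time.perf_counter` and a stand-in workload (`sum`) are assumptions for illustration:

```python
import time

def measure(f, sizes):
    """Wall-clock a workload over exponentially growing problem sizes,
    mirroring the thesis's method of growing from 2**8 up to 2**21."""
    results = {}
    for n in sizes:
        data = list(range(n))
        t0 = time.perf_counter()
        f(data)                      # the workload under test
        results[n] = time.perf_counter() - t0
    return results

# Stand-in workload; the thesis measured a correlation algorithm instead.
timings = measure(sum, [2 ** k for k in range(8, 15)])
assert sorted(timings) == [2 ** k for k in range(8, 15)]
```

Plotting such timings for the serial and parallel versions is what reveals the crossover point (between 2^9 and 2^10 in the thesis) where the GPU's throughput overtakes its fixed allocation-and-transfer overhead.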
  • Chapter 6 - Combinational Logic Systems GCSE Electronics – Component 1: Discovering Electronics
    Chapter 6: Combinational logic systems. GCSE Electronics, Component 1: Discovering Electronics

    Learners should be able to:
    (a) recognise 1/0 as two-state logic levels
    (b) identify and use NOT gates and 2-input AND, OR, NAND and NOR gates, singly and in combination
    (c) produce a suitable truth table from a given system specification and for a given logic circuit
    (d) use truth tables to analyse a system of gates
    (e) use Boolean algebra to represent the output of truth tables or logic gates and use the basic Boolean identities (A.B)' = A' + B' and (A+B)' = A'.B'
    (f) design processing systems consisting of logic gates to solve problems
    (g) simplify logic circuits using NAND gate redundancy
    (h) analyse and design systems from a given truth table to solve a given problem
    (i) use data sheets to select a logic IC for given applications and to identify pin connections
    (j) design and use switches and pull-up or pull-down resistors to provide correct logic level/edge-triggered signals for logic gates and timing circuits

    Introduction. In this chapter we will concentrate on the basics of digital logic circuits, which will then be extended in Component 2. We should start by ensuring that you understand the difference between a digital signal and an analogue signal. An analogue signal is one that can have any value between zero and the maximum of the power supply; changes between values can occur slowly or rapidly depending on the system involved.
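The identities in point (e) are the De Morgan pair, and they can be verified exhaustively with a truth table, which also exercises point (d). A sketch, not part of the syllabus text:

```python
from itertools import product

def de_morgan_holds():
    """Check both De Morgan identities over every input combination:
    (A.B)' = A' + B'   and   (A+B)' = A'.B'"""
    for a, b in product((0, 1), repeat=2):
        if int(not (a and b)) != int((not a) or (not b)):
            return False
        if int(not (a or b)) != int((not a) and (not b)):
            return False
    return True

assert de_morgan_holds()
```

Because two inputs give only 2^2 = 4 rows, checking every row is a complete proof of the identities, which is exactly why truth tables work as an analysis tool for small gate systems.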
  • CS 61C: Great Ideas in Computer Architecture Combinational and Sequential Logic, Boolean Algebra
    CS 61C: Great Ideas in Computer Architecture. Combinational and Sequential Logic, Boolean Algebra. Instructor: Justin Hsia, 7/24/2013, Summer 2013, Lecture #18

    Review of last lecture:
    • OpenMP as a simple parallel extension to C: during a parallel fork, be aware of which variables should be shared vs. private among threads; work-sharing is accomplished with for/sections; synchronization with critical/atomic/reduction.
    • Hardware is made up of transistors and wires: transistors are voltage-controlled switches, the building blocks of all higher-level blocks.

    Great Idea #1: Levels of Representation/Interpretation. A higher-level language program (e.g. C):

        temp = v[k];
        v[k] = v[k+1];
        v[k+1] = temp;

    is translated by the compiler into an assembly language program (e.g. MIPS):

        lw $t0, 0($2)
        lw $t1, 4($2)
        sw $t1, 0($2)
        sw $t0, 4($2)

    which the assembler turns into a machine language program of 32-bit binary instructions, interpreted by the hardware. Below that sit the hardware architecture description (e.g. block diagrams) and the logic circuit description (circuit schematic diagrams); we are here.

    Synchronous Digital Systems. The hardware of a processor, such as the MIPS, is an example of a synchronous digital system:
    • Synchronous: all operations are coordinated by a central clock, the "heartbeat" of the system.
    • Digital: all values are represented with two discrete values; electrical signals are treated as 1's
  • Lecture 34: Bonus Topics
    Lecture 34: Bonus topics. Philipp Koehn, David Hovemeyer, December 7, 2020. 601.229 Computer Systems Fundamentals

    Outline: GPU programming; virtualization and containers; digital circuits; compilers. Code examples on the web page: bonus.zip

    GPU programming. Rendering 3D graphics requires significant computation:
    • Geometry: determine visible surfaces based on the geometry of 3D shapes and the position of the camera.
    • Rasterization: determine pixel colors based on surface, texture, and lighting.
    A GPU is a specialized processor for doing these computations fast. GPU computation means using the GPU for general-purpose computation.

    Streaming multiprocessor:
    • Fetches an instruction (I-Cache) and applies it over a vector of data.
    • Each vector element is processed in one thread (MT issue); each thread is handled by a scalar processor (SP).
    • Special function units (SFU) handle special operations.

    Flynn's taxonomy:
    • SISD (single instruction, single data): uni-processors (most CPUs until the 1990s).
    • MIMD (multiple instruction, multiple data): all modern CPUs; multiple cores on a chip, each core running instructions that operate on its own data.
    • SIMD (single instruction, multiple data): streaming multiprocessors (e.g., GPUs); multiple cores on a chip, with the same instruction executed on different data.

    GPU programming: if you have an application where the data is regular (e.g., arrays) and the computation is regular (e.g., the same computation is performed on many array elements), then doing the computation on the GPU is likely to be much faster than doing it on the CPU. Issues: GPU
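The SIMD vs. SISD distinction can be illustrated in miniature. This is a conceptual sketch only: Python lists do not actually execute lanes in parallel, but the shape of the two functions mirrors the two execution models:

```python
def sisd_scale(xs, k):
    """SISD style: one element processed per instruction step."""
    out = []
    for x in xs:
        out.append(x * k)
    return out

def simd_scale(xs, k):
    """SIMD style: conceptually one operation applied across all lanes at
    once (real SIMD hardware would process the lanes in parallel)."""
    return [x * k for x in xs]

data = list(range(8))
assert simd_scale(data, 3) == sisd_scale(data, 3) == [0, 3, 6, 9, 12, 15, 18, 21]
```

The regular-data, regular-computation condition in the lecture is precisely what lets the SIMD form apply: every lane runs the same instruction, so there is no branching to diverge over.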
  • SEQUENTIAL LOGIC  Combinational:  Output Depends Only on Current Inputs  Sequential:  Output Depend on Current Inputs Plus Past History  Includes Memory Elements
    Digital Electronics: Logic System Design, Fall 2019. Prof. Iris Bahar (taught by Jiwon Choe), October 9, 2019. Lecture 11: Sequential Logic

    Combinational vs. sequential:
    • Combinational circuit: outputs depend only on the current inputs.
    • Sequential circuit: outputs depend on the current inputs plus past history; includes memory elements that hold state.

    Sequential circuits:
    • Outputs depend on the inputs and the state variables; the state variables embody the past.
    • Storage elements hold the state variables, and a clock periodically advances the circuit.

    Bistable memory storage element:
    • The fundamental building block of other state elements: two cross-coupled inverters I1 and I2 with two outputs, Q and Q', and no inputs. What does the circuit do?
    • Consider the two possible cases: if Q = 0, then Q' = 1 and Q = 0 (consistent); if Q = 1, then Q' = 0 and Q = 1 (consistent).
    • The bistable circuit stores 1 bit of state in the state variable Q (or Q'), but there are no inputs to control the state.

    Revisit NOR and NAND gates: a controlling input (1 for NOR, 0 for NAND) forces the gate's output regardless of the other input; NOT can be implemented with NAND/NOR by tying one input to the non-controlling value.

    S-R (set/reset) latch analysis. Consider the four possible cases:
    • S = 1, R = 0: then Q = 1 and Q' = 0.
    • S = 0, R = 1:
  • CPE 323 Introduction to Embedded Computer Systems: Introduction
    CPE 323 Introduction to Embedded Computer Systems: Introduction. Instructor: Dr Aleksandar Milenkovic

    CPE 323 administration: syllabus; textbook and other references; grading policy; important dates; course outline. Prerequisites: number representation; digital design (combinational and sequential logic); computer systems organization. Embedded Systems Laboratory: located in EB 106; EB 106 policies; introduction sessions; lab instructor.

    Lab administration: on-line lab manuals and tutorials; access cards; accounts. Lab assistant: Zahra Atashi. Lab sessions (select 4 from the following list): Monday 8:00-9:30 AM; Wednesday 8:00-9:30 AM; Wednesday 5:30-7:00 PM; Friday 8:00-9:30 AM; Friday 9:30-11:00 AM. A sign-up sheet will be available in the laboratory.

    Outline: computer engineering, past, present, and future; embedded systems: what are they, and where do we find them; structure and organization; software architectures.

    What is computer engineering? The creative application of engineering principles and methods to the design and development of hardware and software systems; a discipline that combines elements of both electrical engineering and computer science. Computer engineers are electrical engineers who have additional training in the areas of software design and hardware-software integration.

    What do computer engineers do? Computer engineers are involved in all aspects of computing, including the design of computing devices (both hardware and software). Where are computing devices? Embedded computer systems (low-end to high-end) in cars, aircraft, home appliances, missiles, medical devices, ..
  • Single-Cycle
    18-447 Computer Architecture, Lecture 5: Intro to Microarchitecture: Single-Cycle. Prof. Onur Mutlu, Carnegie Mellon University, Spring 2015, 1/26/2015

    Agenda for today and the next few lectures: start microarchitecture; single-cycle microarchitectures; multi-cycle microarchitectures; microprogrammed microarchitectures; pipelining; issues in pipelining: control and data dependence handling, state maintenance and recovery, ...

    Recap of two weeks and the last lecture: computer architecture today and basics (Lectures 1 & 2); fundamental concepts (Lecture 3); ISA basics and tradeoffs (Lectures 3 & 4). Last lecture: ISA tradeoffs continued, plus the MIPS ISA: instruction length; uniform vs. non-uniform decode; number of registers; addressing modes; aligned vs. unaligned access; RISC vs. CISC properties; MIPS ISA overview.

    Assignment for you (not to be turned in): as you learn the MIPS ISA, think about what tradeoffs the designers have made in terms of the ISA properties we talked about, and about the pros and cons of those design choices, in comparison to ARM, Alpha, x86, and VAX. Also think about the potential mistakes: branch delay slot? Load delay slot? No FP and no multiply in the initial MIPS?

    Food for thought: how would you design a new ISA? Where would you place it? What design choices would you make in terms of ISA properties? What would be the first question you ask in this process? "What is my design point?"

    Review, other example ISA-level tradeoffs: condition codes vs. not; VLIW vs. single instruction; SIMD (single instruction, multiple data) vs. SISD; precise vs. imprecise exceptions; virtual memory vs. not; unaligned access vs.