Advanced Computer Architecture II: Review of 752 (Iron Law)

ECE/CS 757: Advanced Computer Architecture II
Instructor: Mikko H Lipasti
Spring 2009, University of Wisconsin-Madison
Lecture notes based on slides created by John Shen, Mark Hill, David Wood, Guri Sohi, Jim Smith, Natalie Enright Jerger, and probably others.

Review of 752
• Iron law
• Beyond pipelining
• Superscalar challenges
• Instruction flow
• Register data flow
• Memory data flow
• Modern memory interface

Iron Law

Processor Performance = Time/Program
                      = Instructions/Program x Cycles/Instruction x Time/Cycle
                        (code size)            (CPI)                (cycle time)

• Instructions/Program
  – Instructions executed, not static code size
  – Determined by algorithm, compiler, ISA
• Cycles/Instruction (CPI)
  – Determined by ISA and CPU organization
  – Overlap among instructions reduces this term
• Time/cycle
  – Determined by technology, organization, clever circuit design
• Architecture --> Implementation --> Realization
  – Compiler designer --> Processor designer --> Chip designer

Our Goal
• Minimize time, which is the product, NOT isolated terms (a worked sketch follows at the end of this section)
• Common error to miss terms while devising optimizations
  – E.g. ISA change to decrease instruction count
  – BUT leads to CPU organization which makes clock slower
• Bottom line: terms are inter-related

Pipelined Design
• Motivation: increase throughput with little increase in hardware
  – Bandwidth or Throughput = Performance
• Bandwidth (BW) = no. of tasks/unit time
• For a system that operates on one task at a time: BW = 1/delay (latency)
• BW can be increased by pipelining if many operands exist which need the same operation, i.e. many repetitions of the same task are to be performed
• Latency required for each task remains the same or may even increase slightly

Ideal Pipelining
• Combinational logic: n gate delays, BW = ~(1/n)
• Two stages of n/2 gate delays each: BW = ~(2/n)
• Three stages of n/3 gate delays each: BW = ~(3/n)
• Bandwidth increases linearly with pipeline depth
• Latency increases by latch delays
[Source: J. Hayes, Univ. of Michigan]

Example: Integer Multiplier
• 16x16 combinational multiplier
• ISCAS-85 C6288 standard benchmark
• Tools: Synopsys DC / LSI Logic 110nm gflxp ASIC

Configuration    Delay     MPS           Area (FF/wiring)    Area Increase
Combinational    3.52ns    284           7535 (--/1759)      --
2 Stages         1.87ns    534 (1.9x)    8725 (1078/1870)    16%
4 Stages         1.17ns    855 (3.0x)    11276 (3388/2112)   50%
8 Stages         0.80ns    1250 (4.4x)   17127 (8938/2612)   127%

Pipeline efficiency (see the sketch at the end of this section):
• 2-stage: nearly double throughput; marginal area cost
• 4-stage: 75% efficiency; area still reasonable
• 8-stage: 55% efficiency; area more than doubles

Pipelining Idealisms
• Uniform subcomputations
  – Can pipeline into stages with equal delay
  – Balance pipeline stages
• Identical computations
  – Can fill pipeline with identical work
  – Unify instruction types (example in 752 notes)
• Independent computations
  – No relationships between work units
  – Minimize pipeline stalls
• Are these practical?
  – No, but can get close enough to get significant speedup

Instruction Pipelining
• The "computation" to be pipelined:
  – Instruction Fetch (IF)
  – Instruction Decode (ID)
  – Operand(s) Fetch (OF)
  – Instruction Execution (EX)
  – Operand Store (OS)
  – Update Program Counter (PC)

Generic Instruction Pipeline
1. Instruction Fetch (IF)
2. Instruction Decode (ID)
3. Operand Fetch (OF)
4. Instruction Execute (EX)
5. Operand Store (OS)
• Based on "obvious" subcomputations
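To make the Iron Law product concrete, the short Python sketch below multiplies out the three terms. The machine parameters are invented for illustration (not from the lecture), and show how a "win" on one term can lose on the product:

    # Iron Law: Time/Program = Instr/Program x Cycles/Instr x Time/Cycle
    def cpu_time(instructions, cpi, cycle_time_ns):
        """Execution time in seconds, per the Iron Law."""
        return instructions * cpi * cycle_time_ns * 1e-9

    # Assumed baseline: 1e9 dynamic instructions, CPI 1.5, 1 ns clock.
    base = cpu_time(1e9, 1.5, 1.0)
    # Hypothetical ISA change: 20% fewer instructions, but a 30% slower clock.
    opt = cpu_time(0.8e9, 1.5, 1.3)
    print(f"baseline {base:.2f}s, 'optimized' {opt:.2f}s, speedup {base/opt:.2f}x")

Run it and the "optimization" comes out at 0.96x, i.e. a net slowdown: exactly the common error of optimizing one term in isolation.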
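The efficiency figures quoted for the pipelined multiplier follow directly from the table. A sketch, using only the delays given above, that computes throughput speedup over the combinational design and divides by stage count:

    # Pipeline efficiency = (throughput speedup) / (pipeline depth),
    # using the C6288 multiplier delays from the table above.
    combinational_delay_ns = 3.52
    for stages, delay_ns in [(2, 1.87), (4, 1.17), (8, 0.80)]:
        speedup = combinational_delay_ns / delay_ns   # MPS ratio
        efficiency = speedup / stages                 # fraction of ideal scaling
        print(f"{stages}-stage: {speedup:.1f}x speedup, {efficiency:.0%} efficient")

This reproduces the 1.9x/3.0x/4.4x speedups and the 75%/55% efficiencies: latch overhead eats a growing share of each shorter cycle, which is why bandwidth stops scaling linearly with depth in practice.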
Program Dependences
• A true dependence between two instructions may only involve one subcomputation of each instruction
• The implied sequential precedences are an overspecification: sufficient, but not necessary, to ensure program correctness

Program Data Dependences (a classification sketch follows at the end of this section)
• True dependence (RAW): D(i) ∩ R(j) ≠ ∅
  – j cannot execute until i produces its result
• Anti-dependence (WAR): R(i) ∩ D(j) ≠ ∅
  – j cannot write its result until i has read its sources
• Output dependence (WAW): D(i) ∩ D(j) ≠ ∅
  – j cannot write its result until i has written its result

Control Dependences
• Conditional branches
  – Branch must execute to determine which instruction to fetch next
  – Instructions following a conditional branch are control dependent on the branch instruction

Resolution of Pipeline Hazards
• Pipeline hazards
  – Potential violations of program dependences
  – Must ensure program dependences are not violated
• Hazard resolution
  – Static: compiler/programmer guarantees correctness
  – Dynamic: hardware performs checks at runtime
• Pipeline interlock
  – Hardware mechanism for dynamic hazard resolution
  – Must detect and enforce dependences at runtime

IBM RISC Experience [Agerwala and Cocke 1987]
• Internal IBM study: limits of a scalar pipeline?
• Memory bandwidth
  – Fetch 1 instr/cycle from I-cache
  – 40% of instructions are load/store (D-cache)
• Code characteristics (dynamic)
  – Loads: 25%
  – Stores: 15%
  – ALU/RR: 40%
  – Branches: 20%
    • 1/3 unconditional (always taken)
    • 1/3 conditional taken, 1/3 conditional not taken

IBM Experience
• Cache performance
  – Assume 100% hit ratio (upper bound)
  – Cache latency: I = D = 1 cycle default
• No cache bypass of RF, no load/branch scheduling
  – Load penalty: 2 cycles: 0.25 x 2 = 0.5 CPI
  – Branch penalty: 2 cycles: 0.2 x 2/3 x 2 = 0.27 CPI
  – Total CPI: 1 + 0.5 + 0.27 = 1.77 CPI
• Bypass, no load/branch scheduling
  – Load penalty: 1 cycle: 0.25 x 1 = 0.25 CPI
  – Total CPI: 1 + 0.25 + 0.27 = 1.52 CPI

CPI Optimizations
• Goal and impediments
  – CPI = 1, prevented by pipeline stalls
• Load and branch scheduling
  – Loads
    • 25% cannot be scheduled (delay slot empty)
    • 65% can be moved back 1 or 2 instructions
    • 10% can be moved back 1 instruction
  – Branches
    • Unconditional: 100% schedulable (fill one delay slot)
    • Conditional: 50% schedulable (fill one delay slot)
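The three dependence definitions above are just set intersections on each instruction's read and write sets, so they translate directly into code. A minimal sketch, assuming an invented three-address instruction record (dest = D(x), srcs = R(x)):

    # Classify the data dependences of j on an earlier instruction i.
    from typing import NamedTuple

    class Instr(NamedTuple):        # hypothetical 3-address form
        dest: str                   # register written: D(x)
        srcs: tuple                 # registers read:   R(x)

    def dependences(i, j):
        kinds = set()
        if i.dest in j.srcs:        # D(i) ∩ R(j) ≠ ∅
            kinds.add("RAW (true)")
        if j.dest in i.srcs:        # R(i) ∩ D(j) ≠ ∅
            kinds.add("WAR (anti)")
        if i.dest == j.dest:        # D(i) ∩ D(j) ≠ ∅
            kinds.add("WAW (output)")
        return kinds

    i1 = Instr("r1", ("r2", "r3"))          # r1 <- r2 + r3
    i2 = Instr("r4", ("r1", "r1"))          # r4 <- r1 * r1
    print(dependences(i1, i2))              # {'RAW (true)'}

A pipeline interlock or a scheduling compiler enforces exactly these conditions; only RAW reflects real dataflow, while WAR/WAW are name conflicts.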
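The CPI totals in the IBM study follow mechanically from the dynamic instruction mix. A small sketch reproducing the two unscheduled configurations (all numbers from the slides above):

    # CPI = 1 + load stalls + taken-branch stalls, per the IBM study mix.
    load_freq, branch_freq = 0.25, 0.20
    taken_frac = 2.0 / 3.0    # 1/3 unconditional + 1/3 conditional taken

    def cpi(load_penalty, branch_penalty):
        return (1.0 + load_freq * load_penalty
                    + branch_freq * taken_frac * branch_penalty)

    print(f"no bypass:   {cpi(2, 2):.2f} CPI")   # 1 + 0.50 + 0.27 = 1.77
    print(f"with bypass: {cpi(1, 2):.2f} CPI")   # 1 + 0.25 + 0.27 = 1.52

The scheduling optimizations on the following slides simply shrink the effective penalties inside this same expression.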
More CPI Optimizations
• Bypass, scheduling of loads/branches
  – Load penalty
    • 65% + 10% = 75% can be moved back: no penalty
    • 25% => 1 cycle penalty
    • 0.25 x 0.25 x 1 = 0.0625 CPI
  – Branch penalty
    • 1/3 unconditional, 100% schedulable => 1 cycle
    • 1/3 conditional not-taken => no penalty (predict not-taken)
    • 1/3 conditional taken, 50% schedulable => 1 cycle
    • 1/3 conditional taken, 50% unschedulable => 2 cycles
    • 0.2 x [1/3 x 1 + 1/3 x 0.5 x 1 + 1/3 x 0.5 x 2] = 0.167 CPI
• Total CPI: 1 + 0.063 + 0.167 = 1.23 CPI

Simplify Branches
• Assume 90% of branches can be PC-relative
  – No register indirect, no register access
  – Separate adder (like MIPS R3000)
  – Branch penalty reduced
• 15% overhead from program dependences remains

PC-relative    Schedulable    Penalty
Yes (90%)      Yes (50%)      0 cycles
Yes (90%)      No (50%)       1 cycle
No (10%)       Yes (50%)      1 cycle
No (10%)       No (50%)       2 cycles

• Total CPI: 1 + 0.063 + 0.085 = 1.15 CPI = 0.87 IPC

Limits of Pipelining
• IBM RISC Experience
  – Control and data dependences add 15%
  – Best case CPI of 1.15, IPC of 0.87
  – Deeper pipelines (higher frequency) magnify dependence penalties
• This analysis assumes 100% cache hit rates
  – Hit rates approach 100% for some programs
  – Many important programs have much worse hit rates

Processor Performance

Processor Performance = Time/Program
                      = Instructions/Program x Cycles/Instruction x Time/Cycle
                        (code size)            (CPI)                (cycle time)

• In the 1980's (decade of pipelining): CPI 5.0 => 1.15
• In the 1990's (decade of superscalar): CPI 1.15 => 0.5 (best case)
• In the 2000's (decade of multicore): core CPI unchanged; chip CPI scales with #cores

Limits on Instruction Level Parallelism (ILP)
(a toy measurement in the spirit of these studies appears at the end of these notes)

Study                         IPC limit
Weiss and Smith [1984]        1.58
Sohi and Vajapeyam [1987]     1.81
Tjaden and Flynn [1970]       1.86 (Flynn's bottleneck)
Tjaden and Flynn [1973]       1.96
Uht [1986]                    2.00
Smith et al. [1989]           2.00
Jouppi and Wall [1988]        2.40
Johnson [1991]                2.50
Acosta et al. [1986]          2.79
Wedig [1982]                  3.00
Butler et al. [1991]          5.8
Melvin and Patt [1991]        6
Wall [1991]                   7 (Jouppi disagreed)
Kuck et al. [1972]            8
Riseman and Foster [1972]     51 (no control dependences)
Nicolau and Fisher [1984]     90 (Fisher's optimism)

Superscalar Proposal
• Go beyond single instruction pipeline, achieve IPC > 1
• Dispatch multiple instructions per cycle
• Provide more generally applicable form of concurrency (not just vectors)
• Geared for sequential code that is hard to parallelize otherwise
• Exploit fine-grained or instruction-level parallelism (ILP)

Limitations of Scalar Pipelines
• Scalar upper bound on throughput
  – IPC <= 1 or CPI >= 1
• Inefficient unified pipeline
  – Long latency for each instruction
• Rigid pipeline stall policy
  – One stalled instruction stalls all newer instructions

Parallel Pipelines
[Figure: (a) no parallelism, (b) temporal parallelism (pipelining), (c) spatial parallelism, (d) parallel pipeline combining both]

Power4 Diversified Pipelines
[Figure: Power4 pipeline; I-cache and fetch queue with branch scan/prediction feed decode, then separate FX/LD 1, FX/LD 2, FP, and BR/CR issue queues and execution units, tracked by a reorder buffer]

Rigid Pipeline Stall Policy
[Figure: a stalled instruction causes backward propagation of stalling to all newer instructions; bypassing of the stalled instruction is not permitted]
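How numbers like those in the ILP-limits table are obtained can be shown with a toy experiment: schedule each instruction of a trace at the earliest cycle its true (RAW) dependences allow, with unlimited issue width, then divide instruction count by critical-path length. The sketch below does this for an invented six-instruction trace; unit latency and implicit register renaming (so only RAW constrains) are assumptions, and the real studies differ mainly in how they treat control dependences, renaming, and memory disambiguation:

    # Dataflow (ILP) limit of a trace: earliest-issue scheduling under
    # RAW dependences only, unbounded resources, unit latency (all assumed).
    def dataflow_ipc(trace):
        """trace: list of (dest, srcs) register tuples in program order."""
        ready = {}                    # register -> cycle its value is ready
        for dest, srcs in trace:
            issue = max((ready.get(r, 0) for r in srcs), default=0)
            ready[dest] = issue + 1   # unit-latency result
        return len(trace) / max(ready.values())

    trace = [("r1", ("r0",)), ("r2", ("r1",)), ("r3", ("r1",)),
             ("r4", ("r2", "r3")), ("r5", ("r0",)), ("r6", ("r5",))]
    print(f"dataflow-limit IPC: {dataflow_ipc(trace):.2f}")   # 6 instrs / 3 cycles = 2.00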