Razor: A Low-Power Pipeline Based on Circuit-Level Timing Speculation
Total pages: 16 · File type: PDF · Size: 1,020 KB
Recommended publications
-
Approaches in Green Computing
Special Issue - 2015, International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, NSRCL-2015 Conference Proceedings.
Reena Thomas and Fedrina J Manjaly, 3rd BCA, Department of Computer Science, Carmel College, Mala, Thrissur.
Abstract: In a 2008 article San Murugesan defined green computing as "the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems — such as monitors, printers, storage devices, and networking and communications systems — efficiently and effectively with minimal or no impact on the environment." Murugesan lays out four paths along which he believes the environmental effects of computing should be addressed: green use, green disposal, green design, and green manufacturing. Green computing can also develop solutions that offer benefits by "aligning all IT processes and practices with the core principles of sustainability, which are to reduce, reuse, and recycle; and finding innovative ways to use IT in business processes to deliver sustainability benefits across the enterprise and beyond".
[Figure 1: Green Computing Migration Framework]
I. INTRODUCTION
In 1992, the U.S. Environmental Protection Agency launched Energy Star, a voluntary labeling program that is designed to promote and recognize energy efficiency in monitors, climate control equipment, and other technologies. This resulted in the widespread adoption of sleep mode among consumer electronics.
II. APPROACHES
A. Product longevity
Gartner maintains that the PC manufacturing process accounts for 70% of the natural resources used in the life cycle of a PC. More recently, Fujitsu released a Life Cycle Assessment (LCA) of a desktop showing that …
-
Pipelining and Associated Timing Issues
PIPELINING AND ASSOCIATED TIMING ISSUES
Introduction: While studying sequential circuits, we studied latches and flip-flops. While latches form the heart of a flip-flop, we have explored the use of flip-flops in applications like counters, shift registers, sequence detectors, sequence generators and the design of finite state machines. Another important application of latches and flip-flops is in pipelining combinational/algebraic operations. To understand what pipelining is, consider the following example. Let us take a simple calculation which has three operations to be performed, viz.:
1. add a and b to get (a + b),
2. get the magnitude of (a + b), and
3. evaluate log |(a + b)|.
Each operation consumes a finite period of time; let us assume 40 ns, 35 ns and 60 ns respectively. The process can be represented pictorially as in Fig. 1. Consider a situation where we need to carry this out for a set of 100 such pairs. Doing them one by one would take a total of 100 * 135 = 13,500 ns. We can however reduce this time by recognizing that the whole process is a sequence of independent stages. Let the values to be evaluated be a1 to a100 and the corresponding values to be added be b1 to b100. Since the operations are sequential, we can first evaluate (a1 + b1); while |(a1 + b1)| is being evaluated, the adder would otherwise sit idle, so we can use it to evaluate (a2 + b2), giving us both |(a1 + b1)| and (a2 + b2) at the end of another evaluation period.
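To make the arithmetic concrete, here is a minimal C sketch of the two schedules (written for this summary, not taken from the original notes); it assumes pipeline registers are placed after each unit, the clock period equals the slowest stage (60 ns), and register overhead is ignored:

#include <stdio.h>

int main(void) {
    const double stage_ns[] = {40.0, 35.0, 60.0};   /* add, magnitude, log  */
    const int stages = 3, n = 100;                  /* 100 (a, b) pairs     */

    double serial = 0.0, slowest = 0.0;
    for (int i = 0; i < stages; i++) {
        serial += stage_ns[i];
        if (stage_ns[i] > slowest) slowest = stage_ns[i];
    }

    double unpipelined = n * serial;                 /* 100 * 135 = 13,500 ns   */
    double pipelined = (stages + n - 1) * slowest;   /* (3 + 99) * 60 = 6,120 ns */

    printf("unpipelined: %.0f ns\n", unpipelined);
    printf("pipelined:   %.0f ns\n", pipelined);
    printf("speedup:     %.2fx\n", unpipelined / pipelined);
    return 0;
}

With these stage delays, pipelining cuts the total from 13,500 ns to (3 + 99) * 60 = 6,120 ns, roughly a 2.2x speedup; the slowest stage caps the benefit.
-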
2.5 Classification of Parallel Computers
2.5 Classification of Parallel Computers
2.5.1 Granularity
In parallel computing, granularity means the amount of computation in relation to communication or synchronisation. Periods of computation are typically separated from periods of communication by synchronization events.
• fine level (same operations with different data)
  ◦ vector processors
  ◦ instruction level parallelism
  ◦ fine-grain parallelism:
    – Relatively small amounts of computational work are done between communication events
    – Low computation to communication ratio
    – Facilitates load balancing
    – Implies high communication overhead and less opportunity for performance enhancement
    – If granularity is too fine it is possible that the overhead required for communications and synchronization between tasks takes longer than the computation
• operation level (different operations simultaneously)
• problem level (independent subtasks)
  ◦ coarse-grain parallelism:
    – Relatively large amounts of computational work are done between communication/synchronization events
    – High computation to communication ratio
    – Implies more opportunity for performance increase
    – Harder to load balance efficiently
2.5.2 Hardware: Pipelining (was used in supercomputers, e.g. Cray-1)
With N elements in the pipeline and L clock cycles per element, the calculation takes L + N cycles; without pipelining it takes L * N cycles.
Example of good code for pipelining:
  do i = 1, k
     z(i) = x(i) + y(i)
  end do
Vector processors offer fast vector operations (operations on arrays). The previous example is also good for a vector processor (vector addition), but recursion, for example, is hard to optimise for vector processors. Example: Intel MMX, a simple vector processor.
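The distinction between the vectorisable loop and a recursion can be made concrete in C (a sketch added for this summary; the original notes use Fortran):

#include <stddef.h>

/* Independent iterations: z[i] depends only on x[i] and y[i], so the
   loop maps directly onto pipelined/vector hardware (the Fortran
   example above). */
void vector_add(double *z, const double *x, const double *y, size_t k) {
    for (size_t i = 0; i < k; i++)
        z[i] = x[i] + y[i];
}

/* Loop-carried dependence: z[i] needs z[i-1], so iterations cannot be
   issued to the vector unit independently -- the "recursion" case that
   is hard to optimise for vector processors. */
void running_sum(double *z, const double *x, size_t k) {
    z[0] = x[0];
    for (size_t i = 1; i < k; i++)
        z[i] = z[i - 1] + x[i];
}
-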
Microprocessor Architecture
EECE416 Microcomputer Fundamentals: Microprocessor Architecture. Dr. Charles Kim, Howard University.
Computer Architecture: a computer system is a CPU (with PC, registers, SR) plus memory. The ALU (Arithmetic Logic Unit) is built around the binary full adder; the microprocessor bus connects CPU and memory.
Architecture by CPU + memory organization:
• Princeton (or von Neumann) architecture: memory contains both instructions and data.
• Harvard architecture: separate data memory and instruction memory; higher performance, better for DSP, higher memory bandwidth.
Princeton architecture operation:
1. Step (A): The address of the instruction to be executed next is applied to the memory unit.
2. Step (B): The controller "decodes" the instruction.
3. Step (C): Following completion of the instruction, the controller provides the address, to the memory unit, at which the data result generated by the operation will be stored.
Internal memory ("registers"): external memory access is very slow; for quicker retrieval and storage, internal registers are used.
Architecture by instructions and their execution:
• CISC (Complex Instruction Set Computer): a variety of instructions for complex tasks, of varying length. The architecture of processors prior to the mid-1980s (IBM 390, Motorola 680x0, Intel 80x86): a basic fetch-execute sequence supporting a large number of complex instructions, complex decoding procedures, and a complex control unit; one instruction achieves a complex task.
• RISC (Reduced Instruction Set Computer): fewer and simpler instructions; load/store; high-performance microprocessors; pipelined instruction execution (several instructions are executed in parallel).
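As an illustration of the stored-program (Princeton) idea, here is a toy fetch-decode-execute loop in C in which a single memory array holds both instructions and data; the three-instruction ISA is invented for this sketch and does not correspond to any real processor:

#include <stdint.h>
#include <stdio.h>

enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };  /* hypothetical opcodes */

int main(void) {
    /* mem[] holds instructions (opcode, operand pairs) AND data. */
    uint8_t mem[16] = { OP_LOAD, 10, OP_ADD, 11, OP_HALT, 0,
                        0, 0, 0, 0,
                        7, 5 };                 /* data at addresses 10, 11 */
    uint8_t pc = 0, acc = 0;

    for (;;) {
        uint8_t opcode  = mem[pc];              /* fetch              */
        uint8_t operand = mem[pc + 1];
        pc += 2;
        switch (opcode) {                       /* decode and execute */
        case OP_LOAD: acc = mem[operand];  break;
        case OP_ADD:  acc += mem[operand]; break;
        case OP_HALT: printf("acc = %d\n", acc); return 0;  /* prints 12 */
        }
    }
}
-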
Instruction Pipelining
Chapter 5: A Closer Look at Instruction Set Architectures
Objectives:
• Understand the factors involved in instruction set architecture design.
• Gain familiarity with memory addressing modes.
• Understand the concepts of instruction-level pipelining and its effect upon execution performance.
5.1 Introduction: This chapter builds upon the ideas in Chapter 4. We present a detailed look at different instruction formats, operand types, and memory access methods, and we will see the interrelation between machine organization and instruction formats. This leads to a deeper understanding of computer architecture in general.
5.2 Instruction Formats:
• Instruction sets are differentiated by the following: number of bits per instruction; stack-based or register-based; number of explicit operands per instruction; operand location; types of operations; type and size of operands.
• Instruction set architectures are measured according to: main memory space occupied by a program; instruction complexity; instruction length (in bits); total number of instructions in the instruction set.
• In designing an instruction set, consideration is given to: instruction length (whether short, long, or variable); number of operands; number of addressable registers; memory organization (whether byte- or word-addressable); addressing modes (choose any or all: direct, indirect or indexed).
• Byte ordering, or endianness, is another major architectural consideration. If we have a two-byte integer, the integer may be stored so that the least significant byte is followed by the most significant byte or vice versa. In little endian machines, the least significant byte is followed by the most significant byte.
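A short C program makes byte ordering observable (an illustration added here, not from the original slides):

#include <stdint.h>
#include <stdio.h>

/* Inspect how a two-byte integer is laid out in memory. On a
   little-endian machine the low-order byte sits at the lower address. */
int main(void) {
    uint16_t value = 0xABCD;
    uint8_t *bytes = (uint8_t *)&value;   /* byte-level view of value */

    printf("byte 0: 0x%02X\n", bytes[0]);
    printf("byte 1: 0x%02X\n", bytes[1]);
    if (bytes[0] == 0xCD)
        printf("little endian: LSB (0xCD) stored first\n");
    else
        printf("big endian: MSB (0xAB) stored first\n");
    return 0;
}
-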
Review of Computer Architecture
Basic Computer Architecture. CSCE 496/896: Embedded Systems, Witawas Srisa-an. Credit: most of the slides are by Prof. Wayne Wolf, the author of the textbook, with some modifications for clarity; some background from CSCE 430 or equivalent is assumed.
von Neumann architecture: memory holds data and instructions. The central processing unit (CPU) fetches instructions from memory; the separation of CPU and memory distinguishes a programmable computer. CPU registers help out: program counter (PC), instruction register (IR), general-purpose registers, etc.
[Diagram: von Neumann architecture with memory unit, CPU (control unit + ALU), and input/output units; in the fetch example, the PC supplies address 200 to memory and the instruction ADD r5,r1,r3 is returned into the IR.]
Recalling pipelining: what is a potential problem with the von Neumann architecture? With one memory holding both instructions and data, a pipelined CPU may need to fetch an instruction and access data in the same cycle, so the single memory interface becomes a bottleneck.
Harvard architecture: separate program memory and data memory, each with its own address and data paths to the CPU. Harvard can't use self-modifying code, but it allows two simultaneous memory fetches. Most DSPs (e.g., Blackfin from ADI) use Harvard architecture for streaming data: greater memory bandwidth, different memory bit depths between instruction and data, and more predictable bandwidth. Today's processors: Harvard or von Neumann?
RISC vs. CISC: a complex instruction set computer (CISC) has many addressing modes and many operations; a reduced instruction set computer (RISC) uses load/store and pipelinable instructions. Instruction set characteristics: fixed vs. variable length; addressing modes; number of operands; types of operands. Tensilica Xtensa: RISC-based and variable length, but not CISC.
Programming model: the registers visible to the programmer. Some registers are not visible (IR).
Multiple implementations: successful architectures have several implementations, with varying clock speeds; different bus widths; different cache sizes, associativities, and configurations; local memory; etc.
-
Power-Aware Design Methodologies for FPGA-Based Implementation of Video Processing Systems
Old Dominion University, ODU Digital Commons. Electrical & Computer Engineering Theses & Dissertations, Winter 2007.
Power-Aware Design Methodologies for FPGA-Based Implementation of Video Processing Systems. Hau Trung Ngo, Old Dominion University.
Follow this and additional works at: https://digitalcommons.odu.edu/ece_etds (part of the Electrical and Computer Engineering Commons).
Recommended Citation: Ngo, Hau T. "Power-Aware Design Methodologies for FPGA-Based Implementation of Video Processing Systems" (2007). Doctor of Philosophy (PhD), Dissertation, Electrical & Computer Engineering, Old Dominion University, DOI: 10.25777/j6kw-q685, https://digitalcommons.odu.edu/ece_etds/185.
This dissertation is brought to you for free and open access by the Electrical & Computer Engineering department at ODU Digital Commons. It has been accepted for inclusion in Electrical & Computer Engineering Theses & Dissertations by an authorized administrator of ODU Digital Commons. For more information, please contact [email protected].
By Hau Trung Ngo, B.S. May 2001, Old Dominion University; M.S. May 2003, Old Dominion University. A dissertation submitted to the faculty of Old Dominion University in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Electrical and Computer Engineering, December 2007. Approved by: Vijayan K. Asari (Director), Shirshak K. Dhali (Member), Min Song (Member), Ravi Mukkamala (Member).
ABSTRACT: The increasing capacity and capabilities of FPGA devices in recent years provide an attractive option for performance-hungry applications in the image and video processing domain.
-
CS152: Computer Systems Architecture Pipelining
CS152: Computer Systems Architecture: Pipelining. Sang-Woo Jun, Winter 2021. A large amount of material adapted from MIT 6.004 "Computation Structures", Morgan Kaufmann "Computer Organization and Design: The Hardware/Software Interface: RISC-V Edition", and CS 152 slides by Isaac Scherson.
Eight great ideas:
❑ Design for Moore's Law
❑ Use abstraction to simplify design
❑ Make the common case fast
❑ Performance via parallelism
❑ Performance via pipelining
❑ Performance via prediction
❑ Hierarchy of memories
❑ Dependability via redundancy
But before we start… Performance measures: there are two metrics to consider when designing a system:
1. Latency: the delay from when an input enters the system until its associated output is produced.
2. Throughput: the rate at which inputs or outputs are processed.
The metric to prioritize depends on the application. An embedded system for airbag deployment? Latency. A general-purpose processor? Throughput.
Performance of combinational circuits: for combinational logic, latency = tPD and throughput = 1/tPD. In the example circuit, X feeds F(X) and G(X), whose outputs feed H(X) to produce Y; while H computes, F and G are not doing useful work, just holding their output data. Is this an efficient way of using hardware? (Source: MIT 6.004 2019 L12)
Pipelined circuits: pipelining adds registers to hold F's and G's outputs, so F and G can be working on input Xi+1 while H performs its computation on Xi: a 2-stage pipeline. For an input X during clock cycle j, the corresponding output is emitted during clock j+2. Assuming ideal registers and component latencies of 15 (F), 20 (G) and 25 (H):
                     Latency      | Throughput
Unpipelined:         45           | 1/45
2-stage pipelined:   50 (worse!)  | 1/25 (better!)
(Unpipelined latency is the critical path 20 + 25 = 45; the pipeline clocks at the slowest stage, so its latency is 25 + 25 = 50.)
Pipeline conventions. Definition: a well-formed K-stage pipeline ("K-pipeline") is an acyclic circuit having exactly K registers on every path from an input to an output.
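For reference, the table's arithmetic as a small C program (a sketch assuming ideal registers, as the slides do; not code from the course):

#include <stdio.h>

int main(void) {
    const double tF = 15.0, tG = 20.0, tH = 25.0;   /* component latencies */

    /* Unpipelined: F and G run in parallel on X, H consumes both outputs,
       so the critical path is max(tF, tG) + tH = 45. */
    double comb = (tF > tG ? tF : tG) + tH;

    /* 2-stage pipeline: registers after F and G; the clock period is set
       by the slowest stage, max(max(tF, tG), tH) = 25. */
    double clock = (tF > tG ? tF : tG);
    if (tH > clock) clock = tH;

    printf("unpipelined: latency = %.0f, throughput = 1/%.0f\n", comb, comb);
    printf("2-stage:     latency = %.0f (worse), throughput = 1/%.0f (better)\n",
           2.0 * clock, clock);
    return 0;
}

The pipeline trades a little latency (50 vs. 45) for nearly double the throughput (1/25 vs. 1/45), which is the slides' point.
-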
A Dynamic Voltage Scaling Algorithm for Sporadic Tasks∗
In: Proceedings of the 24th IEEE Real-Time Systems Symposium, Cancun, Mexico, December 2003, pp. 52-62.
A Dynamic Voltage Scaling Algorithm for Sporadic Tasks. Ala' Qadi and Steve Goddard, Computer Science & Engineering, University of Nebraska-Lincoln, Lincoln, NE 68588-0115; Shane Farritor, Mechanical Engineering, University of Nebraska-Lincoln, Lincoln, NE 68588-0656.
Abstract: Dynamic voltage scaling (DVS) algorithms save energy by scaling down the processor frequency when the processor is not fully loaded. Many algorithms have been proposed for periodic and aperiodic task models, but none support the canonical sporadic task model. A DVS algorithm, called DVSST, is presented that can be used with sporadic tasks in conjunction with preemptive EDF scheduling. The algorithm is proven to guarantee that each task meets its deadline while saving the maximum amount of energy possible with processor frequency scaling.
In CMOS circuits the power consumed by a CMOS gate is proportional to the square of the voltage applied to the circuit, as shown by Equation (1), where CL is the gate load capacitance (output capacitance), VDD is the supply voltage and f is the clock frequency. The circuit delay td is given by Equation (2), where k is a constant depending on the output gate size and the output capacitance and VT is the threshold voltage. The clock frequency is inversely proportional to the circuit delay; it is expressed using td and the logic depth of a critical path as in Equation (3), where Ld is the depth of the critical path.
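The equations themselves did not survive extraction. The standard forms consistent with the symbol definitions above, as commonly written in the DVS literature (a reconstruction, not a quotation of the paper), are:

  P = C_L * V_DD^2 * f                    (1)
  t_d = k * V_DD / (V_DD - V_T)^2         (2)
  f = 1 / (L_d * t_d)                     (3)

Equation (1) is the dynamic switching power of a gate, (2) is the usual supply-voltage delay model with threshold voltage V_T, and (3) ties the achievable clock frequency to the critical-path depth L_d.
-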
CompTIA FC0-GR1 Exam Questions & Answers
COMPTIA FC0-GR1 EXAM QUESTIONS & ANSWERS
Number: FC0-GR1. Passing score: 800. Time limit: 120 min. File version: 31.4. http://www.gratisexam.com/
Exam name: CompTIA Strata Green IT Exam.
QUESTION 1: A small business currently has a server room with a large cooling system that is appropriate for its size. The location of the server room is the top level of a building. The server room is filled with incandescent lighting that needs to continuously stay on for security purposes. Which of the following would be the MOST cost-effective way for the company to reduce the server room's energy footprint?
A. Replace all incandescent lighting with energy saving neon lighting.
B. Set an auto-shutoff policy for all the lights in the room to reduce energy consumption after hours.
C. Replace all incandescent lighting with energy saving fluorescent lighting.
D. Consolidate server systems into a lower number of racks, centralizing airflow and cooling in the room.
Correct answer: C
QUESTION 2: Which of the following methods effectively removes data from a hard drive prior to disposal? (Select TWO.)
A. Use the remove hardware OS feature
B. Formatting the hard drive
C. Physical destruction
D. Degauss the drive
E. Overwriting data with 1s and 0s by utilizing software
Correct answers: C, E
QUESTION 3: Which of the following terms is used when printing data on both the front and the back of paper?
A. Scaling
B. Copying
C. Duplex
D. Simplex
Correct answer: C
QUESTION 4: A user reports that their cell phone battery is dead and cannot hold a charge. …
-
Threading, SIMD and MIMD in the Multicore Context: The UltraSPARC T2
Overview (note: Tute 02 this Wednesday, handouts):
● Flynn's taxonomy
● multicore architecture concepts: hardware threading; SIMD vs MIMD in the multicore context
● T2: design features for multicore: system on a chip; execution: (in-order) pipeline, instruction latency; thread scheduling; caches: associativity, coherence, prefetch; memory system: crossbar, memory controller
● intermission; speculation; power savings; OpenSPARC
● T2 performance (why the T2 is designed as it is); the Rock processor
(slides by Andrew Over; ref: Tremblay, IEEE Micro 2009)
SIMD and MIMD in the Multicore Context. Flynn's taxonomy:
                 Single Instruction | Multiple Instruction
Single Data      SISD               | MISD
Multiple Data    SIMD               | MIMD
● For SIMD, the control unit and processor state (registers) can be shared; however, SIMD is limited to data parallelism (through multiple ALUs), so algorithms need a regular structure, e.g. dense linear algebra, graphics.
● SSE2, Altivec, Cell SPE (128-bit registers), e.g. a 4×32-bit add:
    Rx: x3 x2 x1 x0
  + Ry: y3 y2 y1 y0
  = Rz: z3 z2 z1 z0   (zi = xi + yi)
● The design requires massive effort and support from a commodity environment; massive parallelism (e.g. NVIDIA GPGPU), but memory is still a bottleneck.
● Multicore (CMT) is MIMD; hardware threading can be regarded as MIMD. The higher hardware cost also includes the larger shared resources (caches, TLBs) needed ⇒ less parallelism than for SIMD.
Hardware (multi)threading:
● Recall concurrent execution on a single CPU: switching between threads (or processes) requires saving thread state (register values) in memory.
● Motivation: utilize the CPU better when a thread is stalled for I/O (6300 Lect O1, p9-10). What are the costs? Do the same for smaller stalls? (e.g. …)
The UltraSPARC T2: System on a Chip (OpenSparc Slide Cast Ch 5: p79-81, 89):
● aggressively multicore: 8 cores, each with 8-way hardware threading (64 virtual CPUs)
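The 4×32-bit add above maps directly onto SSE2 intrinsics in C (a sketch added for this summary, not from the lecture slides):

#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stdio.h>

/* One instruction performs four 32-bit adds across a 128-bit register:
   z_i = x_i + y_i for i = 0..3. */
int main(void) {
    __m128i rx = _mm_set_epi32(3, 2, 1, 0);       /* x3 x2 x1 x0 */
    __m128i ry = _mm_set_epi32(30, 20, 10, 0);    /* y3 y2 y1 y0 */
    __m128i rz = _mm_add_epi32(rx, ry);           /* four adds at once */

    int z[4];
    _mm_storeu_si128((__m128i *)z, rz);
    printf("z = %d %d %d %d\n", z[3], z[2], z[1], z[0]);  /* 33 22 11 0 */
    return 0;
}
-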
Pipelined Computations
MCS 572 Lecture 27, Introduction to Supercomputing, Jan Verschelde, 15 March 2021.
Pipelined Computations:
1. Functional decomposition: car manufacturing with three plants; speedup for n inputs in a p-stage pipeline.
2. Pipeline implementations: processors in a ring topology; pipelined addition; pipelined addition with MPI (see the sketch after this section).
Car manufacturing: consider a simplified car manufacturing process in three stages: (1) assemble exterior, (2) fix interior, and (3) paint and finish. Cars c1, c2, c3, … enter the pipeline of plants P1, P2, P3 in sequence. [The space-time diagram shows P1, P2 and P3 each processing c1 through c5 in successive time steps.] After 3 time units, one car per time unit is completed.
p-stage pipelines: a pipeline with p processors is a p-stage pipeline. Suppose every process takes one time unit to complete. How long does a p-stage pipeline take to complete n inputs? After p time units the first input is done; then, for the remaining n − 1 items, the pipeline completes at a rate of one item per time unit ⇒ p + n − 1 time units for the p-stage pipeline to complete n inputs.
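The "pipelined addition with MPI" from the outline can be sketched in C with MPI: the ranks form a linear pipeline in which each receives a running sum from its predecessor, adds its own term, and forwards the result (an illustration written for this summary, not the lecture's actual code):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double term = rank + 1.0;   /* this rank's contribution: 1, 2, ..., p */
    double sum = 0.0;

    if (rank > 0)               /* receive running sum from predecessor */
        MPI_Recv(&sum, 1, MPI_DOUBLE, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    sum += term;
    if (rank < size - 1)        /* forward to successor in the pipeline */
        MPI_Send(&sum, 1, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD);
    else
        printf("pipelined sum of 1..%d = %.0f\n", size, sum);

    MPI_Finalize();
    return 0;
}

Launched with, e.g., mpirun -np 4 ./pipe_sum, the last rank prints 1 + 2 + 3 + 4 = 10; with p ranks fed a stream of n such sums, the p + n − 1 analysis above applies.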