A Pattern Language for Writing Efficient Kernels on GPU Architectures

Total pages: 16

File type: PDF, size: 1,020 KB

UNIVERSITÀ DEGLI STUDI DI ROMA "TOR VERGATA"
This dissertation is submitted for the degree of the Computer Science and Automation Engineering Doctorate, XXVI Cycle.

SIMPL: A Pattern Language for Writing Efficient Kernels on GPU Architectures
Davide Barbieri, A.A. 2013/2014
Advisors: Valeria Cardellini, Salvatore Filippone
Coordinator: Giovanni Schiavon

© Copyright 2015 by Davide Barbieri. All rights reserved. Reproduction in whole or in part is prohibited without the written consent of the copyright owner.

ACKNOWLEDGEMENTS

I would like to thank my advisors, Valeria Cardellini and Salvatore Filippone. My graduate studies would not have started at all without their support and persuading skills. I would like to thank them for teaching me about pipelines, instruction-level parallelism, memory hierarchies and computational science during my undergraduate studies, and about doing research with passion and curiosity during my doctorate. I would like to thank Daniel Pierre Bovet, Marco Cesati and Emiliano Betti for letting me teach CUDA three times during these years, and the students of the CUDA course for following my lectures with interest. I gratefully acknowledge the support we have received from CASPUR for the PSBLAS-GPU project, under the HPC Grant 2011 on GPU cluster; from CINECA for project IsC14 HyPSBLAS, under the ISCRA grant programme for 2014; and from Amazon with the AWS in Education Grant programme 2014. Thanks are also due to Nvidia Corporation for making a platform that provides a great tool to learn massively parallel programming and to play good-looking videogames. I thank my parents, my grandparents, my brother Domenico, and Eleonora for giving me a reason to improve my skills and achieve goals in life, for supporting me during stressful periods (especially near deadlines), and for all their love.

ABSTRACT

From embedded systems to desktop computers, and of course HPC (High Performance Computing) solutions, computing resources today are mostly based on multi-core / many-core architectures. While parallel hardware is ubiquitous, applications that exploit its full potential are still difficult to write. Graphics Processing Unit (GPU) programming deserves a particular mention. Thanks to its data-parallel oriented architecture, a GPU can achieve a higher throughput, in terms of floating point operations per unit time and memory bandwidth, than an off-the-shelf CPU with similar power consumption and cost. Nevertheless, a naïve GPU implementation can be so inefficient as to lose orders of magnitude in performance compared to its optimized counterpart. For this reason, it is fundamental to have enough experience on the reference architecture to provide an optimal solution and make the switch from CPU to GPU advantageous. A pattern language defines a structured collection of design practices within a field of expertise. In the past, pattern languages have proven to be an effective way to communicate experience and to help researchers and developers reduce the learning curve in a particular field of expertise. In the field of parallel programming, much work has been done to provide a composable set of patterns that can be used to design an algorithm in a way that makes it completely hardware-agnostic and flawlessly integrable inside algorithmic skeleton frameworks, which take care of producing optimized code for a target architecture or a heterogeneous platform.
While algorithmic skeleton frameworks are in many cases portable and efficient, a number of common applications had to be retrofitted to provide good performance on GPUs; this shows the need for the novice developer to get well acquainted with the details of the platform. In this dissertation we present a new pattern language, SIMPL (SIMt Pattern Language), that is solely dedicated to the development of optimized code on a SIMT (single-instruction multiple-thread) architecture, which models a modern GPU. To the best of our knowledge, this is the first pattern language exclusively dedicated to General Purpose computing on GPUs (GPGPU). The language currently comprises 16 patterns, structured into 5 categories, and gathers the experience we have acquired on this platform so far, presenting it in a reusable form. Among those patterns, we place particular emphasis on the original approaches that constitute our main contribution to the research field. We discuss in detail a set of case studies which involve the application of our pattern language. Specifically, we describe the implementation of the sparse matrix-vector multiply routine, reviewing the available literature and discussing our own approach to the problem, together with pointers to available software. As our main contribution, we propose three novel matrix storage formats: ELL-G and HLL, which were derived from ELL, and HDIA for matrices having a mostly diagonal sparsity pattern. We compare the performance of the proposed formats to the results provided by state-of-the-art formats, with experiments carried out on different GPU platforms and test matrices coming from various application domains. Furthermore, we implement the reversal of the MD5 and SHA1 hash functions on a cluster of Nvidia GPUs. Our CUDA implementation achieves comparable or even better average performance when compared to other popular password cracking software, reaching near-maximal throughput on different GPU architectures. Finally, we present the GPU implementation of a broad-phase collision detection algorithm for particle simulations, which uses a uniform grid as its spatial partitioning scheme. In some tests our original approach achieves a speedup of 2 compared to the fastest known method supporting a fixed maximum number of elements per cell, and a speedup of 7 compared with the fastest method without such a constraint.

TABLE OF CONTENTS

Table of contents ... ix
List of figures ... xv
List of tables ... xix
1 Introduction ... 1
  1.1 Introduction and motivation ... 1
  1.2 Contributions ... 4
  1.3 Organization ... 6
2 Background ... 9
  2.1 Basic parallel laws ... 11
    2.1.1 Amdahl's law ... 11
    2.1.2 Gustafson's law ... 12
    2.1.3 Little's law ... 17
  2.2 The work-time paradigm ... 18
  2.3 The PRAM model ... 19
    2.3.1 Brent's theorem ... 19
  2.4 Pattern-based design ... 21
  2.5 Skeleton-based parallel programming ... 22
3 General-purpose computing on GPU ... 25
  3.1 Evolution of the GPU ... 28
    3.1.1 The first GPUs and the fixed pipeline ... 28
    3.1.2 Shader cores and shader model ... 29
    3.1.3 Unified shader model ... 30
    3.1.4 From unified shader model to CUDA ... 31
  3.2 Compute Unified Device Architecture ... 31
4 Pattern language ... 37
  4.1 Overview ... 37
  4.2 Related work ... 37
  4.3 Pattern template ... 39
  4.4 Language context ... 40
  4.5 Language forces ... 40
  4.6 Taxonomy ... 41
  4.7 Underlying architecture model ... 42
    4.7.1 Device utilization ... 48
    4.7.2 Memory model ... 50
    4.7.3 Relationship with PRAM model ... 51
5 Mapping patterns ... 53
  5.1 Vectorize pattern ... 53
  5.2 Enumerate pattern ... 55
  5.3 Load Remap pattern ... 60
6 Consistency patterns ... 63
  6.1 Double Buffering pattern ... 63
  6.2 Ghost Cell pattern ... 65
  6.3 Wave pattern ... 68
7 Transformation patterns ... 73
  7.1 Cascading pattern ... 73
  7.2 Reduce pattern ... 75
  7.3 Scan pattern ... 84
8 Construction patterns ... 93
  8.1 Count And Allocate pattern ... 93
  8.2 Atomic Add Insertion pattern ... 95
  8.3 Sort And Pack pattern ... 98
  8.4 Atomic Concatenate pattern ... 101
  8.5 Atomic Traversal pattern ... 103
9 Tuning patterns ... 107
  9.1 Scale pattern ... 107
  9.2 Anti Camping pattern ... 111
10 Case study: sparse matrix vector multiply ... 115
  10.1 Overview ... 115
  10.2 Storage formats for sparse matrices ... 118
    10.2.1 COOrdinate ... 120
    10.2.2 Compressed Sparse Rows ... 121
    10.2.3 Compressed Sparse Columns ... 123
    10.2.4 Storage formats for vector computers ... 123
  10.3 Formats for sparse matrices on GPU ... 126
  10.4 Related work ... 127
    10.4.1 COO variants ... 129
    10.4.2 CSR variants ... 131
    10.4.3 CSC variants ... 134
    10.4.4 ELLPACK variants ... 135
    10.4.5 DIA variants ... 140
    10.4.6 Hybrid variants ... 140
    10.4.7 New GPU-specific storage formats ... 143
    10.4.8 Automated tuning and performance optimization ... 144
  10.5 Formats for sparse matrices on SIMT architectures ... 146
    10.5.1 GPU ELLPACK ... 147
    10.5.2 Hacked ELLPACK ... 150
    10.5.3 DIA and Hacked DIA ... 153
  10.6 Experimental results ... 155
11 Case study: exhaustive key search ... 175
  11.1 Related work ... 178
  11.2 Password cracking on GPU ... 178
    11.2.1 GPU kernel ... 181
  11.3 GPU optimizations ... 182
    11.3.1 CUDA multiprocessor throughput ... 183
    11.3.2 The main bottleneck ... 185
  11.4 Experimental results ... 190
    11.4.1 Reference hardware ... 190
    11.4.2 Performance results ... 191
12 Case study: interacting particles simulation ... 195
  12.1 Uniform grids on GPU ... 196
  12.2 Atomic Concatenate implementation ... 200
  12.3 Experimental results ... 200
    12.3.1 Performance analysis ... 202
13 Conclusions ... 207
  13.1 Future directions ... 209
References ... 211

LIST OF FIGURES

2.1 Amdahl's Law for different parallel fractions ... 13
2.2 Speedup in data-parallel programs ... 16
2.3 PRAM machine diagram ... 19
3.1 Floating-point operations per second for CPU and GPU ... 26
3.2 Memory bandwidth for CPU and GPU ... 27
3.3 A 2D grid of threads ... 33
4.1 Single-instruction multiple-threads model: host and devices ... 43
4.2 Single-instruction multiple-threads model: a multi-processor ... 44
6.1 Ghost Cell pattern: simulate and copy ... 66
6.2 Ghost Cell pattern: double buffering
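
The sparse matrix-vector multiply case study mentioned in the abstract (Chapter 10) builds on the ELLPACK (ELL) storage format, from which the proposed ELL-G and HLL formats are derived. As a point of reference only, and not the dissertation's ELL-G or HLL code, the following is a minimal CUDA sketch of a standard ELL-format SpMV kernel (one thread per row, column-major ELL arrays so that neighbouring threads access consecutive addresses); all names and parameters are illustrative assumptions.

// Minimal ELLPACK SpMV sketch: y = A*x, one thread per row.
// ELL arrays stored column-major (index k*num_rows + row), a common
// GPU layout so that consecutive threads read consecutive addresses.
__global__ void spmv_ell(int num_rows, int max_nnz_per_row,
                         const int *ell_col, const double *ell_val,
                         const double *x, double *y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= num_rows) return;

    double sum = 0.0;
    for (int k = 0; k < max_nnz_per_row; ++k) {
        int idx = k * num_rows + row;      // column-major ELL layout
        int col = ell_col[idx];
        if (col >= 0)                      // padded entries marked with -1
            sum += ell_val[idx] * x[col];
    }
    y[row] = sum;
}

// Host-side launch (illustrative):
//   int threads = 128;
//   int blocks  = (num_rows + threads - 1) / threads;
//   spmv_ell<<<blocks, threads>>>(num_rows, max_nnz, d_col, d_val, d_x, d_y);

The one-thread-per-row mapping is the usual ELL baseline; formats such as ELL-G and HLL exist precisely because the padding in plain ELL wastes memory and bandwidth when row lengths are irregular.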
Recommended publications
  • Lecture 7: Synchronous Sequential Logic
    Systems I: Computer Organization and Architecture. Lecture 7: Synchronous Sequential Logic.
    • The digital circuits we have looked at so far are combinational, depending only on their inputs. In practice, most systems have many components that contain memory elements, which require sequential logic.
    • A sequential circuit contains both a combinational circuit and memory elements, whose outputs also serve as inputs to the combinational circuit (inputs -> combinational circuit -> outputs, with the memory elements fed back into the combinational circuit).
    • The binary data stored in the memory elements define the state of the sequential circuit and help determine the conditions for changing that state.
    • There are two types of sequential circuits, classified by the timing of their signals: synchronous sequential circuits have behavior defined from knowledge of their signals at discrete instants of time, while the behavior of asynchronous sequential circuits depends on the order in which input signals change and can be affected at any instant of time.
    • Synchronous sequential circuits use signals that affect the memory elements only at discrete time instants. Synchronization is achieved by a timing device called a master-clock generator, which generates a periodic train of clock pulses.
    • Basic flip-flop circuit using NOR gates: inputs S (set) and R (reset), outputs Q and Q', with a set state and a clear state. Truth table:
        S R | Q Q'
        1 0 | 1 0
        0 0 | 1 0   (after S = 1, R = 0)
        0 1 | 0 1
        0 0 | 0 1   (after S = 0, R = 1)
        1 1 | 0 0
  • Performance Evaluation of a Signal Processing Algorithm with General-Purpose Computing on a Graphics Processing Unit
    Degree project in technology, first cycle, 15 credits, Stockholm, Sweden, 2019. Performance Evaluation of a Signal Processing Algorithm with General-Purpose Computing on a Graphics Processing Unit. Filip Appelgren and Måns Ekelund, KTH Royal Institute of Technology, School of Electrical Engineering and Computer Science, June 5, 2019.
    Abstract: Graphics Processing Units (GPUs) are increasingly being used for general-purpose programming, instead of their traditional graphical tasks. This is because of their raw computational power, which in some cases gives them an advantage over the traditionally used Central Processing Unit (CPU). This thesis therefore sets out to identify the performance of a GPU in a correlation algorithm, and which parameters have the greatest effect on GPU performance. The method used for determining performance was quantitative, utilizing a clock library in C++ to measure the performance of the algorithm as the problem size increased. The initial problem size was set to 2^8 and increased exponentially to 2^21. The results show that smaller sample sizes perform better on the serial CPU implementation, but that the parallel GPU implementations start outperforming the CPU between problem sizes of 2^9 and 2^10. It became apparent that GPUs benefit from larger problem sizes, mainly because of the memory overhead costs involved with allocating and transferring data. Further, the algorithm under evaluation is not well suited to a parallelized implementation due to a high amount of branching logic, which can lead to warp divergence and drastically lower performance.
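    The methodology described above (timing a GPU implementation with a C++ clock library while the problem size grows, with allocation and transfer overhead included in the measurement) is a common pattern. Below is a minimal CUDA/C++ sketch of how such a measurement might be structured; the kernel and all names are hypothetical placeholders, not the thesis's actual code.

        #include <chrono>
        #include <cstdio>
        #include <vector>
        #include <cuda_runtime.h>

        // Hypothetical stand-in for the evaluated signal processing kernel.
        __global__ void process(const float *in, float *out, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) out[i] = in[i] * 2.0f;
        }

        int main() {
            for (int p = 8; p <= 21; ++p) {                  // problem sizes 2^8 .. 2^21
                int n = 1 << p;
                std::vector<float> h_in(n, 1.0f), h_out(n);

                auto t0 = std::chrono::steady_clock::now();  // include alloc + transfer overhead

                float *d_in, *d_out;
                cudaMalloc(&d_in,  n * sizeof(float));
                cudaMalloc(&d_out, n * sizeof(float));
                cudaMemcpy(d_in, h_in.data(), n * sizeof(float), cudaMemcpyHostToDevice);

                int threads = 256, blocks = (n + threads - 1) / threads;
                process<<<blocks, threads>>>(d_in, d_out, n);
                cudaDeviceSynchronize();                     // wait for the kernel to finish

                cudaMemcpy(h_out.data(), d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
                cudaFree(d_in);
                cudaFree(d_out);

                auto t1 = std::chrono::steady_clock::now();
                double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
                std::printf("n = 2^%d  total GPU time (incl. transfers): %.3f ms\n", p, ms);
            }
            return 0;
        }

    Timing the whole sequence, rather than just the kernel, is what exposes the memory overhead effect the excerpt mentions: for small n the transfers dominate, and the GPU only wins once the problem is large enough.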
  • Lecture 34: Bonus Topics
    Lecture 34: Bonus Topics. Philipp Koehn, David Hovemeyer, December 7, 2020. 601.229 Computer Systems Fundamentals.
    Outline: GPU programming; virtualization and containers; digital circuits; compilers. Code examples on web page: bonus.zip.
    GPU programming and 3D graphics: rendering 3D graphics requires significant computation. Geometry: determine visible surfaces based on the geometry of the 3D shapes and the position of the camera. Rasterization: determine pixel colors based on surface, texture, and lighting. A GPU is a specialized processor for doing these computations fast. GPU computation: use the GPU for general-purpose computation.
    Streaming multiprocessor: fetches an instruction (I-Cache) and applies it over a vector of data; each vector element is processed in one thread (MT issue); each thread is handled by a scalar processor (SP); special function units (SFU) handle special operations.
    Flynn's taxonomy:
    – SISD (single instruction, single data): uni-processors (most CPUs until the 1990s).
    – MIMD (multiple instruction, multiple data): all modern CPUs; multiple cores on a chip, each core running instructions that operate on its own data.
    – SIMD (single instruction, multiple data): streaming multiprocessors (e.g., GPUs); multiple cores on a chip, with the same instruction executed on different data.
    GPU programming: if you have an application where the data is regular (e.g., arrays) and the computation is regular (e.g., the same computation is performed on many array elements), then doing the computation on the GPU is likely to be much faster than doing it on the CPU. Issues: GPU
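    To make the SIMD point above concrete, here is a minimal CUDA sketch in which the same instruction stream is executed by many threads, each operating on a different array element. It is an illustrative example, not code from the lecture, and the names are assumptions.

        // Each thread runs the same code but indexes a different element:
        // "single instruction, multiple data" as realized by the SIMT model.
        __global__ void vector_add(const float *a, const float *b, float *c, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;   // unique global index
            if (i < n)
                c[i] = a[i] + b[i];
        }

        // Launch with enough blocks to cover all n elements (illustrative):
        //   int threads = 256;
        //   int blocks  = (n + threads - 1) / threads;
        //   vector_add<<<blocks, threads>>>(d_a, d_b, d_c, n);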
  • Sequential Logic (Combinational vs. Sequential Circuits)
    Digital Electronics: Logic System Design, Fall 2019. Prof. Iris Bahar (taught by Jiwon Choe), October 9, 2019. Lecture 11: Sequential Logic, combinational vs. sequential.
    Combinational: the output depends only on the current inputs. Sequential: the output depends on the current inputs plus past history, and the circuit includes memory elements and state.
    Sequential circuits: outputs depend on the inputs and on state variables; the state variables embody the past; storage elements hold the state variables; a clock periodically advances the circuit.
    Bistable memory storage element: the fundamental building block of other state elements; two outputs, Q and Q'; no inputs (two cross-coupled inverters). Consider the two possible cases: if Q = 0, then Q' = 1 and Q = 0 (consistent); if Q = 1, then Q' = 0 and Q = 1 (consistent). The bistable circuit stores 1 bit of state in the state variable Q (or Q'), but there are no inputs to control the state.
    Revisit NOR and NAND gates: controlling inputs for NAND and NOR gates; implementing NOT with NAND/NOR using non-controlling inputs.
    S-R (set/reset) latch analysis, built from two cross-coupled NOR gates N1 and N2: consider the four possible input cases. For S = 1, R = 0: Q = 1 and Q' = 0. For S = 0, R = 1:
  • CPE 323 Introduction to Embedded Computer Systems: Introduction
    CPE 323 Introduction to Embedded Computer Systems: Introduction. Instructor: Dr Aleksandar Milenkovic.
    CPE 323 administration: syllabus, textbook and other references, grading policy, important dates, course outline. Prerequisites: number representation; digital design (combinational and sequential logic); computer systems organization. Embedded Systems Laboratory: located in EB 106; EB 106 policies; introduction sessions; lab instructor. Lab sessions, on-line lab manuals and tutorials, access cards, accounts. Lab assistant: Zahra Atashi. Lab sessions (select 4 from the following list): Monday 8:00-9:30 AM, Wednesday 8:00-9:30 AM, Wednesday 5:30-7:00 PM, Friday 8:00-9:30 AM, Friday 9:30-11:00 AM. A sign-up sheet will be available in the laboratory.
    Outline: Computer engineering: past, present, future. Embedded systems: what are they? where do we find them? structure and organization; software architectures.
    What is computer engineering? The creative application of engineering principles and methods to the design and development of hardware and software systems; a discipline that combines elements of both electrical engineering and computer science. Computer engineers are electrical engineers that have additional training in the areas of software design and hardware-software integration.
    What do computer engineers do? Computer engineers are involved in all aspects of computing, including the design of computing devices (both hardware and software). Where are computing devices? Embedded computer systems (low-end to high-end), in cars, aircraft, home appliances, missiles, medical devices,..
  • Single-Cycle
    18-447 Computer Architecture, Lecture 5: Intro to Microarchitecture: Single-Cycle. Prof. Onur Mutlu, Carnegie Mellon University, Spring 2015, 1/26/2015.
    Agenda for today and the next few lectures: start microarchitecture; single-cycle microarchitectures; multi-cycle microarchitectures; microprogrammed microarchitectures; pipelining; issues in pipelining: control and data dependence handling, state maintenance and recovery, ...
    Recap of two weeks and the last lecture: computer architecture today and basics (Lectures 1 and 2); fundamental concepts (Lecture 3); ISA basics and tradeoffs (Lectures 3 and 4). Last lecture, ISA tradeoffs continued plus the MIPS ISA: instruction length; uniform vs. non-uniform decode; number of registers; addressing modes; aligned vs. unaligned access; RISC vs. CISC properties; MIPS ISA overview.
    Assignment for you (not to be turned in): as you learn the MIPS ISA, think about what tradeoffs the designers have made in terms of the ISA properties we talked about, and think about the pros and cons of the design choices, in comparison to ARM and Alpha, and in comparison to x86 and VAX. Also think about the potential mistakes: branch delay slot? load delay slot? no FP, no multiply in the initial MIPS?
    Food for thought: how would you design a new ISA? Where would you place it? What design choices would you make in terms of ISA properties? What would be the first question you ask in this process? "What is my design point?"
    Review, other example ISA-level tradeoffs: condition codes vs. not; VLIW vs. single instruction; SIMD (single instruction, multiple data) vs. SISD; precise vs. imprecise exceptions; virtual memory vs. not; unaligned access vs.
  • Sequential Logic
    6.111 (Fall 2016), Lecture 4: Sequential Logic. Topics: digital state (the D-register); timing constraints for D-registers; specifying registers in Verilog; blocking and nonblocking assignments; examples. Reminder: Lab #2 due Thursday.
    Use explicit port declarations; the order of the ports matters when connecting by position:
        module mux32two (input [31:0] i0, i1, input sel, output [31:0] out);
          assign out = sel ? i1 : i0;
        endmodule
        mux32two adder_mux(.i0(b), .i1(32'd1), .sel(f[0]), .out(addmux_out));
        mux32two adder_mux(b, 32'd1, f[0], addmux_out);
    Verilog summary: Verilog is a hardware description language, not a software program. A convention: lowercase for variables, UPPERCASE for parameters.
        parameter MSB = 7;                   // defines msb as a constant value 7
        parameter E = 25, F = 9;             // defines two constant numbers
        parameter BYTE_SIZE = 8, BYTE_MASK = BYTE_SIZE - 1;
        parameter [31:0] DEC_CONST = 1'b1;   // value converted to 32 bits
        parameter NEWCONST = 3'h4;           // implied range of [2:0]
        parameter NEWCONS = 4;               // implied range of at least [31:0]

        module blob #(parameter WIDTH = 64,  // default width: 64 pixels
                      HEIGHT = 64,           // default height: 64 pixels
                      COLOR = 3'b111)        // default color: white
                     (input [10:0] x, hcount, input [9:0] y, vcount,
                      output reg [2:0] pixel);
        endmodule

        wire a, b, z;               // three 1-bit wires
        wire [31:0] memdata;        // a 32-bit bus
        wire [7:0] b1, b2, b3, b4;  // four 8-bit buses
        wire [WIDTH-1:0] input;     // parameterized bus
    Something we can't build (yet): what if you were given the following design specification: when the button is pushed, (1) turn on the light if it is off, and (2) turn off the light if it is on; the light should change state within a second of the button press. What makes this circuit so different from those we have discussed before?
    Digital state, one model of what we would like to build: a memory device holds the current state and is LOADed with the next state, which combinational logic computes from the input and the current state to produce the output. Plan: build a sequential circuit with stored digital STATE; the memory stores the CURRENT state, produced at its output.
  • Object-Oriented Development for Reconfigurable Architectures
    Object-Oriented Development for Reconfigurable Architectures. A dissertation approved by the Faculty of Mathematics and Computer Science of the Technische Universität Bergakademie Freiberg for the academic degree of Doktor-Ingenieur (Dr.-Ing.), submitted by Dipl.-Inf. (FH) Dominik Fröhlich, born 19 February 1974. Reviewers: Prof. Dr.-Ing. habil. Bernd Steinbach (Freiberg), Prof. Dr.-Ing. Thomas Beierlein (Mittweida), PD Dr.-Ing. habil. Michael Ryba (Osnabrück). Date of award: 20 June 2007. To my parents.
    Abstract: Reconfigurable hardware architectures have been available now for several years. Yet application development for such architectures is still a challenging and error-prone task, since the methods, languages, and tools being used for development are inappropriate to handle the complexity of the problem. This hampers their widespread utilization, despite the numerous advantages offered by this type of architecture in terms of computational power, flexibility, and cost. This thesis introduces a novel approach that tackles the complexity challenge by raising the level of abstraction to system level and increasing the degree of automation. The approach is centered around the paradigms of object-orientation, platforms, and modeling. An application and all platforms being used for its design, implementation, and deployment are modeled with objects using UML and an action language. The application model is then transformed into an implementation, whereby the transformation is steered by the platform models. In this thesis, solutions for the relevant problems behind this approach are discussed. It is shown how UML can be used for complete and precise modeling of applications and platforms. Application development is done at the system level using a set of well-defined, orthogonal platform models.
  • Exploring Applications in CUDA
    Exploring Applications in CUDA. Michael Kubacki, Computer Science and Engineering, University of South Florida, [email protected].
    Abstract: Modern Graphics Processing Units (GPUs) are capable of much more than supporting GUIs and generating 3D graphics. These devices are highly parallel, highly multithreaded multiprocessors harnessing a large amount of floating-point processing power for non-graphics problems. This project is based on experiments in CUDA C. These examples seek to demonstrate the potential speedups offered by CUDA and the ease with which a new programmer can take advantage of such performance gains. Keywords: CUDA, GPGPU, parallel programming, GPU.
    I. Introduction. Traditionally, software applications have been written in a sequential fashion, easily understood by a programmer stepping through the code. Software developers have largely relied on increasing clock frequencies and advances in hardware to simply speed up execution of these programs. [...] or massive parallelization on the GPU. Of course, after such analysis, an efficient implementation is necessary. This paper explores two applications based on the CUDA programming model and a sequential application demonstrating the necessary preparation for a parallel implementation.
    II. Background. A. NVIDIA CUDA programming model. NVIDIA's CUDA (Compute Unified Device Architecture) is a scalable parallel programming model and software platform for the GPU and other parallel processors that allows the programmer to bypass the graphics API and graphics interfaces of the GPU and simply program in C or C++. CUDA uses an SPMD (single-program, multiple-data) style: programs are written for one thread that is instanced and executed by many threads in parallel on the multiple processors of the GPU [7].
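    As an illustration of the SPMD style described above (a single per-thread program instanced across many threads), here is a minimal CUDA sketch using a grid-stride loop, so that a fixed launch configuration covers any data size. It is not taken from the paper; the kernel and all names are illustrative assumptions.

        // SPMD in CUDA: the same per-thread program is instanced for every thread.
        __global__ void scale(float *data, int n, float factor) {
            for (int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's first element
                 i < n;
                 i += blockDim.x * gridDim.x)                    // stride by the total thread count
                data[i] *= factor;
        }

        // Host side (illustrative): 64 blocks of 256 threads process n elements,
        // however large n is.
        //   scale<<<64, 256>>>(d_data, n, 3.0f);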
  • Sequential Code Parallelization for Multi-Core Embedded Systems: a Survey of Models, Algorithms and Tools
    Instituto Tecnológico de Costa Rica, Electronics Engineering School, Master's Program in Electronics Engineering. Sequential Code Parallelization for Multi-core Embedded Systems: A Survey of Models, Algorithms and Tools. Master's thesis in fulfillment of the requirements for the degree of Master of Science in Electronics Engineering, major in Embedded Systems. Submitted by Jorge Alberto Castro Godínez, December 15, 2014.
    I declare that this thesis document has been made entirely by my person, using and applying literature on the subject, and introducing my own knowledge and experimental results. In the cases where I have used literature, I proceeded to indicate the sources by the respective references. Accordingly, I assume full responsibility for this thesis work and the content of this document. Jorge Alberto Castro Godínez. Bühl, Germany, December 15, 2014. Céd.: 1 1236 0930.
    Master's Thesis Evaluation Committee: this Master's thesis is presented to the Evaluation Committee as a requirement to obtain the Master of Science degree from the Instituto Tecnológico de Costa Rica. The members of the Evaluation Committee certify that this Master's thesis has been approved and that it fulfills the requirements set by the Electronics Engineering School. Bühl, Germany, December 15, 2014.
    Abstract: In recent years the industry experienced a shift in the design and manufacture of processors. Multiple-core processors on one single chip started replacing the commonly used single-core processors. This design trend reached the development of Systems-on-Chip, widely used in embedded systems, and turned them into powerful Multiprocessor Systems-on-Chip.
  • Sequential Logic
    Sequential Logic, Introduction to Computer, Yung-Yu Chuang, with slides by Sedgewick & Wayne (introcs.cs.princeton.edu), Nisan & Schocken (www.nand2tetris.org) and Harris & Harris (DDCA).
    Review of combinational circuits. Combinational circuits: the basic abstraction is the switch. In principle, one can build the TOY computer with a combinational circuit: 255 x 16 = 4,080 inputs, hence 2^4080 rows in the truth table, with no simple pattern, and each circuit element used at most once. Sequential circuits reuse circuit elements by storing bits in "memory" (the ALU is combinational; the memory holds state).
    Combinational vs. sequential circuits. Combinational circuits: output determined solely by the inputs; can be drawn with no loops; examples: majority, adder, ALU. Sequential circuits: output determined by the inputs and the previous outputs; examples: memory, program counter, CPU.
    Flip-flop: a small and useful sequential circuit; an abstraction that remembers one bit; the basis of important computer components such as registers, memory, and counters. There are several flavors.
    Relay-based flip-flop. Example: the simplest feedback loop. Two relays A and B, both connected to power, each blocked by the other. The state is determined by whichever switches first; the state is latched, and stable. Characteristic equation: Q = S + R'Q.
    SR flip-flop: two cross-coupled NOR gates; a way to control the feedback loop; an abstraction that "remembers" one bit; the basic building block for memory and registers; equivalently, Q = R'(S + Q). Truth table over the input combinations R S = 00, 01, 10, 11. Caveat: need to deal with switching delay. Truth table and timing diagram; clock; SR flip-flop truth table.
  • Designing Sequential Logic Circuits
    Chapter 7: Designing Sequential Logic Circuits. Implementation techniques for flip-flops, latches, oscillators, pulse generators, and Schmitt triggers; static versus dynamic realization; choosing clocking strategies.
    7.1 Introduction
    7.2 Timing Metrics for Sequential Circuits
    7.3 Classification of Memory Elements
    7.4 Static Latches and Registers
      7.4.1 The Bistability Principle
      7.4.2 SR Flip-Flops
      7.4.3 Multiplexer-Based Latches
      7.4.4 Master-Slave Based Edge-Triggered Register
      7.4.5 Non-ideal Clock Signals
      7.4.6 Low-Voltage Static Latches
    7.5 Dynamic Latches and Registers
      7.5.1 Dynamic Transmission-Gate Based Edge-Triggered Registers
      7.5.2 C2MOS Dynamic Register: A Clock Skew Insensitive Approach
      7.5.3 True Single-Phase Clocked Register (TSPCR)
    7.6 Pulse Registers
    7.7 Sense-Amplifier Based Registers
    7.8 Pipelining: An Approach to Optimize Sequential Circuits
      7.8.1 Latch- vs. Register-Based Pipelines
      7.8.2 NORA-CMOS: A Logic Style for Pipelined Structures
    7.9 Non-Bistable Sequential Circuits
      7.9.1 The Schmitt Trigger
      7.9.2 Monostable Sequential Circuits
      7.9.3 Astable Circuits
    7.10 Perspective: Choosing a Clocking Strategy
    7.11 Summary
    7.12 To Probe Further
    7.13 Exercises and Design Problems
    7.1 Introduction. Combinational logic circuits that were described earlier have the property that the output of a logic block is only a function of the current input values, assuming that enough time has elapsed for the logic gates to settle.