Realizing High IPC Through a Scalable Memory-Latency Tolerant Multipath Microarchitecture


D. Morano, A. Khalafi, D.R. Kaeli, Northeastern University (dmorano, akhalafi, [email protected])
A.K. Uht, University of Rhode Island ([email protected])

Abstract

A microarchitecture is described that achieves high performance on conventional single-threaded program codes without compiler assistance. To obtain high instructions per clock (IPC) for inherently sequential codes (e.g., SpecInt-2000 programs), a large number of instructions must be in flight simultaneously. However, several problems are associated with such microarchitectures, including scalability, issues related to control flow, and memory latency.

Our design investigates how to utilize a large mesh of processing elements in order to execute a single-threaded program. We present a basic overview of our microarchitecture and discuss how it addresses scalability as we attempt to execute many instructions in parallel. The microarchitecture makes use of control and value speculative execution, multipath execution, and a high degree of out-of-order execution to help extract instruction level parallelism. Execution-time predication and time-tags for operands are used for maintaining program order. We provide simulation results for several geometries of our microarchitecture illustrating a range of design tradeoffs. Results are also presented that show the small performance impact over a range of memory system latencies.

1 Introduction

A number of studies into the limits of instruction level parallelism (ILP) have been promising in that they have shown that there is a significant amount of parallelism within typical sequentially oriented single-threaded programs (e.g., SpecInt-2000). The work of Lam and Wilson [8], Uht and Sindagi [18], and Gonzalez and Gonzalez [4] has shown that there exists a great amount of instruction level parallelism that is not being exploited by any existing computer designs. Unfortunately, most of the fine-grained ILP inherent in integer sequential programs spans several basic blocks. Data and control independent instructions, which may exist far ahead in the program instruction stream, need to be speculatively executed to exploit all possible inherent ILP.

A large number of instructions need to be fetched each cycle and executed concurrently in order to achieve this. We need to find the available program ILP at runtime and to provide sufficient hardware to expose, schedule, and otherwise manage the out-of-order speculative execution of control independent instructions. Of course, the microarchitecture also has to provide a means to maintain the architectural program order that is required for proper program execution.

We present a novel microarchitecture in this paper that can be applied to any existing ISA. Our microarchitecture is targeted at obtaining substantial program speedups on integer codes. It can speculatively execute hundreds of instructions ahead in the program instruction stream and thus expose large amounts of inherent ILP. We use multipath execution to cover latencies associated with branch mispredictions. We also take advantage of control and data independent instructions through our use of execution-time predication. Since we provide local buffering (a form of L0 caching) throughout our grid layout, we can satisfy a large number of accesses without accessing higher levels of the memory hierarchy. Further, since our design utilizes a form of value speculation on all operands, we consume the present value of a register even if an outstanding load is pending which targets this register.
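To make this last point concrete, the following C sketch models the intended behavior under register value speculation: a consumer reads the current register contents without stalling on a pending load, and is squashed and re-executed later if the value that eventually arrives differs. All names and structures here are our own illustrative assumptions, not details of the actual design.

    #include <stdbool.h>
    #include <stdint.h>

    /* A register operand as a consuming instruction sees it. */
    typedef struct {
        uint64_t value;        /* current, possibly stale, contents */
        bool     load_pending; /* an outstanding load targets this register */
    } reg_t;

    /* Value speculation: consume the present register value even if a
     * load that writes this register is still in flight. */
    static uint64_t read_operand_speculatively(const reg_t *r) {
        return r->value;       /* no stall, even when r->load_pending is set */
    }

    /* When the load completes, any consumer that read a stale value
     * must be re-executed with the arrived value. */
    static bool must_reexecute(uint64_t value_used, uint64_t value_arrived) {
        return value_used != value_arrived;
    }

The trade-off is the usual one for value speculation: no read ever stalls on a pending load, at the price of occasional squash-and-replay work when the speculated value proves wrong.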
1.1 Related Work

There have been several attempts at substantially increasing program IPC through the exploitation of ILP. The Multiscalar processor architecture [15] attempted to realize substantial IPC speedups over conventional superscalar processors. However, the approach is quite different from ours in that Multiscalar relies on compiler participation where we do not. A notable attempt at realizing high IPC was made by Lipasti and Shen with their Superspeculative architecture [9]. They achieved an IPC of about 7 with realistic hardware assumptions. The Ultrascalar machine [6] achieves asymptotic scalability, but only realizes a small amount of IPC due to its conservative execution model. Nagarajan et al. proposed a Grid Architecture of ALUs connected by an operand network [11]. This has some similarities to our work. However, unlike our work, their microarchitecture relies on the coordinated use of the compiler along with a new ISA to obtain higher IPCs.

The rest of this paper is organized as follows. Section 2 reviews the critical elements of our proposed microarchitecture. Section 3 presents simulation results for a range of machine configurations, and shows the potential impact of multipath execution when applied. We also discuss how our microarchitecture reduces our dependence on the memory system by providing a large amount of local caching on our datapath. We summarize and conclude in Section 4.

2 The Microarchitecture

We have described most of the details of this microarchitecture in [19]. We review those details here at a level that allows the reader to grasp a general understanding as it applies to memory system performance. In addition, this paper presents the microarchitecture with a realistic memory subsystem, and we are therefore better able to evaluate the performance impact of the memory hierarchy.

The microarchitecture is very aggressive in terms of the amount of speculative execution it performs. This is realized through a large amount of scalable execution resources. Resource scalability of the microarchitecture is achieved through its distributed nature along with repeater-like components that limit the maximum bus spans. Contention for major centralized structures is avoided; conventional centralized resources like a register file, reorder buffer, and centralized execution units are eliminated.

The microarchitecture also addresses several issues associated with conditional branches. Spawning alternative speculative paths when encountering conditional branches is done to avoid branch misprediction penalties. Exploitation of control and data independent instructions beyond the join of a hammock branch [3, 17] is also capitalized upon where possible. Choosing which paths in multipath execution should be given priority for machine resources is also addressed by the machine. The predicted program path is referred to as the mainline path, and it is given execution resource priority with respect to any possible alternative paths. Since alternative paths have lower priority than the mainline path, they are referred to as disjoint paths. This strategy for the spawning of disjoint paths results in what is termed disjoint eager execution (DEE); we therefore refer to disjoint paths simply as DEE paths. These terms are taken from Uht [18].

Time-tags are the basic mechanism used in the microarchitecture to order all operands, including memory operands, while they are being used by instructions currently being executed. Time-tags are small values, associated with operands, that serve both as an identifying tag and as a means to order the operands with respect to each other. The Warp Engine [2] also used time-tags to manage large amounts of speculative execution, but our use of them is much simpler than theirs. Time-tags are present on some recent machines (e.g., P6, Pentium 4), though they are used there for purposes different from ours.
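A minimal sketch of how such a time-tag comparison could work is given below. It assumes a single monotonically increasing tag per program-order position, and the names are hypothetical; it is meant only to illustrate how one small value can serve as both an identifier and an ordering key when operands are forwarded among in-flight instructions.

    #include <stdbool.h>
    #include <stdint.h>

    /* An operand stamped with the time-tag of its producer. */
    typedef struct {
        uint32_t time_tag;  /* program-order position of the producing instruction */
        uint64_t value;
    } operand_t;

    /* A consumer at program-order position `consumer_tag`, currently
     * holding input `cur`, snoops a forwarded operand `fwd`.  It accepts
     * the forwarded value only if its producer precedes the consumer in
     * program order and is no earlier than the producer of the value it
     * already holds; that is, `fwd` is the nearest preceding write seen
     * so far. */
    static bool accept_forwarded(uint32_t consumer_tag,
                                 const operand_t *cur,
                                 const operand_t *fwd) {
        return fwd->time_tag < consumer_tag &&
               fwd->time_tag >= cur->time_tag;
    }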
2.1 Microarchitecture Components

Figure 1 provides a high-level view of our microarchitecture. Our microarchitecture shares many basic similarities to most conventional machines. The main memory block, the L2 cache (unified in the present case), and the L1 instruction cache are all rather similar to those in common use. Except for the fact that the main memory, L2 cache, and L1 data cache are all address-interleaved, there is nothing further unique about these components. Our L1 data cache is similar to most conventional data caches except that it also has the ability to track speculative memory writes using a store buffer. Our L1 d-cache shares a similar goal with the Speculative Versioning Cache [5] but is simpler in some respects. Since we allow speculative memory writes to propagate out to the L1 data cache, multiple copies of a speculative write may be present in the L1 data cache store buffer at any time. They are differentiated from each other through the use of time-tags.

Figure 1: High-level View of the Distributed Microarchitecture. Shown are the major hardware components of the microarchitecture. With the exception of the execution window block, this is similar to most conventional microarchitectures.

Figure 2: The Execution Window. Shown is a layout of the Active Stations (AS) and Processing Elements (PE) along with some bus interconnections to implement a large, distributed microarchitecture. Groups of ASes share a PE; a group is called a sharing group.

The i-fetch unit first fetches instructions from i-cache along one or more predicted program paths. Due to our relatively large instruction fetch bandwidth requirement, we allow for the fetching of up to ...

... silicon or circuit board space, from the first. This operation is termed operand forwarding.
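Returning to the store buffer described above: because several speculative versions of the same address can be present at once, a speculative load has to select among them by time-tag. The C sketch below shows one plausible selection rule, assuming the same monotonically increasing tags as before; the entry layout and routine are illustrative assumptions, not the actual hardware organization.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* One speculative write held in the L1 d-cache store buffer. */
    typedef struct {
        bool     valid;
        uint32_t time_tag;  /* program-order position of the store */
        uint64_t addr;
        uint64_t data;
    } sb_entry_t;

    /* Several versions of the same address may coexist.  A speculative
     * load tagged `load_tag` must observe the youngest store to that
     * address that still precedes it in program order. */
    static const sb_entry_t *select_version(const sb_entry_t *buf, size_t n,
                                            uint64_t addr, uint32_t load_tag) {
        const sb_entry_t *best = NULL;
        for (size_t i = 0; i < n; i++) {
            if (buf[i].valid && buf[i].addr == addr &&
                buf[i].time_tag < load_tag &&
                (best == NULL || buf[i].time_tag > best->time_tag)) {
                best = &buf[i];
            }
        }
        return best;  /* NULL: no speculative version; read the cache proper */
    }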