Partial Redundancy Elimination, Loop Optimization & Lazy Code Motion


Advanced Compiler Techniques, 2005
Erik Stenman, Virtutech
http://lamp.epfl.ch/teaching/advancedCompiler/

Common-Subexpression Elimination (Repetition)

An occurrence of an expression in a program is a common subexpression if there is another occurrence of the expression whose evaluation always precedes this one in execution order, and if the operands of the expression remain unchanged between the two evaluations. Local Common Subexpression Elimination (CSE) keeps track of the set of available expressions within a basic block and replaces instances of them with references to new temporaries that hold their value.

    Before CSE:          After CSE:
    a = (x+y) + z;       ttt = x+y;
    b = a - 1;           a = ttt + z;
    c = x + y;           b = a - 1;
                         c = ttt;

Redundant Expressions

An expression is redundant at a point p if, on every path to p,
1. it is evaluated before reaching p, and
2. none of its constituent values is redefined before p.

Example: if a ← b+c is evaluated on every path reaching a point, and neither b nor c is redefined on the way, then some occurrences of b+c are redundant.

Partially Redundant Expressions

An expression is partially redundant at p if it is redundant along some, but not all, paths reaching p.

Example: if one predecessor of a join point evaluates a ← b+c while the other executes b ← b+1, then not all occurrences of b+c are redundant! Inserting a copy of “a ← b+c” after the definition of b can make the occurrence at the join redundant.

Loop Invariant Expressions

Another example: an expression a ← b+c evaluated inside a loop, with its operands unchanged in the loop, is redundant along the path around the loop but not on loop entry – b+c is partially redundant here.
♦ Loop invariant expressions are partially redundant.
♦ Partial redundancy elimination performs code motion.
♦ The major part of the work is figuring out where to insert operations.

Loop Optimizations

♦ Loop optimization is important because most of the execution time occurs in loops.
♦ First, we will identify loops.
♦ We will study three optimizations:
  ♦ Loop-invariant code motion.
  ♦ Strength reduction.
  ♦ Induction variable elimination.
♦ We will not talk about loop unrolling, which is another important optimization technique.

What is a Loop?

♦ A loop is a set of nodes in the control flow graph.
♦ It has a loop header: a single node that all iterations of the loop go through.
♦ It has a back edge into the header.

Anomalous Situations

♦ Two back edges into one node give two loops with one header; the compiler merges them into one loop.
♦ No loop header, no loop.

Identifying Loops with Dominators

Recall the concept of dominators:
♦ A node n dominates a node m if all paths from the start node to m go through n.
♦ The immediate dominator of m is the last dominator of m on any path from the start node.
♦ A dominator tree is a tree rooted at the start node:
  ♦ Its nodes are the nodes of the control flow graph.
  ♦ There is an edge from d to n if d is the immediate dominator of n.

Defining loops with dominators:
♦ A loop has a unique entry point – the header – and at least one path back to the header.
♦ Find edges whose heads dominate their tails; these edges are the back edges of loops.
♦ Given a back edge n→d:
  ♦ The node d is the loop header.
  ♦ The loop consists of n plus all nodes that can reach n without going through d (all nodes “between” d and n).

Loop Construction Algorithm

    loop(d, n):
        loop = {d}; stack = ∅;
        insert(n);
        while stack not empty do
            m = pop stack;
            for all p ∈ pred(m) do insert(p);

    insert(m):
        if m ∉ loop then
            loop = loop ∪ {m};
            push m onto stack;
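As a concrete illustration, here is a minimal runnable C sketch of the construction above. The CFG representation (fixed-size predecessor arrays) and all names such as natural_loop are assumptions made for this example, not part of the course material:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_NODES 16

    /* CFG given as predecessor lists: pred[m] holds the predecessors of m. */
    static int pred[MAX_NODES][MAX_NODES];
    static int npred[MAX_NODES];

    static bool in_loop[MAX_NODES];
    static int stack[MAX_NODES], sp;

    static void insert(int m) {
        if (!in_loop[m]) {           /* if m ∉ loop */
            in_loop[m] = true;       /* loop = loop ∪ {m} */
            stack[sp++] = m;         /* push m onto stack */
        }
    }

    /* Collect the natural loop of the back edge n -> d. */
    static void natural_loop(int d, int n) {
        sp = 0;
        in_loop[d] = true;           /* loop = {d}; the header is always in the loop */
        insert(n);
        while (sp > 0) {             /* while stack not empty */
            int m = stack[--sp];
            for (int i = 0; i < npred[m]; i++)
                insert(pred[m][i]);  /* add every predecessor of m */
        }
    }

    int main(void) {
        /* Example CFG: 0 -> 1 -> 2 -> 3, back edge 3 -> 1, exit edge 1 -> 4. */
        pred[1][npred[1]++] = 0; pred[1][npred[1]++] = 3;
        pred[2][npred[2]++] = 1;
        pred[3][npred[3]++] = 2;
        pred[4][npred[4]++] = 1;

        natural_loop(1, 3);          /* back edge 3 -> 1 */
        for (int v = 0; v < MAX_NODES; v++)
            if (in_loop[v]) printf("%d ", v);   /* prints: 1 2 3 */
        printf("\n");
        return 0;
    }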
Nested Loops

♦ If two loops do not have the same header, then
  ♦ either one loop (the inner loop) is contained in the other (the outer loop),
  ♦ or the two loops are disjoint.
♦ If two loops have the same header, they are typically unioned and treated as one loop. For example, the two loops {1,2} and {1,3} are unioned into {1,2,3}.

Loop Preheader

♦ Many optimizations stick code before the loop.
♦ Put a special node (the loop preheader) before the loop to hold this code.

Loop Optimizations

♦ Now that we have the loop, we can optimize it!
♦ Loop invariant code motion: move loop invariant code out of the loop, into the preheader.

Loop Invariant Code Motion

If a computation produces the same value in every loop iteration, move it out of the loop.

    Before:
        for i = 1 to N
            x = x + 1
            for j = 1 to N
                a[i,j] = 100*N + 10*i + j + x

    After:
        t1 = 100*N
        for i = 1 to N
            x = x + 1
            t2 = t1 + 10*i + x
            for j = 1 to N
                a[i,j] = t2 + j

Detecting Loop Invariant Code

♦ A statement is loop-invariant if its operands
  ♦ are constant,
  ♦ have all reaching definitions outside the loop, or
  ♦ have exactly one reaching definition, and that definition comes from an invariant statement.
♦ Concept of exit node of a loop: a node with successors outside the loop.
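The following is a compilable C rendering of this transformation; the function names and the fixed N are invented for the illustration, and the 1-based a[i,j] of the pseudocode is mapped onto a 0-based C array:

    #include <stdio.h>

    #define N 4

    /* Before: 100*N is recomputed in every iteration of both loops,
       and 100*N + 10*i + x in every iteration of the j loop. */
    void fill_naive(int a[N][N], int x) {
        for (int i = 1; i <= N; i++) {
            x = x + 1;
            for (int j = 1; j <= N; j++)
                a[i-1][j-1] = 100*N + 10*i + j + x;
        }
    }

    /* After: t1 is invariant in the whole nest, t2 in the inner loop. */
    void fill_hoisted(int a[N][N], int x) {
        int t1 = 100*N;                  /* hoisted out of both loops */
        for (int i = 1; i <= N; i++) {
            x = x + 1;
            int t2 = t1 + 10*i + x;      /* hoisted out of the j loop */
            for (int j = 1; j <= N; j++)
                a[i-1][j-1] = t2 + j;
        }
    }

    int main(void) {
        int a[N][N], b[N][N];
        fill_naive(a, 0);
        fill_hoisted(b, 0);
        printf("%d %d\n", a[0][0], b[0][0]);   /* identical results: 412 412 */
        return 0;
    }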
Loop Invariant Code Detection Algorithm

    for all statements in the loop
        if the operands are constant or have all reaching definitions
           outside the loop, mark the statement as invariant
    do
        for all statements in the loop not already marked invariant
            if the operands are constant, have all reaching definitions
               outside the loop, or have exactly one reaching definition
               coming from an invariant statement
            then mark the statement as invariant
    until there are no more new invariant statements

Loop Invariant Code Motion

Conditions for moving a statement s: x = y+z into the loop preheader:
♦ s dominates all exit nodes of the loop.
  ♦ If it does not, some use after the loop might get the wrong value.
  ♦ Alternate condition: the definition of x from s reaches no use outside the loop (but moving s may then increase run time).
♦ No other statement in the loop assigns to x.
  ♦ If one does, assignments might get reordered.
♦ No use of x in the loop is reached by a definition other than s.
  ♦ If one is, the movement may change the value read by that use.

Order of Statements in the Preheader

Preserve the data dependences of the original program; the order in which statements are discovered by the detection algorithm can be used. For example, when the invariant statements a = b*b and c = a+a are moved out of a loop, a = b*b must be placed in the preheader before c = a+a.

Induction Variables

Example:

    for j = 1 to 100
        *(&A + 4*j) = 202 - 2*j

♦ Basic induction variable: j = 1, 2, 3, 4, …
♦ Induction variable &A + 4*j: &A+4, &A+8, &A+12, &A+16, …

What are Induction Variables?

♦ x is an induction variable of a loop L if
  ♦ the variable changes its value on every iteration of the loop, and
  ♦ its value is a function of the number of iterations of the loop.
♦ In programs, this function is normally a linear function. Example: for a loop index variable j, the function is d + c*j.

Types of Induction Variables

♦ Base induction variable:
  ♦ The only assignments to the variable in the loop are of the form i = i ± c.
♦ Derived induction variables:
  ♦ The value is a linear function of a base induction variable: within the loop, j = c*i + d, where i is a base induction variable.
  ♦ These are very common in array index expressions – an access to a[i] produces code like p = a + 4*i.
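For concreteness, here is a small C sketch of the lowered loop above; the array name A and the 4-byte element size come from the slide, everything else is invented for the example:

    #include <stdio.h>
    #include <stdint.h>

    int32_t A[101];   /* indices 1..100 are used, as in the slide */

    void store_loop(void) {
        for (int j = 1; j <= 100; j++) {      /* j: base induction variable */
            /* p is a derived induction variable: &A + 4*j, linear in j */
            int32_t *p = (int32_t *)((char *)A + 4 * j);
            *p = 202 - 2 * j;                 /* the stored value is linear in j too */
        }
    }

    int main(void) {
        store_loop();
        printf("%d %d\n", A[1], A[100]);      /* prints: 200 2 */
        return 0;
    }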
Strength Reduction for Derived Induction Variables

    Before:
        i = 0
        p = 0
        while i < 10
            i = i + 1
            p = 4 * i
            use of p

    After:
        i = 0
        p = 0
        while i < 10
            i = i + 1
            p = p + 4
            use of p

The multiplication p = 4*i is replaced by the cheaper addition p = p + 4.

Elimination of Superfluous Induction Variables

    Before:
        i = 0
        p = 0
        while i < 10
            i = i + 1
            p = p + 4
            use of p

    After:
        p = 0
        while p < 40
            p = p + 4
            use of p

After strength reduction, i is used only in the loop test; rewriting the test i < 10 as p < 40 makes i superfluous, so it can be eliminated.

Three Algorithms

♦ Detection of induction variables:
  ♦ Find base induction variables.

Output of Induction Variable Detection Algorithm

♦ Set of induction variables:
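Returning to the two transformations above, the following is a self-contained C sketch of both steps; the function names and the use of p as a running sum are assumptions made for the example:

    #include <stdio.h>

    /* Original loop: p is a derived induction variable (p = 4*i). */
    static int sum_original(void) {
        int s = 0;
        int i = 0, p = 0;
        while (i < 10) {
            i = i + 1;
            p = 4 * i;        /* multiplication on every iteration */
            s += p;           /* "use of p" */
        }
        return s;
    }

    /* After strength reduction and induction variable elimination:
       p = 4*i becomes p = p + 4, the test i < 10 becomes p < 40,
       and i disappears entirely. */
    static int sum_optimized(void) {
        int s = 0;
        int p = 0;
        while (p < 40) {
            p = p + 4;
            s += p;
        }
        return s;
    }

    int main(void) {
        printf("%d %d\n", sum_original(), sum_optimized());  /* both print 220 */
        return 0;
    }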