COS 598C – Advanced Compilers


Lecture 12: Optimization
Prof. David August, Department of Computer Science, Princeton University

Where are we?
• Analysis
  - Control Flow/Predicate
  - Dataflow
  - SSA
• Optimization

Optimization
• Make the code run faster on the target processor
• My favorite topic!!
• Anything goes
  - Look at benchmark kernels, what's the bottleneck??
  - Invent your own optimizations (easier and harder than you think)
• Classes of optimization
  1. Classical (machine independent)
     - Reducing operation count (redundancy elimination)
     - Simplifying operations
     - Generally good for any kind of machine
  2. Machine specific
     - Take advantage of specialized hardware features
  3. ILP enhancing
     - Increasing parallelism
     - Possibly increases the number of instructions

Classical Optimizations
• Operation-level – 1 operation in isolation
  - Constant folding, strength reduction
  - Dead code elimination (global, but 1 op at a time)
• Local/Global – pairs of operations
  - Constant propagation
  - Forward copy propagation
  - Backward copy propagation
  - CSE
  - Constant combining
  - Operation folding
  - Peephole optimizations
• Loop – body of a loop
  - Invariant code removal
  - Global variable migration
  - Induction variable strength reduction
  - Induction variable elimination

Caveat
• A traditional compiler class covers sophisticated implementations of optimizations and efficient algorithms, and may spend an entire class on 1 optimization
• For this class – go over the concepts of each optimization
  - What it is
  - When it can be applied (the set of conditions that must be satisfied)

Static Single Assignment (SSA)

Dominance Property of SSA

Dead Code Elimination
• Remove any operation whose result is never consumed
• Rules: X can be deleted if
  - X is not a store or a branch
  - the DU chain of X is empty, or X's dest register is not live
• This misses some dead code!! Especially in loops
• Critical operation: a store or branch operation
  - Any operation that does not directly or indirectly feed a critical operation is dead
  - Trace UD chains backwards from critical operations
  - Any op not visited is dead
• Example:
  r1 = 3
  r2 = 10
  r4 = r4 + 1
  r7 = r1 * r4
  r2 = 0
  r3 = r3 + 1
  r3 = r2 + r1
  store (r1, r3)

Constant Folding
• Simplify 1 operation based on the values of its src operands
  - Constant propagation creates opportunities for this
• All constant operands: evaluate the op, replace with a move
  - r1 = 3 * 4 → r1 = 12
  - r1 = 3 / 0 → ??? Don't evaluate excepting ops! (and what about floating point?)
• Evaluate conditional branch, replace with BRU or a noop
  - if (1 < 2) goto BB2 → BRU BB2
  - if (1 > 2) goto BB2 → convert to a noop
• Algebraic identities
  - r1 = r2 + 0, r2 – 0, r2 | 0, r2 ^ 0, r2 << 0, r2 >> 0 → r1 = r2
  - r1 = 0 * r2, 0 / r2, 0 & r2 → r1 = 0
  - r1 = r2 * 1, r2 / 1 → r1 = r2

Strength Reduction
• Replace expensive ops with cheaper ones
  - Constant propagation creates opportunities for this
• Power of 2 constants
  - Multiply by a power of 2: replace with a left shift, r1 = r2 * 8 → r1 = r2 << 3
  - Divide by a power of 2: replace with a right shift, r1 = r2 / 4 → r1 = r2 >> 2
  - Remainder by a power of 2: replace with a logical and, r1 = r2 REM 16 → r1 = r2 & 15
• More exotic
  - Replace multiply by a constant with a sequence of shifts and adds/subs
  - r1 = r2 * 6 → r100 = r2 << 2; r101 = r2 << 1; r1 = r100 + r101
  - r1 = r2 * 7 → r100 = r2 << 3; r1 = r100 – r2
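The folding and strength-reduction rules above are mechanical rewrites, so they are easy to prototype. Below is a minimal sketch over an assumed (dest, op, a, b) tuple IR; the IR shape and helper names are inventions for this example, not the lecture's infrastructure, and it deliberately leaves excepting ops such as divide-by-zero untouched, as the slide warns.

```python
# Minimal sketch: constant folding plus power-of-two strength reduction on a
# toy tuple IR (dest, op, a, b). Operands are ints (literals) or register
# name strings. Illustrative only; a real compiler would also guard
# signedness (shift vs. signed divide) and overflow behavior.
def is_pow2(n):
    return isinstance(n, int) and n > 1 and (n & (n - 1)) == 0

def simplify(dest, op, a, b):
    lits = isinstance(a, int) and isinstance(b, int)
    if lits and op == 'add':
        return (dest, 'mov', a + b, None)            # constant folding
    if lits and op == 'mul':
        return (dest, 'mov', a * b, None)
    # never fold excepting ops: 3 / 0 must trap at run time, not compile time
    if op == 'mul' and (a == 0 or b == 0):
        return (dest, 'mov', 0, None)                # r1 = 0 * r2 -> r1 = 0
    if op == 'mul' and is_pow2(b):
        return (dest, 'shl', a, b.bit_length() - 1)  # r1 = r2 * 8 -> r2 << 3
    if op == 'div' and is_pow2(b):
        return (dest, 'shr', a, b.bit_length() - 1)  # r1 = r2 / 4 -> r2 >> 2
    if op == 'rem' and is_pow2(b):
        return (dest, 'and', a, b - 1)               # r1 = r2 REM 16 -> r2 & 15
    if op in ('add', 'sub', 'or', 'xor', 'shl', 'shr') and b == 0:
        return (dest, 'mov', a, None)                # algebraic identities
    if op in ('mul', 'div') and b == 1:
        return (dest, 'mov', a, None)
    return (dest, op, a, b)

print(simplify('r1', 'mul', 3, 4))      # ('r1', 'mov', 12, None)
print(simplify('r1', 'mul', 'r2', 8))   # ('r1', 'shl', 'r2', 3)
print(simplify('r1', 'rem', 'r2', 16))  # ('r1', 'and', 'r2', 15)
```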
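The critical-operation formulation of dead code elimination from the slides above can be sketched the same way: mark stores and branches, trace UD chains backwards, and sweep everything unvisited. This single-basic-block sketch is an illustrative assumption (the Instr record is invented, and a global version would trace real UD chains across blocks rather than scanning upward):

```python
# Minimal sketch of mark-and-sweep DCE: stores and branches are critical;
# anything that does not (transitively) feed a critical op is dead.
from dataclasses import dataclass

@dataclass
class Instr:
    dest: str | None   # register written; None for stores/branches
    op: str            # 'mov', 'add', 'mul', 'store', 'br', ...
    srcs: tuple        # operands: register names (str) or literal ints

def eliminate_dead_code(block):
    marked = set()
    worklist = [i for i, ins in enumerate(block)
                if ins.op in ('store', 'br')]       # critical operations
    while worklist:
        i = worklist.pop()
        if i in marked:
            continue
        marked.add(i)
        for src in block[i].srcs:
            if isinstance(src, str):                # follow the UD chain:
                for j in range(i - 1, -1, -1):      # nearest def above i
                    if block[j].dest == src:
                        worklist.append(j)
                        break
    return [ins for i, ins in enumerate(block) if i in marked]

# The slide's example: only the store's backward slice survives.
code = [
    Instr('r1', 'mov', (3,)),
    Instr('r2', 'mov', (10,)),         # dead: killed by r2 = 0 below
    Instr('r4', 'add', ('r4', 1)),     # dead: never reaches the store
    Instr('r7', 'mul', ('r1', 'r4')),  # dead: r7 is never consumed
    Instr('r2', 'mov', (0,)),
    Instr('r3', 'add', ('r3', 1)),     # dead: killed by the next def of r3
    Instr('r3', 'add', ('r2', 'r1')),
    Instr(None, 'store', ('r1', 'r3')),
]
for ins in eliminate_dead_code(code):
    print(ins)
```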
Class Problem
Optimize this by applying:
1. constant folding
2. strength reduction
3. dead code elimination

  r1 = 0
  r1 = 5
  r2 = r1 + r3
  r4 = r1 | -1
  r7 = r1 * 4
  r6 = r1
  r1 = r1 + r2
  r7 = r1 + r4
  r3 = 8 * r6
  r3 = 8 / r6
  r3 = r3 + r2
  r8 = r1 + 3
  r2 = r2 + r1
  r6 = r7 * r6
  r9 = r1 + r11
  r1 = r1 + 1
  store (r1, r3)

Constant Propagation
• Forward propagation of moves of the form
  - rx = L (where L is a literal)
• Maximally propagate
• Assume no instruction encoding restrictions
• When is it legal?
  - SRC: the literal is a hard-coded constant, so never a problem
  - DEST: must be available
    - Guaranteed to reach
    - "May reach" is not good enough

Simple Constant Propagation
• Consider 2 ops, X and Y, in a BB, with X before Y
  1. X is a move
  2. src1(X) is a literal
  3. Y consumes dest(X)
  4. There is no definition of dest(X) between X and Y
  5. No danger between X and Y
     - When dest(X) is a Macro reg, a BRL destroys the value

Local Constant Propagation – Example
  r1 = 5
  r2 = '_x'
  r3 = 7
  r4 = r4 + r1
  r1 = r1 + r2
  r1 = r1 + 1
  r3 = 12
  r8 = r1 - r2
  r9 = r3 + r5
  r3 = r2 + 1
  r10 = r3 - r1

Global Constant Propagation
• Consider 2 ops, X and Y, in different BBs
  1. X is a move
  2. src1(X) is a literal
  3. Y consumes dest(X)
  4. X is in a_in(BB(Y))
  5. dest(X) is not modified between the top of BB(Y) and Y
  6. No danger between X and Y
     - When dest(X) is a Macro reg, a BRL destroys the value
• Example:
  r1 = 5
  r2 = '_x'
  r1 = r1 + r2
  r7 = r1 - r2
  r2 = 0
  r8 = r1 * r2
  r9 = r1 + r2

Class Problem
Optimize this by applying:
1. constant propagation
2. constant folding
3. strength reduction
4. dead code elimination

  r1 = 0
  r2 = 10
  r4 = 1
  r7 = r1 * 4
  r6 = 8
  r3 = r4 * r6
  r3 = r2 / r6
  r3 = r3 + r2
  r2 = r2 + r1
  r6 = r7 * r6
  r1 = r1 + 1
  store (r1, r3)
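The local rules translate almost line-for-line into code. In this minimal sketch over the same assumed (dest, op, srcs) tuple IR, rules 1-2 decide when a fact is recorded, rule 3 is the substitution, rule 4 is the kill on redefinition, and the "danger" of rule 5 is modeled crudely by clearing everything at a call:

```python
# Minimal sketch of local constant propagation over one basic block.
# The tuple IR and the 'call' treatment are illustrative assumptions.
def propagate_constants(block):
    env = {}                                       # register -> known literal
    out = []
    for dest, op, srcs in block:
        srcs = tuple(env.get(s, s) for s in srcs)  # rule 3: substitute into Y
        if op == 'call':
            env.clear()                            # rule 5: calls may clobber Macro regs
        elif op == 'mov' and isinstance(srcs[0], int):
            env[dest] = srcs[0]                    # rules 1-2: X moves a literal
        elif dest is not None:
            env.pop(dest, None)                    # rule 4: redefinition kills the fact
        out.append((dest, op, srcs))
    return out

block = [
    ('r1', 'mov', (5,)),
    ('r4', 'add', ('r4', 'r1')),  # becomes r4 = r4 + 5
    ('r1', 'add', ('r1', 1)),     # consumes 5, then kills the fact about r1
    ('r8', 'sub', ('r1', 'r2')),  # r1 no longer known: left alone
]
print(propagate_constants(block))
```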
Forward Copy Propagation
• Forward propagation of the RHS of moves
  r1 = r2
  …
  r4 = r1 + 1 → r4 = r2 + 1
  r4 = r1 → r4 = r2 (the move r1 = r2 may then become a noop)
• Benefits
  - Reduce chains of dependences
  - Eliminate the move
• Rules (ops X and Y)
  - X is a move
  - src1(X) is a register
  - Y consumes dest(X)
  - X.dest is an available def at Y
  - X.src1 is an available expr at Y

Backward Copy Propagation
• Backward propagation of the LHS of moves
  r1 = r2 + r3 → r4 = r2 + r3
  …
  r5 = r1 + r6 → r5 = r4 + r6
  r4 = r1 → noop
• Rules (ops X and Y in same BB)
  - dest(X) is a register
  - dest(X) not live out of BB(X)
  - Y is a move
  - dest(Y) is a register
  - Y consumes dest(X)
  - dest(Y) not consumed in (X…Y)
  - dest(Y) not defined in (X…Y)
  - There are no uses of dest(X) after the first redefinition of dest(Y)
• Example code:
  r3 = r4
  r1 = r8 + r9
  r2 = r9 + r1
  r6 = r2 + 1
  r2 = 0
  r6 = r3 + 1
  r9 = r1
  r10 = r6
  r5 = r6 + 1
  r4 = 0
  r5 = r2 + r3
  r8 = r2 + r7

CSE – Common Subexpression Elimination
• Eliminate recomputation of an expression by reusing the previous result
  r1 = r2 * r3   (insert r100 = r1 after it)
  …
  r4 = r2 * r3 → r4 = r100
• Benefits
  - Reduce work
  - Moves can get copy propagated
• Rules (ops X and Y)
  - X and Y have the same opcode
  - src(X) = src(Y), for all srcs
  - expr(X) is available at Y
  - If X is a load, then there is no store that may write to address(X) along any path between X and Y
• If the op is a load, call it redundant load elimination rather than CSE

Class Problem
Optimize this by applying:
1. constant propagation
2. constant folding
3. strength reduction
4. dead code elimination
5. forward copy propagation
6. backward copy propagation
7. CSE

  r1 = 9
  r1 = r2 * r6
  r4 = 4
  r3 = r4 / r7
  r5 = 0
  r6 = 16
  r2 = r3 * r4
  r8 = r2 + r5
  r2 = r2 + 1
  r6 = r3 * 7
  r9 = r3
  r7 = load(r2)
  r5 = r9 * r4
  r3 = load(r2)
  r5 = r2 * r6
  r10 = r3 / r6
  r8 = r4 / r7
  store (r8, r7)
  r9 = r3 * 7
  r11 = r2
  r12 = load(r11)
  store (r12, r3)

Constant Combining
• Combine 2 dependent ops into 1 by combining the literals
  r1 = r2 + 4
  …
  r5 = r1 - 9 → r5 = r2 - 5
• First op often becomes dead
• Rules (ops X and Y in same BB)
  - X is of the form rx +/- K
  - dest(X) != src1(X)
  - Y is of the form ry +/- K
  - Y consumes dest(X)
  - src1(X) not modified in (X…Y)
  - X and Y can be merged
• Example code:
  r7 = r1 - 3
  r8 = r7 + 5
  r5 = r6 << 1
  r7 = r5 + r8

Operation Folding
• Combine 2 dependent ops into 1 complex op
• Classic example is MPYADD
  r1 = r2 * r3
  …
  r5 = r1 + r4 → r5 = r2 * r3 + r4
• First op often becomes dead
• Borders on a machine dependent optimization (often it is!!)
• Rules (ops X and Y in same BB)
  - X is an arithmetic operation (comparison also ok)
  - dest(X) != any src(X)
  - Y is an arithmetic operation
  - Y consumes dest(X)
  - src(X) not modified in (X…Y)
• Example code:
  r1 = r2 & 4
  r3 = r1 < 0
  r3 = r1 ^ -1
  r2 = r3 + 6
  r2 = r3 < 6
  r4 = r2 == 0
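Circling back to the pairwise transformations: forward copy propagation is the easiest to sketch, because within one block "X.dest is an available def at Y and X.src1 is an available expr at Y" reduces to "neither register has been redefined between the move and its use". A minimal sketch under the same assumed tuple IR:

```python
# Minimal sketch of forward copy propagation within one basic block: after a
# move X: d = s, later uses of d are rewritten to s until either d or s is
# redefined. The tuple IR is an illustrative assumption.
def forward_copy_prop(block):
    copies = {}                               # dest -> src of a still-valid move
    out = []
    for dest, op, srcs in block:
        srcs = tuple(copies.get(s, s) for s in srcs)   # Y consumes dest(X)
        # a redefinition invalidates every copy that mentions the register
        copies = {d: s for d, s in copies.items() if dest not in (d, s)}
        if op == 'mov' and isinstance(srcs[0], str):
            copies[dest] = srcs[0]            # record X: src1(X) is a register
        out.append((dest, op, srcs))
    return out

block = [
    ('r1', 'mov', ('r2',)),
    ('r4', 'add', ('r1', 1)),  # becomes r4 = r2 + 1; the move may now be dead
    ('r2', 'mov', (0,)),       # r2 redefined: the copy r1 = r2 is invalidated
    ('r6', 'add', ('r1', 1)),  # must keep r1 here
]
print(forward_copy_prop(block))
```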
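Finally, local CSE: track which (opcode, sources) expressions are still available and in which register, kill entries whose sources get redefined, and conservatively kill available loads at any store, the redundant-load-elimination case from the CSE slide. Unlike the slides, this sketch reuses X's own destination register instead of inserting a fresh r100; the kill rule makes that safe within a block. The names and tuple IR are assumptions for illustration:

```python
# Minimal sketch of local common subexpression elimination.
def local_cse(block):
    avail = {}                                 # (op, srcs) -> register holding it
    out = []
    for dest, op, srcs in block:
        if op == 'store':
            # a store may write anywhere: conservatively kill available loads
            avail = {k: v for k, v in avail.items() if k[0] != 'load'}
            out.append((dest, op, srcs))
            continue
        if (op, srcs) in avail:
            op, srcs = 'mov', (avail[(op, srcs)],)   # Y reuses X's result
        # redefining dest kills expressions computed into or from it
        avail = {k: v for k, v in avail.items()
                 if v != dest and dest not in k[1]}
        if op != 'mov':
            avail[(op, srcs)] = dest
        out.append((dest, op, srcs))
    return out

block = [
    ('r1', 'mul', ('r2', 'r3')),
    ('r7', 'load', ('r2',)),
    ('r4', 'add', ('r1', 'r7')),
    ('r5', 'mul', ('r2', 'r3')),   # same opcode, same srcs: becomes r5 = r1
    ('r8', 'load', ('r2',)),       # redundant load elimination: r8 = r7
    (None, 'store', ('r9', 'r4')), # kills the available loads
    ('r10', 'load', ('r2',)),      # must reload after the store
]
for ins in local_cse(block):
    print(ins)
```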