Control-Flow and Low-Level Optimizations

Outline

• Unreachable-Code Elimination
• Straightening
• If and Loop Simplifications
• Loop Inversion and Unswitching
• Branch Optimizations
• Tail Merging (Cross Jumping)
• Conditional Moves
• Dead-Code Elimination
• Branch Prediction
• Peephole Optimization
• Machine Idioms & Instruction Combining

Unreachable-Code Elimination

Unreachable code is code that cannot be executed, regardless of the input data:
– code that is never executable for any input data
– code that has become unreachable due to a previous compiler transformation

Unreachable-code elimination removes this code. It
– reduces the code space
– improves instruction-cache utilization
– enables other control-flow transformations

(Slide figure: a CFG whose entry flows through c = a + b; d = c; e > c; f = c - g; b = c + 1; d = 4 * a; e = d - 7; f = e + 2 to the exit. The blocks containing f = a + c, g = e, a = e + c, h = e + 1, and the test e < a have no path from the entry and are deleted.)
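Finding the first kind of unreachable code is mechanical: any basic block that a reachability walk from the entry never visits can be deleted. Below is a minimal sketch in C; the fixed-size CFG representation (a block array with successor lists) is an assumption made for illustration, not something prescribed by the slides.

    /* Mark every block reachable from the entry, then report the rest. */
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_BLOCKS 64
    #define MAX_SUCCS   2          /* a block ends in a jump or a two-way branch */

    typedef struct {
        int  nsuccs;
        int  succ[MAX_SUCCS];      /* indices of successor blocks */
        bool reachable;
    } Block;

    static Block cfg[MAX_BLOCKS];

    static void mark(int b) {
        if (cfg[b].reachable) return;   /* already visited */
        cfg[b].reachable = true;
        for (int i = 0; i < cfg[b].nsuccs; i++)
            mark(cfg[b].succ[i]);       /* depth-first over successors */
    }

    void eliminate_unreachable(int nblocks, int entry) {
        for (int b = 0; b < nblocks; b++)
            cfg[b].reachable = false;
        mark(entry);
        for (int b = 0; b < nblocks; b++)
            if (!cfg[b].reachable)
                printf("block %d is unreachable: delete it\n", b);
    }

Note that this only catches code with no path from the entry; code made unreachable by, say, a constant-valued branch is exposed only after that branch is folded, which is one reason these transformations enable each other.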
Straightening

Straightening applies to a pair of basic blocks such that the first has no successor other than the second and the second has no predecessor other than the first; the second block's code can then be appended to the first, eliminating the jump between them.

Straightening Example

    L1: ...
        a = b + c
        goto L2
        ...
    L2: b = c * 2
        a = a + 1
        if c < 0 goto L3
    L5: ...

becomes

    L1: ...
        a = b + c
        b = c * 2
        a = a + 1
        if c < 0 goto L3
    L5: ...

Straightening in the presence of fall-throughs is tricky: if other code (say a block L6) used to fall through into L2, that path must be preserved, either by duplicating L2's code or by inserting an explicit jump (e.g. goto L5) where the fall-through used to be.

If Simplifications

If simplifications apply to conditional constructs one or both of whose branches are empty:
– if either the then or the else part of an if-construct is empty, the corresponding branch can be eliminated
– one branch of an if with a constant-valued condition can also be eliminated
– we can also simplify ifs whose condition, C, occurs in the scope of a condition that implies C (and none of the condition's operands has changed value)

If Simplification Example

(Slide figure: inside the true branch of a > d, which sets b = a and c = 4 * b, a later test of a >= d must take its true arm; the test and its false arm d = c are deleted, leaving d = b before e = a + b.)

Loop Simplifications

• A loop whose body is empty can be eliminated if the iteration-control code has no side-effects. (Side-effects might be simple enough that they can be replaced with non-looping code at compile time.)
• If the number of iterations is small enough, a loop can be unrolled into branchless code, and the loop body can then be executed at compile time.

Loop Simplification Example

        s = 0
        i = 0
    L1: if i >= 4 goto L2
        i = i + 1
        s = s + i
        goto L1
    L2: ...

unrolls into four copies of "i = i + 1; s = s + i", and constant propagation and folding then reduce the whole loop to

        i = 4
        s = 10
    L2: ...

Loop Inversion

Loop inversion transforms a while loop into a repeat loop, i.e. it moves the loop-closing test from before the loop to after it.
– Has the advantage that only one branch instruction needs to be executed to close the loop.
– Requires that we determine that the loop is entered at least once!

Loop Inversion Example 1 (loop bounds are known)

    for (i = 0; i < 100; i++) {
        a[i] = i + 1;
    }

is the while loop

    i = 0;
    while (i < 100) {
        a[i] = i + 1;
        i++;
    }

and, because the loop is certainly entered, it inverts to

    i = 0;
    do {
        a[i] = i + 1;
        i++;
    } while (i < 100);

Loop Inversion Example 2 (loop bounds are unknown)

Here an explicit guard must protect the inverted loop:

    for (i = k; i < n; i++) {
        a[i] = i + 1;
    }

becomes

        if (k >= n) goto L;
        i = k;
        do {
            a[i] = i + 1;
            i++;
        } while (i < n);
    L:  ...

Unswitching

Unswitching is a control-flow transformation that moves loop-invariant conditional branches out of loops:

    for (i = 1; i < 100; i++) {
        if (k == 2)
            a[i] = a[i] + 1;
        else
            a[i] = a[i] - 1;
    }

becomes

    if (k == 2) {
        for (i = 1; i < 100; i++)
            a[i] = a[i] + 1;
    } else {
        for (i = 1; i < 100; i++)
            a[i] = a[i] - 1;
    }

Unswitching Example

When the invariant test is only part of the loop's condition, the loop is still duplicated, and the else arm must preserve the loop's remaining effects (here, the final value of i):

    for (i = 1; i < 100; i++) {
        if (k == 2 && a[i] > 0)
            a[i] = a[i] + 1;
    }

becomes

    if (k == 2) {
        for (i = 1; i < 100; i++) {
            if (a[i] > 0)
                a[i] = a[i] + 1;
        }
    } else {
        i = 100;
    }

Branch Optimizations

Branches to branches are remarkably common!
– An unconditional branch to an unconditional branch can be replaced by a branch to the latter's target.
– A conditional branch to an unconditional branch can be replaced by the corresponding conditional branch to the latter branch's target.
– An unconditional branch to a conditional branch can be replaced by a copy of the conditional branch.
– A conditional branch to a conditional branch can be replaced by a conditional branch with the former's test and the latter's target, as long as the latter condition is true whenever the former one is.

Branch Optimization Examples

Since a == 0 implies a >= 0, the first conditional branch can be retargeted directly to L2:

        if a == 0 goto L1
        ...
    L1: if a >= 0 goto L2
        ...
    L2: ...

becomes

        if a == 0 goto L2
        ...
    L1: if a >= 0 goto L2
        ...
    L2: ...

An unconditional branch to the next instruction can simply be deleted, and a conditional branch over an unconditional branch can be replaced by a single branch on the negated condition:

        if a == 0 goto L1
        goto L2
    L1: ...

becomes

        if a != 0 goto L2
    L1: ...
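The first two replacements in the list above amount to chasing a chain of jumps to its final destination before rewriting a branch's target. A small hedged sketch: the target[] array is a made-up encoding in which target[L] is the label that block L immediately jumps to, or L itself if the block does real work; the hop counter guards against self-loops such as L1: goto L1.

    /* Follow a jump-to-jump chain to its final destination.
     * target[L] == L means block L is not a bare jump. */
    int final_target(const int target[], int label, int nlabels) {
        int hops = 0;
        while (target[label] != label && hops++ < nlabels)
            label = target[label];  /* hop to the next jump's target */
        return label;               /* settles inside any cycle after nlabels hops */
    }

A pass would then rewrite every branch and jump so that its label field becomes final_target(target, label, nlabels), realizing the first two rules in one sweep.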
Eliminating Useless Control-Flow

The Problem:
– After optimization, the CFG might contain empty blocks.
– "Empty" blocks still end with either a branch or a jump.
– This produces jumps to jumps, which waste time and space.

The Algorithm (Clean):
– Use four distinct transformations.
– Apply them in a carefully selected order.
– Iterate until done.

Transformation 1: fold a redundant branch. When both sides of a branch in B1 target the same block B2, replace the branch with a jump to B2; this is a simple rewrite of the last operation in B1 (a branch, not a jump) and yields faster, smaller code. How does this happen? By rewriting other branches. How do we recognize it? Check each branch.

Transformation 2: remove an empty block. When an empty B1 ends with a jump to B2, move B1's incoming edges to B2 and delete B1, eliminating an extraneous jump. How does this happen? By eliminating all the operations in B1. How do we recognize it? Test for an empty block.

Transformation 3: coalesce blocks. When B1 ends with a jump to B2 and B2 has exactly one predecessor, combine the two blocks (neither block need be empty), eliminating a jump and a non-empty block. How does this happen? By simplifying the edges out of B1. How do we recognize it? Check the target of the jump.

Transformation 4: hoist a branch from an empty block. When B1 ends with a jump to an empty B2 that itself ends with a branch, copy the branch into the end of B1, eliminating a pointless jump and possibly making B2 unreachable. How do we recognize it? A jump to an empty block.

Putting the transformations together:
– Process the blocks in postorder.
  • Clean up Bi's successors before Bi itself.
  • This simplifies both the implementation and the understanding of the algorithm.
– At each node, apply the transformations in a fixed order:
  • eliminate a redundant branch
  • eliminate an empty block
  • merge a block with its successor
  • hoist a branch from an empty successor
– May need to iterate:
  • postorder ⇒ there are unprocessed successors along back edges
  • the number of iterations can be bounded, but deriving a tight bound is hard
  • postorder must be recomputed between iterations

Tail Merging (Cross Jumping)

Tail merging applies to basic blocks whose last few instructions are identical and that continue to the same location. It replaces the matching instructions of one block with a branch to the corresponding point in the other:

        ...
        r1 = r2 + r3
        r4 = r3 shl 2
        r2 = r2 + 1
        r2 = r4 - r2
        goto L1
        ...
        r5 = r4 - 6
        r4 = r3 shl 2
        r2 = r2 + 1
        r2 = r4 - r2
    L1: ...

becomes

        ...
        r1 = r2 + r3
        goto L2
        ...
        r5 = r4 - 6
    L2: r4 = r3 shl 2
        r2 = r2 + 1
        r2 = r4 - r2
    L1: ...

Conditional Moves

Conditional moves are instructions that copy a source to a target if and only if a specified condition is satisfied. They
– are available in several modern architectures (SPARC-V9, PentiumPro)
– are used to replace simple branching code sequences with non-branching code:

        if a > b goto L1
        max = b
        goto L2
    L1: max = a
    L2: ...

becomes

        t1 = a > b
        max = b
        max = (t1) a

Conditional Moves Help Loop Unrolling

By using conditional move instructions, we can unroll loops containing internal control-flow and end up with "straight-line" code:
– this helps because instruction scheduling is then more effective
– it works if the two instruction blocks of the if are small in size

    for (i = 1; i <= n; i++) {
        x = a[i];
        if (x > 0) u = z * x;
        else u = b[i];
        s = s + u;
    }

becomes

    for (i = 1; i <= n; i++) {
        x = a[i];
        w = z * x;
        u = b[i];
        u = (x > 0) w;
        s = s + u;
    }

Dead-Code Elimination

A variable is dead if it is not used on any path from the location in the code where it is defined to the exit point of the routine. An instruction is dead if it computes values that are not used on any executable path leading from the instruction.

Dead-Code Elimination Example

(Slide figure: a routine that begins with i = 1; j = 2; k = 3; thereafter k is only used to define new values for itself, so all the assignments to k are dead.)
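The definition above calls for global liveness information; as a deliberately simplified sketch, the pass below performs only local dead-code elimination on one basic block, walking backwards from a given live-out set. The Insn layout is an assumption for illustration (every operand is a variable index, and no instruction has side effects); a real pass would compute live-out sets by global liveness analysis.

    #include <stdbool.h>

    typedef struct { int dst, src1, src2; } Insn;   /* dst = src1 op src2 */

    /* On entry, live[v] holds the block's live-out set; on exit it holds
     * the live-in set.  kept[i] records which instructions survive. */
    void local_dce(const Insn insns[], int n, bool live[], bool kept[]) {
        for (int i = n - 1; i >= 0; i--) {
            if (!live[insns[i].dst]) {      /* result never used: dead */
                kept[i] = false;
                continue;
            }
            kept[i] = true;
            live[insns[i].dst]  = false;    /* this definition kills dst */
            live[insns[i].src1] = true;     /* its operands become live */
            live[insns[i].src2] = true;
        }
    }

Because dropped instructions contribute no uses, a self-feeding chain like the slide's k = 3; ...; k = k + 1 disappears in a single backward pass whenever k is not live at the block's exit.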