Optimizing Compilers: Compiler Structure


Compiler structure

• Parser: token stream → syntax tree
• Semantic analysis (eg, type checking): syntax tree → syntax tree
• Intermediate code generator: syntax tree → low-level IR (eg, canonical trees/tuples)
• Optimizer: low-level IR → low-level IR
• Machine code generator: low-level IR → machine code

Potential optimizations

Source-language (AST):
• constant bounds in loops/arrays
• loop unrolling
• suppressing run-time checks
• enable later optimisations

IR (local and global):
• CSE elimination
• live variable analysis
• code hoisting
• enable later optimisations

Code-generation (machine code):
• register allocation
• instruction scheduling
• peephole optimization

Copyright © 2007 by Antony L. Hosking. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permission to publish from [email protected].

Optimization

Goal: produce fast code
• What is optimality?
• Problems are often hard
  – many are intractable or even undecidable
  – many are NP-complete
• Which optimizations should be used?
• Many optimizations overlap or interact

Definition: An optimization is a transformation that is expected to:
• improve the running time of a program
• or decrease its space requirements

The point:
• "improved" code, not "optimal" code
• sometimes produces worse code
• range of speedup might be from 1.000001 to 4 (or more)

Machine-independent transformations
• applicable across a broad range of machines
• remove redundant computations
• move evaluation to a less frequently executed place
• specialize some general-purpose code
• find useless code and remove it
• expose opportunities for other optimizations

Machine-dependent transformations
• capitalize on machine-specific properties
• improve the mapping from IR onto the machine
• replace a costly operation with a cheaper one
• hide latency
• replace a sequence of instructions with a more powerful one (use "exotic" instructions)

A classical distinction

The distinction is not always clear. Consider replacing a multiply with shifts and adds: the rewrite itself is machine-independent, but its profitability depends on the relative cost of those instructions on the target (see the sketch below).
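As an illustration (a minimal C sketch, not from the slides), here is the multiply-by-constant rewrite written as a pair of source-level functions; the function names are made up for the example. Whether a compiler should apply the rewrite is exactly the machine-dependent question: it pays off only if a shift plus an add is cheaper than an integer multiply on the target.

    #include <assert.h>

    /* Strength-reduce x * 10 into shifts and adds: 10*x = 8*x + 2*x.
     * Unsigned arithmetic keeps the rewrite exact (both sides wrap mod 2^N). */
    static unsigned times10_mul(unsigned x)   { return x * 10u; }
    static unsigned times10_shift(unsigned x) { return (x << 3) + (x << 1); }

    int main(void) {
        /* spot-check that the transformed code computes the same results */
        for (unsigned x = 0; x < 100000u; x++)
            assert(times10_mul(x) == times10_shift(x));
        return 0;
    }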
Optimization

Desirable properties of an optimizing compiler:
• code at least as good as an assembler programmer's
• stable, robust performance (predictability)
• architectural strengths fully exploited
• architectural weaknesses fully hidden
• broad, efficient support for language features
• instantaneous compiles

Unfortunately, modern compilers often drop the ball.

Good compilers are crafted, not assembled:
• consistent philosophy
• careful selection of transformations
• thorough application
• coordinate transformations and data structures
• attention to results (code, time, space)

Compilers are engineered objects:
• minimize running time of compiled code
• minimize compile time
• use reasonable compile-time space (a serious problem)

Thus, results are sometimes unexpected.

Scope of optimization

Local (single block):
• confined to straight-line code
• simplest to analyse
• time frame: '60s to present, particularly now

Intraprocedural (global):
• consider the whole procedure
• What do we need to optimize an entire procedure?
• classical data-flow analysis, dependence analysis
• time frame: '70s to present

Interprocedural (whole program):
• analyse whole programs
• What do we need to optimize an entire program?
• less information is discernible
• time frame: late '70s to present, particularly now

Optimization

Three considerations arise in applying a transformation:
• safety
• profitability
• opportunity

We need a clear understanding of these issues:
• the literature often hides them
• every discussion should list them clearly

Safety

Fundamental question: Does the transformation change the results of executing the code?
  yes ⇒ don't do it!
  no ⇒ it is safe

Compile-time analysis:
• may be safe in all cases (loop unrolling)
• analysis may be simple (DAGs and CSEs)
• may require complex reasoning (data-flow analysis)

Profitability

Fundamental question: Is there a reasonable expectation that the transformation will be an improvement?
  yes ⇒ do it!
  no ⇒ don't do it

Compile-time estimation:
• always profitable
• heuristic rules
• compute benefit (rare)

Opportunity

Fundamental question: Can we efficiently locate sites for applying the transformation?
  yes ⇒ compilation time won't suffer
  no ⇒ better be highly profitable

Issues:
• provides a framework for applying the transformation
• systematically find all sites
• update safety information to reflect previous changes
• order of application (hard)

Optimization

Successful optimization requires:
• tests for safety and profitability that are O(1) per transformation, or O(n) for the whole routine (maybe n log n)
• profit is local improvement × executions
  ⇒ focus on loops:
  – loop unrolling
  – factoring loop invariants
  – strength reduction
• want to minimize side-effects like code growth

Example: loop unrolling

Idea: reduce loop overhead by creating multiple successive copies of the loop's body and increasing the increment appropriately.

Safety: always safe
Profitability: reduces overhead (subtle secondary effects, eg, instruction cache blowout)
Opportunity: loops

Unrolling is easy to understand and perform. It is the most overstudied example in the literature.

Matrix-matrix multiply:

    do i ← 1, n, 1
      do j ← 1, n, 1
        c(i,j) ← 0
        do k ← 1, n, 1
          c(i,j) ← c(i,j) + a(i,k) * b(k,j)

• 2n³ flops, n³ loop increments and branches
• each iteration does 2 loads and 2 flops

Matrix-matrix multiply, k loop unrolled by 4 (assume a 4-word cache line):

    do i ← 1, n, 1
      do j ← 1, n, 1
        c(i,j) ← 0
        do k ← 1, n, 4
          c(i,j) ← c(i,j) + a(i,k)   * b(k,j)
          c(i,j) ← c(i,j) + a(i,k+1) * b(k+1,j)
          c(i,j) ← c(i,j) + a(i,k+2) * b(k+2,j)
          c(i,j) ← c(i,j) + a(i,k+3) * b(k+3,j)

• 2n³ flops, n³/4 loop increments and branches
• each iteration does 8 loads and 8 flops
• memory traffic is better
  – c(i,j) is reused (put it in a register)
  – a(i,k) references are from cache
  – b(k,j) is problematic
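The pseudocode above assumes n is a multiple of 4. A minimal C sketch of the same k-loop unrolling follows, with a cleanup loop for the leftover iterations; the row-major layout and the function name are assumptions for the example, not from the slides.

    /* Unroll the k loop of C = A*B by 4, for n x n row-major matrices.
     * The running sum keeps c(i,j) in a register; the cleanup loop
     * handles the n % 4 iterations the unrolled body cannot cover. */
    void matmul_unroll4(int n, const double *a, const double *b, double *c)
    {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                double sum = 0.0;
                int k = 0;
                for (; k + 3 < n; k += 4) {   /* one increment/branch per 4 iterations */
                    sum += a[i*n + k]     * b[k*n + j];
                    sum += a[i*n + k + 1] * b[(k+1)*n + j];
                    sum += a[i*n + k + 2] * b[(k+2)*n + j];
                    sum += a[i*n + k + 3] * b[(k+3)*n + j];
                }
                for (; k < n; k++)            /* cleanup when 4 does not divide n */
                    sum += a[i*n + k] * b[k*n + j];
                c[i*n + j] = sum;
            }
        }
    }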
Matrix-matrix multiply, rewritten to improve traffic on b:

    do j ← 1, n, 1
      do i ← 1, n, 4
        c(i,j) ← 0
        do k ← 1, n, 4
          c(i,j)   ← c(i,j)   + a(i,k)     * b(k,j)
                              + a(i,k+1)   * b(k+1,j)
                              + a(i,k+2)   * b(k+2,j)
                              + a(i,k+3)   * b(k+3,j)
          c(i+1,j) ← c(i+1,j) + a(i+1,k)   * b(k,j)
                              + a(i+1,k+1) * b(k+1,j)
                              + a(i+1,k+2) * b(k+2,j)
                              + a(i+1,k+3) * b(k+3,j)
          c(i+2,j) ← c(i+2,j) + a(i+2,k)   * b(k,j)
                              + a(i+2,k+1) * b(k+1,j)
                              + a(i+2,k+2) * b(k+2,j)
                              + a(i+2,k+3) * b(k+3,j)
          c(i+3,j) ← c(i+3,j) + a(i+3,k)   * b(k,j)
                              + a(i+3,k+1) * b(k+1,j)
                              + a(i+3,k+2) * b(k+2,j)
                              + a(i+3,k+3) * b(k+3,j)

What happened?
• interchanged the i and j loops
• unrolled the i loop
• fused the inner loops
• 2n³ flops, n³/16 loop increments and branches
• the first assignment does 8 loads and 8 flops
• the 2nd through 4th do 4 loads and 8 flops
• memory traffic is better
  – c(i,j) is reused (register)
  – a(i,k) references are from cache
  – b(k,j) is reused (register)

It is not as easy as it looks:
Safety: loop interchange? loop unrolling? loop fusion?
Profitability: machine dependent (mostly)
Opportunity: find memory-bound loop nests

Summary:
• chance for large improvement
• answering the fundamentals is tough
• resulting code is ugly

Matrix-matrix multiply is everyone's favorite example.

Loop optimizations: factoring loop-invariants

Loop invariants: expressions constant within the loop body
Relevant variables: those used to compute an expression

Opportunity:
1. identify the variables defined in the body of the loop (LoopDef)
2. loop invariants have no relevant variables in LoopDef
3. assign each loop-invariant to a temporary in the loop header
4. use the temporary in the loop body

Safety: a loop-invariant expression may throw an exception early
Profitability:
• the loop may execute 0 times
• a loop-invariant may not be needed on every path through the loop body

Example: factoring loop invariants

    for i := 1 to 100 do          (* LoopDef = {i, j, k, A} *)
      for j := 1 to 100 do        (* LoopDef = {j, k, A} *)
        for k := 1 to 100 do      (* LoopDef = {k, A} *)
          A[i, j, k] := i * j * k;

• 3 million index operations
• 2 million multiplications

Factoring the inner loop:

    for i := 1 to 100 do
      for j := 1 to 100 do
        t1 := ADR(A[i][j]);
        t2 := i * j;
        for k := 1 to 100 do
          t1[k] := t2 * k;

And the second loop:

    for i := 1 to 100 do
      t3 := ADR(A[i]);
      for j := 1 to 100 do
        t1 := ADR(t3[j]);
        t2 := i * j;
        for k := 1 to 100 do
          t1[k] := t2 * k;

Strength reduction in loops

Loop induction variable: incremented on each iteration, taking the values i0, i0 + 1, i0 + 2, ...

Induction expression: i*c1 + c2, where c1, c2 are loop invariant; it takes the values i0*c1 + c2, (i0 + 1)*c1 + c2, (i0 + 2)*c1 + c2, ...

Example: strength reduction in loops

From the previous example:

    for i := 1 to 100 do
      t3 := ADR(A[i]);
      t4 := i;                    (* i * j0 = i *)
      ...
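The example above is cut off in this copy, so here is a self-contained C sketch of the idea on an illustrative loop of the same shape (the names c1, c2 and the bound N are assumptions, not from the slides): the induction expression i*c1 + c2 is replaced by a temporary that is bumped by c1 each iteration, trading a multiply per iteration for an add.

    #include <assert.h>

    enum { N = 100 };

    /* Before strength reduction: a multiply on every iteration. */
    static void fill_before(int *a, int c1, int c2)
    {
        for (int i = 0; i < N; i++)
            a[i] = i * c1 + c2;
    }

    /* After strength reduction: t holds the induction expression
     * i*c1 + c2 and is advanced by c1 each time around the loop. */
    static void fill_after(int *a, int c1, int c2)
    {
        int t = c2;                /* value of the expression at i = 0 */
        for (int i = 0; i < N; i++) {
            a[i] = t;
            t += c1;
        }
    }

    int main(void)
    {
        int x[N], y[N];
        fill_before(x, 3, 7);
        fill_after(y, 3, 7);
        for (int i = 0; i < N; i++)
            assert(x[i] == y[i]);  /* the transformation preserves results */
        return 0;
    }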