Code Size Optimization for Embedded Processors


UCAM-CL-TR-607
Technical Report
ISSN 1476-2986
Number 607

University of Cambridge Computer Laboratory

Code size optimization for embedded processors

Neil E. Johnson

November 2004

15 JJ Thomson Avenue
Cambridge CB3 0FD
United Kingdom
phone +44 1223 763500
http://www.cl.cam.ac.uk/

© 2004 Neil E. Johnson

This technical report is based on a dissertation submitted May 2004 by the author for the degree of Doctor of Philosophy to the University of Cambridge, Robinson College.

Technical reports published by the University of Cambridge Computer Laboratory are freely available via the Internet: http://www.cl.cam.ac.uk/TechReports/

Abstract

This thesis studies the problem of reducing the code size produced by an optimizing compiler. We develop the Value State Dependence Graph (VSDG) as a powerful intermediate form. Nodes represent computation, and edges represent value (data) and state (control) dependencies between nodes. The edges specify a partial ordering of the nodes, sufficient to maintain the I/O semantics of the source program while allowing optimizers greater freedom to move nodes within the program to achieve better (smaller) code. Optimizations, both classical and new, transform the graph through graph rewriting rules prior to code generation. Additional (semantically inessential) state edges are added to transform the VSDG into a Control Flow Graph, from which target code is generated.

We show how procedural abstraction can be advantageously applied to the VSDG. Graph patterns are extracted from a program's VSDG. We then select the repeated patterns giving the greatest size reduction, generate new functions from these patterns, and replace all occurrences of the patterns in the original VSDG with calls to these abstracted functions.

Several embedded processors have load- and store-multiple instructions, representing several loads (or stores) as one instruction. We present a method, benefiting from the VSDG form, for using these instructions to reduce code size by provisionally combining loads and stores before code generation. (A hedged C illustration of this idea appears after the contents list below.)

The final contribution of this thesis is a combined register allocation and code motion (RACM) algorithm. We show that our RACM algorithm formulates these two previously antagonistic phases as one combined pass over the VSDG, transforming the graph (moving or cloning nodes, or spilling edges) to fit within the physical resources of the target processor.

We have implemented our ideas within a prototype C compiler and suite of VSDG optimizers, generating code for the Thumb 32-bit processor. Our results show improvements for each optimization, and that we can achieve code sizes comparable to, and in some cases better than, those produced by commercial compilers with significant investments in optimization technology.

Contents

1 Introduction
  1.1 Compilation and Optimization
    1.1.1 What is a Compiler?
    1.1.2 Intermediate Code Optimization
    1.1.3 The Phase Order Problem
  1.2 Size Reducing Optimizations
    1.2.1 Compaction and Compression
    1.2.2 Procedural Abstraction
    1.2.3 Multiple Memory Access Optimization
    1.2.4 Combined Code Motion and Register Allocation
  1.3 Experimental Framework
  1.4 Thesis Outline

2 Prior Art
  2.1 A Cornucopia of Program Graphs
    2.1.1 Control Flow Graph
    2.1.2 Data Flow Graph
    2.1.3 Program Dependence Graph
    2.1.4 Program Dependence Web
    2.1.5 Click's IR
    2.1.6 Value Dependence Graph
    2.1.7 Static Single Assignment
    2.1.8 Gated Single Assignment
  2.2 Choosing a Program Graph
    2.2.1 Best Graph for Control Flow Optimization
    2.2.2 Best Graph for Loop Optimization
    2.2.3 Best Graph for Expression Optimization
    2.2.4 Best Graph for Whole Program Optimization
  2.3 Introducing the Value State Dependence Graph
    2.3.1 Control Flow Optimization
    2.3.2 Loop Optimization
    2.3.3 Expression Optimization
    2.3.4 Whole Program Optimization
  2.4 Our Approaches to Code Compaction
    2.4.1 Procedural Abstraction
    2.4.2 Multiple Memory Access Optimization
    2.4.3 Combining Register Allocation and Code Motion
  2.5 1000₂ Code Compacting Optimizations
    2.5.1 Procedural Abstraction
    2.5.2 Cross Linking
    2.5.3 Algebraic Reassociation
    2.5.4 Address Code Optimization
    2.5.5 Leaf Function Optimization
    2.5.6 Type Conversion Optimization
    2.5.7 Dead Code Elimination
    2.5.8 Unreachable Code Elimination
  2.6 Summary

3 The Value State Dependence Graph
  3.1 A Critique of the Program Dependence Graph
    3.1.1 Definition of the Program Dependence Graph
    3.1.2 Weaknesses of the Program Dependence Graph
  3.2 Graph Theoretic Foundations
    3.2.1 Dominance and Post-Dominance
    3.2.2 The Dominance Relation
    3.2.3 Successors and Predecessors
    3.2.4 Depth From Root
  3.3 Definition of the Value State Dependence Graph
    3.3.1 Node Labelling with Instructions
  3.4 Semantics of the VSDG
    3.4.1 The VSDG's Pull Semantics
    3.4.2 A Brief Summary of Push Semantics
    3.4.3 Equivalence Between Push and Pull Semantics
    3.4.4 The Benefits of Pull Semantics
  3.5 Properties of the VSDG
    3.5.1 VSDG Well-Formedness
    3.5.2 VSDG Normalization
    3.5.3 Correspondence Between θ-nodes and GSA Form
  3.6 Compiling to VSDGs
    3.6.1 The LCC Compiler
    3.6.2 VSDG File Description
    3.6.3 Compiling Functions
    3.6.4 Compiling Expressions
    3.6.5 Compiling if Statements
    3.6.6 Compiling Loops
  3.7 Handling Irreducibility
    3.7.1 The Reducibility Property
    3.7.2 Irreducible Programs in the Real World
    3.7.3 Eliminating Irreducibility
  3.8 Classical Optimizations and the VSDG
    3.8.1 Dead Node Elimination
    3.8.2 Common Subexpression Elimination
    3.8.3 Loop-Invariant Code Motion
    3.8.4 Partial Redundancy Elimination
    3.8.5 Reassociation
    3.8.6 Constant Folding
    3.8.7 γ Folding
  3.9 Summary

4 Procedural Abstraction via Patterns
  4.1 Pattern Abstraction Algorithm
  4.2 Pattern Generation
    4.2.1 Pattern Generation Algorithm
    4.2.2 Analysis of Pattern Generation Algorithm
  4.3 Pattern Selection
    4.3.1 Pattern Cost Model
    4.3.2 Observations on the Cost Model
    4.3.3 Overlapping Patterns
  4.4 Abstracting the Chosen Pattern
    4.4.1 Generating the Abstract Function
    4.4.2 Generating Abstract Function Calls
  4.5 Summary

5 Multiple Memory Access Optimization
  5.1 Examples of MMA Instructions
  5.2 Simple Offset Assignment
  5.3 Multiple Memory Access on the Control Flow Graph
    5.3.1 Generic MMA Instructions
    5.3.2 Access Graph and Access Paths
    5.3.3 Construction of the Access Graph
    5.3.4 SOLVEMMA and Maximum Weight Path Covering
    5.3.5 The Phase Order Problem
    5.3.6 Scheduling SOLVEMMA Within A Compiler
    5.3.7 Complexity of Heuristic Algorithm
  5.4 Multiple Memory Access on the VSDG
    5.4.1 Modifying SOLVEMMA for the VSDG
  5.5 Target-Specific MMA Instructions
  5.6 Motivating Example …
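To make the load- and store-multiple idea concrete, here is a small illustration that is not taken from the thesis: a C function whose three adjacent word loads an MMA-aware code generator can combine into one ARM load-multiple instruction. The function, struct, and register assignments are hypothetical.

```c
/* Illustrative only: three adjacent word loads that a code
 * generator with multiple-memory-access support can combine
 * into a single load-multiple instruction. */
typedef struct { int x, y, z; } vec3;

int sum3(const vec3 *v)
{
    /* Naive code performs three separate loads, e.g. on ARM:
     *     ldr r1, [r0]
     *     ldr r2, [r0, #4]
     *     ldr r3, [r0, #8]
     * With MMA optimization these become one instruction:
     *     ldmia r0, {r1, r2, r3}
     */
    return v->x + v->y + v->z;
}
```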
Recommended publications
  • Polyhedral Compilation As a Design Pattern for Compiler Construction
Polyhedral Compilation as a Design Pattern for Compiler Construction. PLISS, May 19-24, 2019. [email protected]

Polyhedra? Example: tiles. (How many of you read "Design Patterns"?)

Tiles are everywhere:

1. Hardware: e.g., the Google Cloud TPU (architectural scalability with tiling) and the Google Edge TPU (the edge-computing zoo).
2. Data layout: e.g., the XLA compiler's tiled data layout; repeated/hierarchical tiling, e.g., BF16 (bfloat16) on Cloud TPU (should be 8x128, then 2x1).
3. Control flow: tiling in Halide. A tiled schedule is strip-mine (a.k.a. split) plus permute (a.k.a. reorder); a vectorized schedule is strip-mine plus vectorize the inner loop; non-divisible bounds/extents need strip-mine plus shift left/up plus redundant computation (also forward substitute/inline the operand).
4. Data flow.
5. Data parallelism.

Example: Halide for image processing pipelines (https://halide-lang.org), a meta-programming API and domain-specific language (DSL) for loop transformations and numerical computing kernels.

TVM example: a scan cell (RNN), scheduled to run on a CUDA device:

```python
m = tvm.var("m")
n = tvm.var("n")
X = tvm.placeholder((m, n), name="X")
s_state = tvm.placeholder((m, n))
s_init = tvm.compute((1, n), lambda _, i: X[0, i])
s_do = tvm.compute((m, n), lambda t, i: s_state[t-1, i] + X[t, i])
s_scan = tvm.scan(s_init, s_do, s_state, inputs=[X])
s = tvm.create_schedule(s_scan.op)
# Schedule to run the scan cell on a CUDA device
block_x = tvm.thread_axis("blockIdx.x")
thread_x = tvm.thread_axis("threadIdx.x")
xo, xi = s[s_init].split(s_init.op.axis[1], factor=num_thread)
s[s_init].bind(xo, block_x)
s[s_init].bind(xi, thread_x)
xo, xi = s[s_do].split(s_do.op.axis[1], factor=num_thread)
s[s_do].bind(xo, block_x)
s[s_do].bind(xi, thread_x)
print(tvm.lower(s, [X, s_scan], simple_mode=True))
```

And also TVM for neural networks: https://tvm.ai. Tiling and beyond …
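As a plain-C companion to the strip-mine-and-permute vocabulary above (the excerpt uses Halide/TVM scheduling DSLs), here is a minimal sketch of loop tiling. The TILE size and the matmul_tiled name are illustrative choices, not from the slides; the clamped inner bounds handle the non-divisible-extent case the slides mention.

```c
/* Minimal loop tiling sketch: strip-mine each loop, then permute
 * so the inter-tile loops surround the intra-tile loops. */
#define TILE 32

void matmul_tiled(int n, double A[n][n], double B[n][n], double C[n][n])
{
    for (int ii = 0; ii < n; ii += TILE)
        for (int jj = 0; jj < n; jj += TILE)
            for (int kk = 0; kk < n; kk += TILE)
                /* Intra-tile loops; bounds clamped for non-divisible n. */
                for (int i = ii; i < ii + TILE && i < n; i++)
                    for (int j = jj; j < jj + TILE && j < n; j++)
                        for (int k = kk; k < kk + TILE && k < n; k++)
                            C[i][j] += A[i][k] * B[k][j];
}
```

The payoff is locality: each TILE x TILE block of A, B, and C is reused while it is still in cache, which is exactly the hardware/data-layout/control-flow correspondence the slides draw.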
  • CS153: Compilers Lecture 19: Optimization
CS153: Compilers. Lecture 19: Optimization. Stephen Chong, https://www.seas.harvard.edu/courses/cs153. Contains content from lecture notes by Steve Zdancewic and Greg Morrisett.

Announcements:
• HW5: Oat v.2 out. Due in 2 weeks.
• HW6 will be released next week. Implementing optimizations! (and more)

Today: optimizations; safety; constant folding; algebraic simplification; strength reduction; constant propagation; copy propagation; dead code elimination; inlining and specialization; recursive function inlining; tail call elimination; common subexpression elimination.

Optimizations. The code generated by our OAT compiler so far is pretty inefficient: lots of redundant moves, and lots of unnecessary arithmetic instructions. Consider this OAT program:

```
int foo(int w) {
  var x = 3 + 5;
  var y = x * w;
  var z = y - 0;
  return z * 4;
}
```

Unoptimized vs. optimized output. The hand-optimized code is:

```
_foo:
  shlq $5, %rdi
  movq %rdi, %rax
  ret
```

while the unoptimized output begins:

```
.globl _foo
_foo:
  pushl %ebp
  movl %esp, %ebp
  subl $64, %esp
__fresh2:
  leal -64(%ebp), %eax
  movl %eax, -48(%ebp)
  movl 8(%ebp), %eax
  movl %eax, %ecx
  movl -48(%ebp), %eax
  movl %ecx, (%eax)
  movl $3, %eax
  movl %eax, -44(%ebp)
  movl $5, %eax
  movl %eax, %ecx
  addl %ecx, -44(%ebp)
  leal -60(%ebp), %eax
  movl %eax, -40(%ebp)
  movl -44(%ebp), %eax
  movl %eax, %ecx
  …
```

Function foo may be inlined by the compiler, so it can be implemented by just one instruction!

Why do we need optimizations? To help programmers: they write modular, clean, high-level programs, and the compiler generates efficient, high-performance assembly. Programmers don't write optimal code, and high-level languages make avoiding redundant computation inconvenient or impossible, e.g. …
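For readers who want the chain of passes spelled out, here is a hedged C rendering of what the listed optimizations do to foo. The lecture's source language is OAT, so this is an approximation, and foo_optimized is a hypothetical name.

```c
/* Step-by-step view of the optimization chain on foo. */
int foo(int w)
{
    int x = 3 + 5;   /* constant folding:         x = 8          */
    int y = x * w;   /* constant propagation:     y = 8 * w      */
    int z = y - 0;   /* algebraic simplification: z = y          */
    return z * 4;    /* copy propagation:  (8 * w) * 4 = 32 * w  */
}

/* After the chain above, plus strength reduction (w * 32 == w << 5)
 * and dead-code elimination of x, y, and z: */
int foo_optimized(int w)
{
    return w << 5;   /* one shift, matching "shlq $5, %rdi" */
}
```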
  • 9. Optimization
9. Optimization. Marcus Denker.

Roadmap: introduction; optimizations in the back-end; the optimizer; SSA optimizations; advanced optimizations.

Optimization, the idea: transform the program to improve efficiency. Performance: faster execution. Size: smaller executable, smaller memory footprint. Tradeoffs: 1) performance vs. size; 2) compilation speed and memory.

No magic bullet: there is no perfect optimizer. Example: optimize for simplicity. Let Opt(P) be the smallest program equivalent to P, and let Q be a program with no output that does not stop. What is Opt(Q)? It is `L1: goto L1`, so computing Opt would solve the halting problem.

Another way to look at it. Rice (1953): for every compiler there is a modified compiler that generates shorter code. Proof: assume there is a compiler U that generates the shortest optimized program Opt(P) for all P. Take P to be a program that does not stop and has no output; then Opt(P) will be `L1: goto L1`, which decides the halting problem. Thus U does not exist. There will …
  • Cross-Platform Language Design
Cross-Platform Language Design. (THIS IS A TEMPORARY TITLE PAGE: it will be replaced for the final print by a version provided by the service académique.) Thèse n. 1234 2011, presented on 11 June 2018 to the Faculté Informatique et Communications, Laboratoire de Méthodes de Programmation 1, doctoral program in Informatique et Communications, École Polytechnique Fédérale de Lausanne, for the degree of Docteur ès Sciences, by Sébastien Doeraene. Accepted on the proposal of the jury: Prof James Larus, jury president; Prof Martin Odersky, thesis director; Prof Edouard Bugnion, rapporteur; Dr Andreas Rossberg, rapporteur; Prof Peter Van Roy, rapporteur. Lausanne, EPFL, 2018.

"It is better to repent a sin than regret the loss of a pleasure." — Oscar Wilde

Acknowledgments. Although there is only one name written in a large font on the front page, there are many people without whom this thesis would never have happened, or would not have been quite the same. Five years is a long time, during which I had the privilege to work, discuss, sing, learn and have fun with many people. I am afraid to make a list, for I am sure I will forget some. Nevertheless, I will try my best. First, I would like to thank my advisor, Martin Odersky, for giving me the opportunity to fulfill a dream: that of being part of the design and development team of my favorite programming language. Many thanks for letting me explore the design of Scala.js in my own way, while at the same time always being there when I needed him.
  • Value Numbering
SOFTWARE—PRACTICE AND EXPERIENCE, VOL. 0(0), 1–18 (MONTH 1900)

Value Numbering. PRESTON BRIGGS, Tera Computer Company, 2815 Eastlake Avenue East, Seattle, WA 98102; KEITH D. COOPER and L. TAYLOR SIMPSON, Rice University, 6100 Main Street, Mail Stop 41, Houston, TX 77005.

SUMMARY: Value numbering is a compiler-based program analysis method that allows redundant computations to be removed. This paper compares hash-based approaches derived from the classic local algorithm [1] with partitioning approaches based on the work of Alpern, Wegman, and Zadeck [2]. Historically, the hash-based algorithm has been applied to single basic blocks or extended basic blocks. We have improved the technique to operate over the routine's dominator tree. The partitioning approach partitions the values in the routine into congruence classes and removes computations when one congruent value dominates another. We have extended this technique to remove computations that define a value in the set of available expressions (AVAIL) [3]. Also, we are able to apply a version of Morel and Renvoise's partial redundancy elimination [4] to remove even more redundancies. The paper presents a series of hash-based algorithms and a series of refinements to the partitioning technique. Within each series, it can be proved that each method discovers at least as many redundancies as its predecessors. Unfortunately, no such relationship exists between the hash-based and global techniques. On some programs, the hash-based techniques eliminate more redundancies than the partitioning techniques, while on others, partitioning wins. We experimentally compare the improvements made by these techniques when applied to real programs. These results will be useful for commercial compiler writers who wish to assess the potential impact of each technique before implementation.
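As an illustration of the hash-based family the summary starts from, here is a minimal C sketch of local value numbering over one basic block. It is our simplification, not the paper's algorithm: Key, the fixed-size table, and the linear scan standing in for a hash lookup are all invented for brevity.

```c
/* Local value numbering sketch: assign each computed value a
 * number; identical (op, operand-VN) triples get the same number,
 * exposing redundant computations. */
#include <stdio.h>

enum { MAXVN = 256 };

typedef struct { char op; int l, r; } Key;   /* operator + operand VNs */
static Key table[MAXVN];
static int nvn = 1;                          /* next value number */

/* Return the value number for (op, l, r), creating one if unseen.
 * Commutative operators are canonicalized so a+b matches b+a. */
static int value_number(char op, int l, int r)
{
    if ((op == '+' || op == '*') && l > r) { int t = l; l = r; r = t; }
    for (int i = 1; i < nvn; i++)            /* hash lookup, simplified */
        if (table[i].op == op && table[i].l == l && table[i].r == r)
            return i;                        /* redundant: reuse the VN */
    table[nvn] = (Key){ op, l, r };
    return nvn++;
}

int main(void)
{
    /* Basic block:  t1 = a + b;  t2 = b + a;  t3 = t1 * 2 */
    int a   = value_number('a', 0, 0);       /* leaf: variable a   */
    int b   = value_number('b', 0, 0);       /* leaf: variable b   */
    int t1  = value_number('+', a, b);
    int t2  = value_number('+', b, a);       /* same VN as t1      */
    int two = value_number('2', 0, 0);       /* leaf: constant 2   */
    int t3  = value_number('*', t1, two);
    printf("t1=%d t2=%d t3=%d\n", t1, t2, t3);  /* t1 == t2 */
    return 0;
}
```

The dominator-tree and partitioning refinements the paper develops extend exactly this numbering beyond a single block.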
  • Language and Compiler Support for Dynamic Code Generation by Massimiliano A
Language and Compiler Support for Dynamic Code Generation, by Massimiliano A. Poletto. S.B., Massachusetts Institute of Technology (1995); M.Eng., Massachusetts Institute of Technology (1995). Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the MASSACHUSETTS INSTITUTE OF TECHNOLOGY, September 1999. © Massachusetts Institute of Technology 1999. All rights reserved.

Author: Department of Electrical Engineering and Computer Science, June 23, 1999. Certified by: M. Frans Kaashoek, Associate Professor of Electrical Engineering and Computer Science, Thesis Supervisor. Accepted by: Arthur C. Smith, Chairman, Departmental Committee on Graduate Students.

Abstract: Dynamic code generation, also called run-time code generation or dynamic compilation, is the creation of executable code for an application while that application is running. Dynamic compilation can significantly improve the performance of software by giving the compiler access to run-time information that is not available to a traditional static compiler. A well-designed programming interface to dynamic compilation can also simplify the creation of important classes of computer programs. Until recently, however, no system combined efficient dynamic generation of high-performance code with a powerful and portable language interface. This thesis describes a system that meets these requirements, and discusses several applications of dynamic compilation.
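To ground the idea of creating executable code at run time, here is a minimal, hedged sketch in C. It is not the thesis's system; it assumes an x86-64 POSIX platform where a page can be mapped readable, writable, and executable (systems enforcing W^X need a separate mprotect step), and the byte sequence is the x86-64 encoding of `mov eax, imm32; ret`.

```c
/* Dynamic code generation sketch: build a specialized "return n"
 * function in an executable buffer, then call it. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

typedef int (*fn0)(void);

static fn0 make_const_fn(int n)
{
    unsigned char code[6] = { 0xb8, 0, 0, 0, 0, 0xc3 }; /* mov eax,imm32; ret */
    memcpy(code + 1, &n, 4);                            /* patch the immediate */

    void *buf = mmap(NULL, sizeof code,
                     PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return NULL;
    memcpy(buf, code, sizeof code);
    return (fn0)buf;   /* POSIX JIT idiom; not strictly ISO C */
}

int main(void)
{
    fn0 f = make_const_fn(42);
    if (f) printf("%d\n", f());    /* prints 42 */
    return 0;
}
```

The run-time information here is trivial (the constant 42), but the mechanism is the same one that lets a dynamic compiler specialize code to values known only at execution time.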
  • Transparent Dynamic Optimization: the Design and Implementation of Dynamo
Transparent Dynamic Optimization: The Design and Implementation of Dynamo. Vasanth Bala, Evelyn Duesterwald, Sanjeev Banerjia. HP Laboratories Cambridge, HPL-1999-78, June 1999. E-mail: [email protected]

Keywords: dynamic optimization, compiler, trace selection, binary translation.

Dynamic optimization refers to the runtime optimization of a native program binary. This report describes the design and implementation of Dynamo, a prototype dynamic optimizer that is capable of optimizing a native program binary at runtime. Dynamo is a realistic implementation, not a simulation, that is written entirely in user-level software, and runs on a PA-RISC machine under the HPUX operating system. Dynamo does not depend on any special programming language, compiler, operating system or hardware support. Contrary to intuition, we demonstrate that it is possible to use a piece of software to improve the performance of a native, statically optimized program binary, while it is executing. Dynamo not only speeds up real application programs, its performance improvement is often quite significant. For example, the performance of many +O2 optimized SPECint95 binaries running under Dynamo is comparable to the performance of their +O4 optimized versions running without Dynamo.

© Copyright Hewlett-Packard Company 1999

Contents: 1 Introduction; 2 Related Work; 3 Overview …
  • Phase-Ordering in Optimizing Compilers
Phase-ordering in optimizing compilers. Matthieu Quéva. Kongens Lyngby 2007, IMM-MSC-2007-71. Technical University of Denmark, Informatics and Mathematical Modelling, Building 321, DK-2800 Kongens Lyngby, Denmark. Phone +45 45253351, Fax +45 45882673, [email protected], www.imm.dtu.dk

Summary: The "quality" of code generated by compilers largely depends on the analyses and optimizations applied to the code during the compilation process. While modern compilers could choose from a plethora of optimizations and analyses, in current compilers the order of these pairs of analyses/transformations is fixed once and for all by the compiler developer. Of course there exist some flags that allow a marginal control of what is executed and how, but the most important source of information regarding what analyses/optimizations to run is ignored: the source code. Indeed, some optimizations might be better applied to some source code, while others would be preferable on another.

A new compilation model is developed in this thesis by implementing a Phase Manager. This Phase Manager has a set of analyses/transformations available, and can rank the different possible optimizations according to the current state of the intermediate representation. Based on this ranking, the Phase Manager can decide which phase should be run next. Such a Phase Manager has been implemented for a compiler for a simple imperative language, the while language, which includes several data-flow analyses. The new approach consists in calculating coefficients, called metrics, after each optimization phase. These metrics are used to evaluate where the transformations will be applicable, and are used by the Phase Manager to rank the phases.
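A minimal sketch of the Phase Manager control loop described above: after each pass, metrics are computed on the IR and the highest-ranked remaining phase runs next. The toy IR, both phases, and the metrics themselves are invented for illustration; they are not the thesis's while-language analyses.

```c
/* Metric-driven phase ordering sketch. */
#include <stdio.h>

/* Toy IR: just counters that the metric functions inspect. */
typedef struct { int foldable_consts; int dead_stmts; } IR;

typedef struct {
    const char *name;
    double (*rank)(const IR *);   /* metric: how applicable is the phase? */
    void   (*run)(IR *);          /* the transformation itself */
} Phase;

static double rank_fold(const IR *ir) { return ir->foldable_consts; }
static void   run_fold(IR *ir) { ir->foldable_consts = 0; ir->dead_stmts += 2; }
static double rank_dce(const IR *ir) { return ir->dead_stmts; }
static void   run_dce(IR *ir) { ir->dead_stmts = 0; }

int main(void)
{
    IR ir = { .foldable_consts = 3, .dead_stmts = 1 };
    Phase phases[] = { { "constant-folding", rank_fold, run_fold },
                       { "dead-code-elim",   rank_dce,  run_dce  } };
    for (int step = 0; step < 8; step++) {
        int best = -1;
        double best_rank = 0.0;               /* skip phases ranked 0 */
        for (int i = 0; i < 2; i++) {
            double r = phases[i].rank(&ir);
            if (r > best_rank) { best_rank = r; best = i; }
        }
        if (best < 0) break;                  /* nothing applicable */
        printf("running %s (rank %.0f)\n", phases[best].name, best_rank);
        phases[best].run(&ir);
    }
    return 0;
}
```

Note how folding creates dead statements, which in turn raises the ranking of dead-code elimination: the ordering emerges from the metrics rather than being fixed in advance.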
  • Optimizing for Reduced Code Space Using Genetic Algorithms
Optimizing for Reduced Code Space using Genetic Algorithms. Keith D. Cooper, Philip J. Schielke, and Devika Subramanian. Department of Computer Science, Rice University, Houston, Texas, USA. {keith | phisch | devika}@cs.rice.edu

Abstract: Code space is a critical issue facing designers of software for embedded systems. Many traditional compiler optimizations are designed to reduce the execution time of compiled code, but not necessarily the size of the compiled code. Further, different results can be achieved by running some optimizations more than once and changing the order in which optimizations are applied. Register allocation only complicates matters, as the interactions between different optimizations can cause more spill code to be generated. The compiler for embedded systems, then, must take care to use the best sequence of optimizations to minimize code space. Since much of the code for embedded systems is compiled once and then burned into ROM, the software designer will often tolerate much longer compile times in the hope of reducing the size of the compiled code.

It is virtually impossible to select the best set of optimizations to run on a particular piece of code. Historically, compiler writers have made one of two assumptions: either a fixed optimization order is "good enough" for all programs, or giving the user a large set of flags that control optimization is sufficient, because it shifts the burden onto the user. Interplay between optimizations occurs frequently. A transformation can create opportunities for other transformations; similarly, a transformation can eliminate opportunities for other transformations. These interactions also depend on the program being compiled, and they are often difficult to predict. Multiple applications of the same transformation at different points in the optimization sequence …
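The search over optimization sequences can be sketched as a tiny genetic algorithm. Everything here is hypothetical, not the paper's experimental setup: in particular, code_size() is a stub that merely penalizes adjacent repeated phases, whereas the paper measures real compiled code.

```c
/* GA over phase orderings: smaller fitness (code size) is better. */
#include <stdio.h>
#include <stdlib.h>

#define NPHASES 8    /* phases numbered 0..7          */
#define SEQLEN  10   /* length of each phase sequence */
#define POP     20
#define GENS    50

/* Stub fitness: pretend adjacent repeats of a phase are wasteful. */
static int code_size(const int seq[SEQLEN])
{
    int size = 1000;
    for (int i = 1; i < SEQLEN; i++)
        size += (seq[i] == seq[i-1]) ? 10 : -5;
    return size;
}

int main(void)
{
    int pop[POP][SEQLEN];
    srand(1);
    for (int p = 0; p < POP; p++)
        for (int i = 0; i < SEQLEN; i++)
            pop[p][i] = rand() % NPHASES;

    for (int g = 0; g < GENS; g++) {
        /* Keep the two fittest individuals. */
        int b0 = 0, b1 = 1;
        if (code_size(pop[1]) < code_size(pop[0])) { b0 = 1; b1 = 0; }
        for (int p = 2; p < POP; p++) {
            int s = code_size(pop[p]);
            if (s < code_size(pop[b0]))      { b1 = b0; b0 = p; }
            else if (s < code_size(pop[b1])) { b1 = p; }
        }
        /* Replace the rest with mutated crossovers of the two best. */
        for (int p = 0; p < POP; p++) {
            if (p == b0 || p == b1) continue;
            int cut = rand() % SEQLEN;
            for (int i = 0; i < SEQLEN; i++)
                pop[p][i] = (i < cut) ? pop[b0][i] : pop[b1][i];
            pop[p][rand() % SEQLEN] = rand() % NPHASES;  /* mutation */
        }
    }
    int best = 0;
    for (int p = 1; p < POP; p++)
        if (code_size(pop[p]) < code_size(pop[best])) best = p;
    printf("best size: %d\n", code_size(pop[best]));
    return 0;
}
```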
  • Generalizing Loop-Invariant Code Motion in a Real-World Compiler
Imperial College London, Department of Computing. Generalizing loop-invariant code motion in a real-world compiler. Author: Paul Colea. Supervisor: Fabio Luporini. Co-supervisor: Prof. Paul H. J. Kelly. MEng Computing Individual Project, June 2015.

Abstract: Motivated by the perpetual goal of automatically generating efficient code from high-level programming abstractions, compiler optimization has developed into an area of intense research. Apart from general-purpose transformations which are applicable to all or most programs, many highly domain-specific optimizations have also been developed. In this project, we extend such a domain-specific compiler optimization, initially described and implemented in the context of finite element analysis, to one that is suitable for arbitrary applications. Our optimization is a generalization of loop-invariant code motion, a technique which moves invariant statements out of program loops. The novelty of the transformation is due to its ability to avoid more redundant recomputation than normal code motion, at the cost of additional storage space. This project provides a theoretical description of the above technique which is fit for general programs, together with an implementation in LLVM, one of the most successful open-source compiler frameworks. We introduce a simple heuristic-driven profitability model which manages to successfully safeguard against potential performance regressions, at the cost of missing some speedup opportunities. We evaluate the functional correctness of our implementation using the comprehensive LLVM test suite, passing all of its 497 whole-program tests. The results of our performance evaluation using the same set of tests reveal that generalized code motion is applicable to many programs, but that consistent performance gains depend on an accurate cost model.
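A hedged before/after sketch of the generalization described in the abstract: ordinary LICM cannot hoist f(j) out of the i-loop because it varies with j, but precomputing the j-dependent subexpression trades extra storage for avoided recomputation. The code is illustrative, not taken from the project.

```c
/* Generalized loop-invariant code motion: trade storage for work. */
#define N 1024

static double f(int j)          /* stand-in for an expensive pure function */
{
    return j * 0.5 + 1.0;
}

void before(double out[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            out[i][j] = i * f(j);      /* f(j) recomputed N*N times */
}

void after(double out[N][N])
{
    static double fj[N];               /* the additional storage */
    for (int j = 0; j < N; j++)
        fj[j] = f(j);                  /* f computed only N times */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            out[i][j] = i * fj[j];
}
```

This also shows why the project needs a profitability model: if f were cheap, the extra array traffic could make `after` slower than `before`.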
  • A Tiling Perspective for Register Optimization Fabrice Rastello, Sadayappan Ponnuswany, Duco Van Amstel
A Tiling Perspective for Register Optimization. Łukasz Domagała, Fabrice Rastello, Sadayappan Ponnuswany, Duco van Amstel. Inria Research Report RR-8541, Project-Team GCG, May 2014, 21 pages. ISSN 0249-6399, ISRN INRIA/RR--8541--FR+ENG. HAL Id: hal-00998915, https://hal.inria.fr/hal-00998915, submitted on 3 Jun 2014. (HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.)

Abstract: Register allocation is a much studied problem. A particularly important context for optimizing register allocation is within loops, since a significant fraction of the execution time of programs is often spent inside loop code. A variety of algorithms have been proposed in the past for register allocation, but the complexity of the problem has resulted in a decoupling of several important aspects, including loop unrolling, register promotion, and instruction reordering.
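To illustrate the coupled aspects the abstract lists, here is a small C example of ours (not the report's algorithm) where register promotion and loop unrolling interact with register pressure.

```c
/* Register promotion + unrolling in a loop. */
void before(int n, const int *a, int *sum)
{
    for (int i = 0; i < n; i++)
        *sum += a[i];            /* *sum loaded and stored every trip */
}

void after(int n, const int *a, int *sum)
{
    int s = *sum;                /* register promotion of *sum */
    int i = 0;
    for (; i + 4 <= n; i += 4)   /* unroll by 4: more reuse and ILP,
                                    but more simultaneously live values,
                                    i.e. higher register pressure */
        s += a[i] + a[i+1] + a[i+2] + a[i+3];
    for (; i < n; i++)           /* epilogue for leftover iterations */
        s += a[i];
    *sum = s;
}
```

Choosing the unroll factor and the set of promoted values together, rather than in decoupled phases, is precisely the kind of trade-off the report's tiling perspective targets.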
  • Efficient Symbolic Analysis for Optimizing Compilers*
Efficient Symbolic Analysis for Optimizing Compilers. Robert A. van Engelen, Dept. of Computer Science, Florida State University, Tallahassee, FL 32306-4530, [email protected]

Abstract: Because most of the execution time of a program is typically spent in loops, loop optimization is the main target of optimizing and restructuring compilers. An accurate determination of induction variables and dependencies in loops is of paramount importance to many loop optimization and parallelization techniques, such as generalized loop strength reduction, loop parallelization by induction variable substitution, and loop-invariant expression elimination. In this paper we present a new method for induction variable recognition. Existing methods are either ad hoc and not powerful enough to recognize some types of induction variables, or powerful but not safe. The most powerful method known is the symbolic differencing method, as demonstrated by the Parafrase-2 compiler on parallelizing the Perfect Benchmarks. However, symbolic differencing is inherently unsafe, and a compiler that uses this method may produce incorrectly transformed programs without issuing a warning. In contrast, our method is safe, simpler to implement in a compiler, better adaptable for controlling loop transformations, and recognizes a larger class of induction variables.

1 Introduction: It is well known that the optimization and parallelization of scientific applications by restructuring compilers requires extensive analysis of induction variables and dependencies …
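A small self-contained example of the kind of generalized induction variable such an analysis must recognize: k below is not updated by a loop-invariant amount, yet it has the polynomial closed form 2*i*i. The variable names and bounds are arbitrary illustrations.

```c
/* A polynomial induction variable: k(i+1) = k(i) + 4i + 2,
 * so k(i) = 2*i*i at the top of iteration i. */
#include <stdio.h>

int main(void)
{
    int a[200] = {0};
    int k = 0;
    for (int i = 0; i < 10; i++) {
        a[k] = i;          /* k is a (polynomial) induction variable */
        k += 4 * i + 2;    /* non-constant increment */
    }
    /* Once k is recognized, the access can be rewritten without the
     * loop-carried variable, enabling dependence tests and
     * parallelization: */
    for (int i = 0; i < 10; i++)
        printf("a[%d] = %d\n", 2*i*i, a[2*i*i]);
    return 0;
}
```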