Inline Function Expansion for Compiling C Programs


Wen-mei W. Hwu and Pohua P. Chang
Coordinated Science Laboratory
University of Illinois
1101 W. Springfield Ave.
Urbana, IL 61801
(217) 244-8270
[email protected]

Abstract

Inline function expansion replaces a function call with the function body. With automatic inline function expansion, programs can be constructed with many small functions to handle complexity and then rely on the compilation to eliminate most of the function calls. Therefore, inline expansion serves as a tool for satisfying two conflicting goals: minimizing the complexity of program development and minimizing the function call overhead of program execution. A simple inline expansion procedure is presented which uses profile information to address three critical issues: code expansion, stack expansion, and unavailable function bodies. Experiments show that a large percentage of function calls/returns (about 59%) can be eliminated with a modest code expansion cost (about 17%) for twelve UNIX* programs.

* UNIX is a trademark of the AT&T Bell Laboratories.

1. Introduction

Large computing tasks are often divided into many smaller subtasks which can be more easily developed, understood, and reused. Function definition and invocation in high level languages provide a natural means to define and coordinate subtasks to perform the original task. Structured programming techniques therefore encourage the use of functions. Unfortunately, function invocation disrupts compile-time code optimizations such as register allocation, code compaction, common subexpression elimination, and constant propagation. The decreased effectiveness of these optimization techniques increases memory accesses, decreases pipeline efficiency, and increases redundant computation.

Emer and Clark reported that, for a composite VAX workload, 4.5% of all dynamic instructions are procedure calls and returns [1]. If we assume equal numbers of call and return instructions, the above number indicates that there is a function call instruction for every 44 instructions executed. Berkeley RISC researchers have reported that the procedure call is the most costly source language statement [2].

1.1. Existing Remedies

Some recent processors provide hardware support for minimizing the extra memory accesses due to function calls. For example, the Berkeley RISC processors provide overlapping register windows to reduce the number of memory accesses required to save/restore registers and to pass parameters [2]. Another example is the CRISP processor, which uses stack buffers to capture the memory accesses to local variables so that register allocation crossing function calls can be simulated in hardware [3]. The problems with these hardware approaches are that they tend to consume a significant amount of hardware, stretch the processor cycle time, and provide little assistance for enlarging the scope of compiler code optimization.

In the software realm, inter-procedural register allocation schemes have been shown to reduce the register save/restore cost across function call boundaries [4]. Callers and callees can also communicate parameters and results through a small number of registers [5]. Wall has shown that, by combining profiling and inline expansion, link-time register allocation is comparable in performance to register window schemes [6]. Inter-procedural analysis has been shown to be effective in reducing the negative effects of function calls on code scheduling and other code optimization techniques. These software remedies assume that frequent function calls cannot be avoided. If most of the function calls can be eliminated, these complicated remedies would be unnecessary.

1.2. Inline Function Expansion

Inline expansion replaces a function call with the function body. Inline expansion removes the function call/return costs and provides larger and specialized execution plans to the code optimizers. With automatic inline function expansion, the advantages of using functions remain and the costs are reduced. In a recent study, Allen and Johnson identified inline expansion as an essential part of an optimizing C compiler [7-11]. They gave a few critical reasons for implementing inline expansion. First, the variable aliasing problem becomes less onerous after inline expansion. Second, the code optimizer can work on the real effects of the callee after inlining. Third, inlining function calls contained in loops may increase the opportunities for vectorization.

Several code improving techniques may be applicable after inline expansion. These include register allocation, code scheduling, common subexpression elimination, constant propagation, and dead code elimination. Richardson and Ganapathi have discussed the effect of inline expansion and code optimization across procedures [12].
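The following before/after fragment illustrates why these optimizations become applicable after inlining. It is a constructed example, not one taken from the paper; the function names and the constant are invented for the sketch. Inlining removes the call/return overhead and lets constant folding, strength reduction, and loop optimizations see the callee's actual computation:

```c
/* Before inlining: the call hides the fact that scale() only
 * multiplies its argument by a compile-time constant.          */
static int scale(int x) { return x * 4; }

void scale_all(int *a, int n)
{
    for (int i = 0; i < n; i++)
        a[i] = scale(a[i]);   /* call/return overhead on every iteration */
}

/* After inline expansion the function body replaces the call, so
 * strength reduction (x * 4 -> x << 2) and loop optimizations such
 * as vectorization can operate on the real computation.           */
void scale_all_inlined(int *a, int n)
{
    for (int i = 0; i < n; i++)
        a[i] = a[i] << 2;
}
```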
Many optimizing compilers can perform inline expansion [11, 13-16]. The IBM PL.8 compiler does inline expansion of all leaf-level procedures [13]. In the GNU C compiler, the programmers can use the keyword inline as a hint to the compiler for inline expanding function calls [16]. In the MIPS C compiler, the compiler examines the code structure (e.g., loops) to choose the function calls for inline expansion [15]. It should be noticed that careful use of macro expansion and language preprocessing utilities has the same effect as inline expansion when the inline expansion decisions are made entirely by the programmers.

The IMPACT-I (Illinois Microarchitecture Project using Advanced Compiler Technology - stage I) C compiler expands function calls to increase the effectiveness of compiler code optimization [17-19]. As far as the hardware is concerned, the goal of inline expansion is to reduce the function calls so that mechanisms such as register windows and stack buffers become unnecessary. For compiler code optimization, inline expansion serves to enlarge the scope of register allocation, code scheduling, and other optimizations. The IMPACT-I Profiler to C Compiler interface allows the profile information to be used automatically by the IMPACT-I C Compiler. The inline expansion is based on execution profile information to ensure that only the important function calls are expanded. It is critical that the inputs used for executing the equivalent C program are representative. Therefore, this approach is more suitable for characterizing realistic programs for which representative inputs can be easily collected.

1.3. Organization of Paper

Section two discusses major implementation issues and potential hazards. Section three describes a simple inline expansion procedure. Section four shows the experimental results. Finally, we give some concluding remarks in section five.

2. Implementation Issues

The core inline expansion algorithm is fairly simple. Most of the implementation difficulties are due to hazards, missing information, and minimizing the compilation time. The implementation issues are listed as follows.

- Determine when inline expansion should be applied.
- Propose an appropriate program representation.
- Select call sites for inline expansion.
- Avoid hazards.
- Prevent code explosion.
- Prevent control stack overflow.
- Evaluate cost.
- Inline a function.
- Code duplication.
- Variable renaming.
- Changes to variable declarations.
- Resolve missing information.
- Eliminate unreachable functions.
- Reduce the number of expansions.
- Order the expansion sequence.
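To make the code duplication and variable renaming issues in the list above concrete, here is a constructed example (not one from the paper): when the callee's body is copied into the caller, any callee local whose name collides with a caller variable must be renamed, and its declaration moves into the caller.

```c
/* Caller and callee both declare a local named "t". */
static int add_bias(int x)
{
    int t = x + 10;              /* callee's local */
    return t * 2;
}

int before_inlining(int v)
{
    int t = 5;                   /* caller's local */
    return t + add_bias(v);
}

/* After inline expansion the callee's body is duplicated into the
 * caller and its local is renamed (here t -> t_1) so that the new
 * declaration does not clash with the caller's t.                  */
int after_inlining(int v)
{
    int t = 5;
    int t_1 = v + 10;
    return t + t_1 * 2;
}
```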
2.1. When to Perform Inline Function Expansion

Inline expansion should be performed before other code optimizations, such as constant folding and dead code elimination. Therefore, it is natural to perform inline expansion at compile-time. Another advantage of performing inline expansion at compile-time is that the program structures, such as loops and if-then-else constructs, are visible and can be used to make inline expansion decisions. Yet another advantage is that if inline expansion can be applied to a system-independent three-address code, inline [...]

[...] main is the first function executed in this program. Each node contains three major pieces of information: 1) the body of the function, 2) the node weight, and 3) a set of outgoing arcs which point to the callees. Each arc contains four essential pieces of information: 1) a unique identifier, 2) the caller, 3) the callee, and 4) the arc weight. It is necessary to assign each arc a unique identifier because there may be several arcs between the same pair of caller and callee. Later, we will also associate each arc with a status attribute which tells us whether this arc should be considered for inline expansion, rejected for inline expansion, or inline expanded.

The node weights and arc weights may be determined either by program structure analysis or by profiling. Since a node may be entered from any one of its incoming arcs, it is necessary to know the weights of all outgoing arcs associated with a particular incoming arc. Therefore, after inline expansion the arc weights remain accurate. It is assumed in the remaining part of this section that we have complete and accurate weights for all nodes and arcs. The most [...]
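The node and arc information described above maps naturally onto a small data structure. The following C sketch is illustrative only; the field names, types, and the status enumeration are assumptions for this sketch, not the IMPACT-I implementation:

```c
/* Opaque placeholder for whatever intermediate form holds the code. */
struct function_body;
struct node;

/* Status attribute attached to each arc, as described in the text. */
enum arc_status {
    ARC_CANDIDATE,   /* considered for inline expansion */
    ARC_REJECTED,    /* rejected for inline expansion   */
    ARC_EXPANDED     /* already inline expanded         */
};

struct arc {
    int              id;        /* unique identifier of this call site  */
    struct node     *caller;    /* function containing the call         */
    struct node     *callee;    /* function being called                */
    long             weight;    /* arc weight, e.g. profiled call count */
    enum arc_status  status;
};

struct node {
    struct function_body *body;    /* the body of the function               */
    long                  weight;  /* node weight, e.g. profiled entry count */
    int                   n_out;   /* number of outgoing arcs                */
    struct arc          **out;     /* outgoing arcs pointing to the callees  */
};
```

Keeping the weight on each arc, rather than only on the callee node, is what lets an expander distinguish a hot call site from a cold one between the same caller/callee pair, which is also why each arc needs its own identifier.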