Optimizations in C++ Compilers

DOI:10.1145/3369754

Article development led by queue.acm.org

A practical journey.

BY MATT GODBOLT

COMPILERS ARE A necessary technology to turn high-level, easier-to-write code into efficient machine code for computers to execute. Their sophistication at doing this is often overlooked. You may spend a lot of time carefully considering algorithms and fighting error messages but perhaps not enough time looking at what compilers are capable of doing.

This article introduces some compiler and code generation concepts, and then shines a torch over a few of the very impressive feats of transformation your compilers are doing for you, with some practical demonstrations of my favorite optimizations. I hope you will gain an appreciation for what kinds of optimizations you can expect your compiler to do for you, and how you might explore the subject further. Most of all, you may learn to love looking at the assembly output and may learn to respect the quality of the engineering in your compilers.

The examples shown here are in C or C++, which are the languages I have had the most experience with, but many of these optimizations are also available in other compiled languages. Indeed, the advent of front-end-agnostic compiler toolkits such as LLVM3 means most of these optimizations work in the exact same way in languages such as Rust, Swift, and D.

I have always been fascinated by what compilers are capable of. I spent a decade making video games where every CPU cycle counted in the war to get more sprites, explosions, or complicated scenes on the screen than our competitors. Writing custom assembly, and reading the compiler output to see what it was capable of, was par for the course. Fast-forward five years and I was at a trading company, having switched out sprites and polygons for fast processing of financial data. Just as before, knowing what the compiler was doing with code helped inform the way we wrote the code.

Obviously, nicely written, testable code is extremely important—especially if that code has the potential to make thousands of financial transactions per second. Being fastest is great, but not having bugs is even more important. In 2012, we were debating which of the new C++11 features could be adopted as part of the canon of acceptable coding practices. When every nanosecond counts, you want to be able to give advice to programmers about how best to write their code without being antagonistic to performance. While experimenting with how code uses new features such as auto, lambdas, and range-based for, I wrote a shell script (a) to run the compiler continuously and show its filtered output. This proved so useful in answering all these "what if?" questions that I went home that evening and created Compiler Explorer.1

Over the years I have been constantly amazed by the lengths to which compilers go in order to take our code and turn it into a work of assembly code art. I encourage all compiled language programmers to learn a little assembly in order to appreciate what their compilers are doing for them. Even if you cannot write it yourself, being able to read it is a useful skill.

All the assembly code shown here is for 64-bit x86 processors, as that is the CPU I'm most familiar with and is one of the most common server architectures. Some of the examples shown here are x86-specific, but in reality, many types of optimizations apply similarly on other architectures. Additionally, I cover only the GCC and Clang compilers, but equally clever optimizations show up on compilers from Microsoft and Intel.

Optimization 101

This is far from a deep dive into compiler optimizations, but some concepts used by compilers are useful to know. In these pages, you will note a running column of examples of scripts and instructions for the processes and operations discussed. All are linked by the corresponding (letter).

Many optimizations fall under the umbrella of strength reduction: taking expensive operations and transforming them to use less expensive ones. A very simple example of strength reduction would be taking a loop with a multiplication involving the loop counter, as shown in (b). Even on today's CPUs, multiplication is a little slower than simpler arithmetic, so the compiler will rewrite that loop to be something like (c).
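The listings referenced as (b) and (c) are not reproduced in this excerpt. As a minimal sketch of the transformation described (with made-up function names, not the article's original listings), the loop and its strength-reduced equivalent might look like:

    // (b) sketch: the loop body multiplies by the loop counter.
    void writeMultiples(int *out, int n) {
        for (int i = 0; i < n; i++)
            out[i] = i * 3;    // multiplication involving the loop counter
    }

    // (c) sketch: roughly what the compiler emits after strength
    // reduction; the multiplication becomes a running total that is
    // kept up to date with an addition.
    void writeMultiplesReduced(int *out, int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            out[i] = total;
            total += 3;        // addition replaces the multiplication
        }
    }

Both functions write the same values; the second merely trades the per-iteration multiply for an add, which is exactly the shape of rewrite the compiler performs on its own.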
Here, strength reduction took a loop involving multiplication and turned it into a sequence of equivalent operations using only addition. There are many forms of strength reduction, more of which show up in the practical examples given later.

Another key optimization is inlining, in which the compiler replaces a call to a function with the body of that function. This removes the overhead of the call and often unlocks further optimizations, as the compiler can optimize the combined code as a single unit. You will see plenty of examples of this later.

Other optimization categories include:

˲ Constant folding. The compiler takes expressions whose values can be calculated at compile time and replaces them with the result of the calculation directly.

˲ Constant propagation. The compiler tracks the provenance of values and takes advantage of knowing that certain values are constant for all possible executions.

˲ Common subexpression elimination. Duplicated calculations are rewritten to calculate once and duplicate the result.

˲ Dead code removal. After many of the other optimizations, there may be areas of the code that have no effect on the output, and these can be removed. This includes loads and stores whose values are unused, as well as entire functions and expressions.

˲ Instruction selection. This is not an optimization as such, but as the compiler takes its internal representation of the program and generates CPU instructions, it usually has a large set of equivalent instruction sequences from which to choose. Making the right choice requires the compiler to know a lot about the architecture of the processor it's targeting.

˲ Loop invariant code movement. The compiler recognizes that some expressions within a loop are constant for the duration of that loop and moves them outside of the loop. On top of this, the compiler is able to move a loop invariant conditional check out of a loop, and then duplicate the loop body twice: once if the condition is true, and once if it is false. This can lead to further optimizations.

˲ Peephole optimizations. The compiler takes short sequences of instructions and looks for local optimizations between those instructions.

˲ Tail call removal. A recursive function that ends in a call to itself can often be rewritten as a loop, reducing call overhead and reducing the chance of stack overflow.

The golden rule for helping the compiler optimize is to ensure it has as much information as possible to make the right optimization decisions. One source of information is your code: If the compiler can see more of your code, it can make a big difference. Of course, the more information a compiler has, the longer it could take to run, so there is a balance to be struck here.

Let's take a look at an example (d), counting the number of elements of a vector that pass some test (compiled with GCC, optimization level 3; https://godbolt.org/z/acm19_count1). If the compiler has no information about testFunc, it will generate an inner loop like (e).
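Listing (d) is also not reproduced in this excerpt. Going by the description, counting the elements of a vector that pass a test implemented by an externally defined testFunc, the source plausibly resembles the following sketch; countIf and the exact types are assumptions, not the article's original code:

    #include <cstddef>
    #include <vector>

    bool testFunc(int value);  // defined elsewhere; the compiler cannot
                               // see its body in this translation unit

    // (d) sketch: count how many elements of vec pass the test.
    std::size_t countIf(const std::vector<int> &vec) {
        std::size_t count = 0;
        for (std::size_t i = 0; i < vec.size(); ++i) {
            if (testFunc(vec[i]))
                ++count;
        }
        return count;
    }

Compiled with GCC at -O3 and with testFunc's body hidden, a function of this shape produces an inner loop like the (e) discussed next.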
To understand this code, it's useful to know that a std::vector<> contains some pointers: one to the beginning of the data; one to the end of the data; and one to the end of the storage currently allocated (f). The size of the vector is not directly stored; it's implied in the difference between the begin() and end() pointers. Note that the calls to vector<>::size() and vector<>::operator[] have been inlined completely.

In the assembly code (e), rbp points to the vector object, and the begin() and end() pointers are therefore QWORD PTR [rbp+0] and QWORD PTR [rbp+8], respectively. The running count is kept in r12d: the generated code does the comparison cmp al, 1, which sets the processor carry flag if testFunc() returned false; otherwise, it clears it. The sbb r12d, -1 instruction then subtracts -1 with borrow, the subtract equivalent of carrying, which also uses the carry flag. This has the desired side effect: If the carry is clear (testFunc() returned true), it subtracts -1, which is the same as adding 1. If the carry is set, it subtracts -1 + 1, which has no effect on the value. Avoiding branches can be advantageous in some cases if the branch is not easily predictable by the processor.

It may seem surprising that the compiler reloads the begin() and end() pointers each loop iteration, and indeed it rederives size() each time too. However, the compiler is forced to do so: it has no idea what testFunc() does and must assume the worst. That is, it must assume calls to testFunc() may cause the vec to be modified. The const reference here doesn't allow any additional optimizations for a couple of reasons: testFunc() may have a non-const reference to vec (for example, through a global variable), and nothing stops it from casting away the const.
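A minimal sketch of the first escape hatch (assumed for illustration, not taken from the article) shows how testFunc() could legally modify the vector even though countIf() holds only a const reference:

    #include <vector>

    std::vector<int> g_vec;  // globally reachable alias of the same vector

    bool testFunc(int value) {
        g_vec.push_back(value);  // mutates the vector being iterated over,
                                 // moving its begin()/end() pointers if
                                 // the storage is reallocated
        return value > 0;
    }

    // Elsewhere: countIf(g_vec). The const reference parameter and the
    // global name refer to the same object, so cached copies of begin(),
    // end(), and size() would be stale after any call the compiler
    // cannot see into.

The const on the parameter is a promise about what countIf() itself does through that reference, not a guarantee about what the rest of the program does to the object.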