Instruction Selection: DAG Tiling, Problems, Pentium ISA, Example Tiling


Instruction Selection
CS412/CS413 Introduction to Compilers, Spring 2007
Tim Teitelbaum
Lecture 32: More Instruction Selection, 15 Apr 07

Instruction Selection
1. Translate low-level IR code into a DAG representation
2. Then find a good tiling of the DAG
   – a disjoint set of tiles that covers the DAG
   – Maximal Munch algorithm
   – dynamic programming algorithm

DAG Tiling
• Goal: find a good covering of the DAG with tiles
• Problem: need to know which variables are in registers
• Assume abstract assembly:
   – a machine with an infinite number of registers
   – temporary/local variables stored in registers
   – parameters/heap variables: use memory accesses

Problems
• Classes of registers
   – registers may have specific purposes
   – example: the Pentium multiply instruction
     - multiplies register eax by the contents of another register
     - stores the result in eax (low 32 bits) and edx (high 32 bits)
     - needs extra instructions to move values into eax
• Two-address machine instructions
   – three-address low-level code
   – need multiple machine instructions for a single tile
• CISC versus RISC
   – complex instruction sets => many possible tiles and tilings
   – example: multiple addressing modes (CISC) versus load/store architectures (RISC)

Pentium ISA
• Pentium: a two-address CISC architecture
• Multiple addressing modes: source operands may be
   – immediate value: imm
   – register: reg
   – indirect address: [reg], [imm], [reg+imm]
   – indexed address: [reg+reg'], [reg+imm*reg'], [reg+imm*reg'+imm']
• Destination operands: the same, except immediate values

Example Tiling
• Consider t = t + i, where t is a temporary variable and i is a parameter
  (the DAG assigns to t the sum of t and a load of i, whose address is computed from ebp and the offset 20)
• Need new temporary registers between tiles (unless the operand node is labeled with a temporary)
• Resulting code:
    mov %ebp, t0
    sub $20, t0
    mov 0(t0), t1
    add t1, t

Tiles
• Tiles capture the compiler's understanding of the instruction set
• Each tile: a sequence of machine instructions that matches a subgraph of the DAG
• May need additional move instructions
• Tiling = cover the DAG with tiles

Some Tiles
• store of the constant 10 at address t1+t2:
    mov $10, 0(t1,t2)
• assignment t2 = t1 + 1:
    mov t1, t2
    add $1, t2
• addition t3 = t1 + t2:
    mov t2, t3
    add t1, t3
• multiplication t3 = t1 * t2 (fixed registers):
    mov t1, %eax
    mul t2
    mov %eax, t3

Conditional Branches
• How to tile a conditional jump?
• Fold the comparison into the jump tile when possible:
   – tjump(t1, L):
       test t1, t1
       jnz L
   – tjump(t1 == t2, L):
       cmp t1, t2
       je L
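The DAGs tiled in these examples can be represented with a handful of node classes. The Java sketch below is a minimal, hypothetical rendering (the class names, fields, and the frame offset used for i are illustrative assumptions, not the course's actual low-IR classes); it builds the DAG for the t = t + i example above.

    // Hypothetical low-IR node classes; names and shapes are illustrative, not the course's classes.
    abstract class Node {
        Node[] kids() { return new Node[0]; }
    }
    class Temp extends Node {                       // temporary or register, e.g. t, ebp
        final String name;
        Temp(String name) { this.name = name; }
    }
    class Const extends Node {                      // integer constant
        final int value;
        Const(int value) { this.value = value; }
    }
    class BinOp extends Node {                      // +, -, *, ==, <, ...
        final String op; final Node left, right;
        BinOp(String op, Node left, Node right) { this.op = op; this.left = left; this.right = right; }
        Node[] kids() { return new Node[] { left, right }; }
    }
    class Load extends Node {                       // value loaded from a memory address
        final Node addr;
        Load(Node addr) { this.addr = addr; }
        Node[] kids() { return new Node[] { addr }; }
    }
    class Move extends Node {                       // dst = src; dst is a Temp or an address expression
        final Node dst, src;
        Move(Node dst, Node src) { this.dst = dst; this.src = src; }
        Node[] kids() { return new Node[] { dst, src }; }
    }
    class BuildExample {
        public static void main(String[] args) {
            // t = t + i, with the parameter i loaded from some frame offset (the offset is illustrative)
            Node addrOfI = new BinOp("+", new Temp("ebp"), new Const(20));
            Node root = new Move(new Temp("t"), new BinOp("+", new Temp("t"), new Load(addrOfI)));
            System.out.println("built the DAG for t = t + i: " + root);
        }
    }

Because nodes are plain objects, a node with several parents (as in the DAG examples that follow) is simply the same object shared by both parents.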
Maximal Munch Algorithm
• Maximal Munch = find the largest tiles (a greedy algorithm)
• Start from the top of the tree
• Find the largest tile that matches the top node
• Tile the remaining subtrees recursively
  (the slide works through an example: greedily tiling a store whose operands are built from loads at ebp+8 and ebp+12)

DAG Representation
• DAG: a node may have multiple parents
• Algorithm: the same, but a node with multiple parents occurs inside a tile only if all of its parents are in the tile

Example
x = x + 1;
• The DAG stores, at address ebp+8, the value loaded from ebp+8 plus 1; the address node ebp+8 is shared by the load and the store
• One tiling, using small tiles:
    mov 8(%ebp), t1
    mov t1, t2
    add $1, t2
    mov t2, 8(%ebp)

Alternate (CISC) Tiling
x = x + 1;
• A single large tile (r/m32 destination plus constant) covers the whole DAG:
    add $1, 8(%ebp)

ADD Expression Tiles
• Intel architecture ADD forms:
    add imm32, %eax
    add imm32, r/m32
    add imm8, r/m32
    add r32, r/m32
    add r/m32, r32
• Tile for t1 = t2 + r/m32:
    mov t2, t1
    add r/m32, t1
• Tile for t1 = t2 + const:
    mov t2, t1
    add imm32, t1

ADD Statement Tiles
• Assignment tiles whose destination is an r/m32 operand map directly onto a single add, e.g.
   – r/m32 = r/m32 + const  =>  add imm32, r/m32
   – r/m32 = r/m32 + r32    =>  add r32, r/m32

Designing Tiles
• Only add tiles that are useful to the compiler
• Many instructions will be too hard to use effectively or will offer no advantage
• Need tiles for all single-node trees to guarantee that every tree can be tiled, e.g. for a bare + node (t1 = t2 + t3):
    mov t2, t1
    add t3, t1

More Handy Tiles
• The lea instruction computes a memory address without performing a load
   – tile for t3 = t1 + t2:
       lea (t1,t2), t3
   – tile for t3 = c1 + t1 + c2*t2:
       lea c1(t1,t2,c2), t3

Matching Jump for RISC
• As defined in lecture, we have tjump(cond, destination) and fjump(cond, destination)
• Our tjump/fjump translates easily to RISC ISAs that have an explicit comparison result, e.g. on MIPS, tjump(t2 < t3, L) becomes:
    cmplt t2, t3, t1
    br t1, L

Condition Code ISA
• Pentium: the condition is encoded in the jump instruction
• cmp: compare the operands and set the flags (set condition codes)
• jcc: conditional jump according to the flags (test condition codes)
• Example, tjump(t1 < t2, L):
    cmp t1, t2
    jl L

Fixed-Register Instructions
• mul r/m32: multiply by the value in register eax; result: low 32 bits in eax, high 32 bits in edx
• jecxz L: jump to label L if ecx is zero
• add r/m32, %eax: add to eax
• No fixed registers in the low IR except the frame pointer
• Need extra move instructions

Implementation
• Maximal Munch: start from the top node
• Find the largest tile matching the top node and the nodes below it that the tile covers
• Invoke recursively on all children of the tile
• Generate code for this tile
• Code for the children will have been generated already in the recursive calls
• How to find matching tiles?
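Phrased generically, the greedy procedure just described fits in a few lines once a tile can report whether it matches a node and which subtrees it leaves uncovered. The sketch below is illustrative only: the Tile interface, its methods, and the temporary-naming scheme are assumptions layered on the hypothetical node classes sketched earlier; the slides' own approach, shown next, instead hard-codes the matching in per-class munch() methods.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical tile abstraction: a DAG pattern paired with the instructions that implement it.
    interface Tile {
        int match(Node n);                // number of nodes covered if the tile matches at n, else -1
        List<Node> uncovered(Node n);     // operand subtrees the tile leaves uncovered
        // Emit this tile's instructions; 'operands' holds the temps with the operand values,
        // 'result' is the temp that should receive this tile's value (ignored by statement tiles).
        void emit(Node n, List<String> operands, String result, StringBuilder out);
    }

    class MaximalMunch {
        private final List<Tile> tiles;
        private int nextTemp = 0;
        MaximalMunch(List<Tile> tiles) { this.tiles = tiles; }

        // Tile the tree rooted at n, append its code to out, and return the temp holding its value.
        String munch(Node n, StringBuilder out) {
            Tile best = null;
            int bestSize = 0;
            for (Tile t : tiles) {                      // greedy choice: the largest matching tile wins
                int size = t.match(n);
                if (size > bestSize) { bestSize = size; best = t; }
            }
            if (best == null) throw new IllegalStateException("no tile matches " + n);
            List<String> operands = new ArrayList<>();
            for (Node child : best.uncovered(n))        // tile the remaining subtrees recursively
                operands.add(munch(child, out));
            String result = "t" + (nextTemp++);
            best.emit(n, operands, result, out);        // the children's code has already been emitted
            return result;
        }
    }

Keeping the tiles in a plain list makes the greedy choice explicit: the largest match wins regardless of how fast its instructions are, which is exactly the weakness discussed further below.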
Matching Tiles
• One approach: hard-code the matching in a munch() method on each low-IR class, e.g.

    abstract class LIR_Stmt {
        abstract Assembly munch();
    }

    class LIR_Assign extends LIR_Stmt {
        LIR_Expr src, dst;
        Assembly munch() {
            // tile for dst = dst + e, where dst is an r/m32 operand
            if (src instanceof LIR_Plus &&
                ((LIR_Plus) src).lhs.equals(dst) &&
                is_regmem32(dst)) {
                Assembly e = ((LIR_Plus) src).rhs.munch();
                return e.append(new AddIns(dst, e.target()));
            }
            else if ...
        }
    }

Tile Specifications
• The previous approach is simple and efficient, but hard-codes the tiles and their priorities
• Another option: explicitly create data structures representing each tile in the instruction set
   – tiling is then performed by a generic tree-matching and code generation procedure
   – such descriptions can be generated from an instruction set description: code generator generators
   – for RISC instruction sets, this is over-engineering

How Good Is It?
• Very rough approximation on modern pipelined architectures: execution time is the number of tiles
• Maximal Munch finds a locally optimal tiling (two adjacent tiles can never be combined into one) but not necessarily a globally optimum tiling (the least cost over all covers)
• Metric used: tile size

Improving Instruction Selection
• Because it is greedy, Maximal Munch does not necessarily generate the best code
   – it always selects the largest tile, but not necessarily the fastest instructions
   – it may pull nodes up into tiles inappropriately; it may be better to leave them below (use smaller tiles above and larger, or faster, tiles below)
• Better to use dynamic programming, an optimization technique that uses memoization to ensure that subproblems are never solved more than once

Timing Cost Model
• Idea: associate a cost with each tile (say, proportional to the number of cycles it takes to execute)
   – this may not be a good metric on modern architectures
• Total execution time is the sum of the costs of all tiles
• Example: for x = x + 1 tiled as a load (cost 2), an add of 1 (cost 1), and a store (cost 2), the total cost is 5

Finding the Globally Optimum Tiling
• Goal: find a minimum total cost tiling of the DAG
• Algorithm: for every node, find a minimum total cost tiling of that node and the subgraph below it
• Lemma: given minimum cost tilings of all nodes in the subgraph, we can find a minimum cost tiling of the node by trying out all possible tiles matching the node
• Therefore: start from the leaves and work upward to the top node

Dynamic Programming: a[i]
• The address computation for a[i] (with a at ebp+8, i at ebp+12, and 4-byte elements) tiles into:
    mov 8(%ebp), t1
    mov 12(%ebp), t2
    mov (t1,t2,4), t3
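The recursive implementation described next needs each tile to carry an explicit cost, as in the Timing Cost Model slide above. A minimal sketch under the same assumptions as the earlier hypothetical Tile interface (the CostedTile name, the example tile, and its cost of 2 are illustrative, not the course's data structures):

    import java.util.List;

    // Tiles extended with an explicit cost, say a rough cycle count (the model is illustrative).
    interface CostedTile extends Tile {
        int cost();        // cost of this tile alone, excluding the subtrees it leaves uncovered
    }

    // Example: the fallback tile for a bare + node, covering a single node.
    class AddRegRegTile implements CostedTile {
        public int cost() { return 2; }                                   // mov + add (cost is illustrative)
        public int match(Node n) {
            return (n instanceof BinOp && ((BinOp) n).op.equals("+")) ? 1 : -1;
        }
        public List<Node> uncovered(Node n) {
            BinOp b = (BinOp) n;
            return List.of(b.left, b.right);
        }
        public void emit(Node n, List<String> operands, String result, StringBuilder out) {
            out.append("mov ").append(operands.get(0)).append(", ").append(result).append('\n');
            out.append("add ").append(operands.get(1)).append(", ").append(result).append('\n');
        }
    }

With one such single-node tile per operator, every tree can be tiled, which is the guarantee asked for on the Designing Tiles slide.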
Recursive Implementation
• Traverse the DAG recursively and, for each node n, record <t, c>, where
   – t is the best tile to use for the subgraph rooted at n
   – c is the total cost of tiling the subgraph rooted at n if t is chosen
• To compute <t, c> for node n
   – consider every tile t' that matches at n, and compute the total cost c' = cost of tile t' + sum of the costs of tiling the subgraphs rooted at the leaves of t' (costs that can be computed recursively and memoized)
   – store the lowest-cost tile t' and its total cost c'
• To emit code, traverse the least-cost tiles recursively and emit code in postorder

Memoization
• Cache the best tiling and its cost at each node, e.g.

    class IR_Move extends IR_Stmt {
        IR_Expr src, dst;
        Assembly best;   // best tiling found so far, initialized to null
        int optTileCost() {
            if (best != null) return best.cost();
            if (src instanceof IR_Plus &&
                ((IR_Plus) src).lhs.equals(dst) &&
                is_regmem32(dst)) {
                ...

Problems with the Model
• Modern processors:
   – execution time is not the sum of the tile times
   – instruction order matters
• Processors pipeline instructions and execute different pieces of instructions in parallel
• A bad ordering (e.g. an instruction that immediately uses the result of the previous one) can stall the pipeline
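For reference, the pieces above (costed tiles, the per-node <t, c> record, and memoization) fit together as in the sketch below, written against the hypothetical CostedTile interface from the earlier sketches rather than the course's IR classes; it illustrates the pattern and is not the course's code.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative memoized dynamic-programming tiler over the hypothetical Node/CostedTile types.
    class OptimumTiler {
        // The <t, c> pair recorded per node: best tile and total cost of the subgraph below it.
        static class Choice { CostedTile tile; int cost; }

        private final List<CostedTile> tiles;
        private final Map<Node, Choice> memo = new HashMap<>();      // one entry per DAG node
        OptimumTiler(List<CostedTile> tiles) { this.tiles = tiles; }

        // Minimum total cost of tiling the subgraph rooted at n; each node is solved at most once.
        Choice bestTiling(Node n) {
            Choice cached = memo.get(n);
            if (cached != null) return cached;                       // subproblem already solved
            Choice best = new Choice();
            best.cost = Integer.MAX_VALUE;
            for (CostedTile t : tiles) {
                if (t.match(n) < 0) continue;                        // try every tile that matches at n
                int total = t.cost();
                for (Node leaf : t.uncovered(n))                     // plus optimal costs of the operand subtrees
                    total += bestTiling(leaf).cost;
                if (total < best.cost) { best.cost = total; best.tile = t; }
            }
            if (best.tile == null) throw new IllegalStateException("no tile matches " + n);
            memo.put(n, best);
            return best;
        }

        // Emit code for the chosen tiles in postorder (operands first), returning the result temp.
        String emit(Node n, StringBuilder out, int[] nextTemp) {
            Choice c = bestTiling(n);
            List<String> operands = new ArrayList<>();
            for (Node leaf : c.tile.uncovered(n))
                operands.add(emit(leaf, out, nextTemp));
            String result = "t" + (nextTemp[0]++);
            c.tile.emit(n, operands, result, out);
            return result;
        }
    }

The per-tile cost model this minimizes is exactly the one the last slide criticizes: on a pipelined processor the order of the emitted instructions, not just the sum of tile costs, determines the running time.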