Design and Implementation of Compiler

Total Pages: 16

File Type: PDF, Size: 1020 KB

Copyright © 2009, New Age International (P) Ltd., Publishers. All rights reserved. No part of this ebook may be reproduced in any form, by photostat, microfilm, xerography, or any other means, or incorporated into any information retrieval system, electronic or mechanical, without the written permission of the publisher. All inquiries should be emailed to [email protected]. ISBN (13): 978-81-224-2865-0. New Age International (P) Limited, Publishers, 4835/24, Ansari Road, Daryaganj, New Delhi - 110002. Visit us at www.newagepublishers.com

Dedicated to Dear Shantanu

PREFACE

This book attempts to provide a unified overview of the broad field of compilers and their practical implementation. The organization of the book reflects an attempt to break this massive subject into comprehensible parts and to build, piece by piece, a survey of the state of the art. The book emphasizes basic principles and topics of fundamental importance concerning the design issues and architecture of the field, and provides detailed discussion of leading-edge topics. Although its scope is broad, a number of basic principles appear repeatedly as themes that unify the field.

The book addresses the full spectrum of compiler design. The emphasis is on solving the problems universally encountered in designing a compiler, regardless of the source language or target machine, and alternative approaches to meeting specific design requirements are examined. The concepts of designing a compiler, and its step-by-step practical implementation in the 'C' language, are the main focus of the book.

The book is intended for students who have completed a programming-based, two-semester introductory computer science sequence in which they have written programs that implement basic algorithms, manipulate discrete structures, and apply basic data structures, and who have also studied the theory of computation.

An important feature of the book is its collection of problems and exercises. Across all the chapters, the book includes over 200 problems, almost all of them developed and class-tested in homework and exams as part of our teaching of the course. We view the problems as a crucial component of the book, and they are structured in keeping with our overall approach to the material.

The book is divided according to the different phases of a compiler. Each chapter includes problems and suggestions for further reading. The chapters are sufficiently modular to provide a great deal of flexibility in the design of a course. The first and second chapters give a brief introduction to language and compilers that is important for understanding the rest of the book. The treatment of the different phases of a compiler covers the various aspects of designing a language translator in depth. To facilitate easy understanding, the text and problems have been kept in a simple form, and the concepts are supported by programming exercises in 'C'. This book should certainly help readers understand the subject better.

For many instructors, an important component of a compiler course is a project or set of projects by which students get hands-on experience to reinforce concepts from the text. This book provides an unparalleled degree of support for including a projects component in the course.
The two projects on compilers in Chapter 12 will help students gain a deep understanding of the subject. The book is intended for both academic and professional audiences. For the professional interested in this field, the book serves as a basic reference volume suitable for self-study. It is primarily designed for use in a first undergraduate course on compilers, and it can also be used as the basis for an introductory graduate course. The book covers the syllabus of U.P. Technical University, Lucknow, and other universities of India. As a textbook it can be used for a one-semester course.

It is said, "To err is human, to forgive divine." In this light we hope that the shortcomings of the book will be forgiven. At the same time, the authors are open to any kind of constructive criticism and suggestions for further improvement. Intelligent suggestions are welcome, and we will try our best to incorporate such valuable suggestions in subsequent editions of this book.

AUTHORS

ACKNOWLEDGEMENTS

Once again it is a pleasure to acknowledge our gratitude to the many persons involved, directly or indirectly, in the production of this book. First of all, we would like to thank almighty God, who gave us the inspiration to take up this task. Prof. S.K. Bajpai of the Institute of Engineering and Technology, Lucknow, and Shantanu, a student at S.R.M.S. CET, Bareilly, were key collaborators in the early conceptualization of this text.

For the past several years, the development of the book has benefited greatly from the feedback and advice of colleagues at various institutes who have used prepublication drafts for teaching. The course staffs we have had in teaching the subject have been tremendously helpful in the formulation of this material. We thank our undergraduate and graduate teaching assistants at various institutes, who have provided valuable insights, suggestions, and comments on the text. We also thank all the students in these classes who have provided comments and feedback on early drafts of the book over the years. Our special thanks to Shantanu for producing supplementary material associated with the book, which promises to greatly extend its utility to future instructors.

We wish to express our deep sense of gratitude to Mrs. Veena Mathur, Executive Chairperson, and Prof. Manish Sharma and Dr. Neeraj Saxena of the RBMI Group of Institutions; Mr. Devmurtee, Chairman, and Prof. Prabhakar Gupta and Saket Aggrawal of SRMS CET, Bareilly; Mr. Zuber Khan and Mr. Ajai Indian of IIMS, Bareilly; and Dr. S.P. Tripathi and Dr. M.H. Khan of IET, Lucknow, for their kind co-operation and motivation in writing this book. We would particularly like to thank our friends Dr. Verma S. of the Indian Institute of Information Technology, Allahabad, and Prof. Anupam Shrivastav of IFTM, Moradabad, for numerous illuminating conversations and much stimulating correspondence. We are grateful to M/s New Age International (P) Limited, Publishers, and the staff of the editorial department for their encouragement and support throughout this project.

Finally, we thank our parents and families: Dr. Deepali Budhoria, Mrs. Nishi Sharma, Mrs. Pragati Varshney, and kids Ritunjay, Pragunjay and sweet Keya. We appreciate their support, patience and many other contributions more than we can express in any acknowledgements here.
AUTHORS

CONTENTS

Preface vii
Acknowledgements ix

CHAPTER 1: Introduction to Language (1–40)
1.1 Chomsky Hierarchy 2
1.2 Definition of Grammar 3
1.2.1 Some Other Definitions 3
1.3 Regular Languages 4
1.3.1 Algebraic Operation on RE 5
1.3.2 Identities for RE 5
1.3.3 Algorithm 1.1 (Anderson Theorem) 6
1.4 Regular Definitions 6
1.4.1 Notational Shorthand 7
1.4.2 Closure Properties for Regular Language 7
1.5 Automata 8
1.5.1 Definition of an Automata 8
1.5.2 Characteristics of an Automata 8
1.5.3 Finite Automata 8
1.5.4 Mathematical Model 9
1.6 Context Free Languages 19
1.6.1 Closure Properties 19
1.6.2 Derivations/Parse Tree 19
1.6.3 Properties of Context-free Languages 21
1.6.4 Notational Shorthand 22
1.6.5 Pushdown Automaton 22
1.7 Context Sensitive Languages 26
1.7.1 Linear Bounded Automata (LBA) 26
1.8 Recursively Enumerable Languages 26
1.8.1 Turing Machines 26
1.9 Programming Language 26
1.9.1 History 27
1.9.2 Purpose 29
1.9.3 Specification 30
1.9.4 Implementation 30
Examples 32
Tribulations 36
Further Readings and References 39

CHAPTER 2: Introduction to Compiler (41–77)
2.1 Introduction to Compiler 42
2.1.1 What is the Challenge? 44
2.2 Types of Compilers 44
2.2.1 One-pass versus Multi-pass Compilers 44
2.2.2 Where Does the Code Execute? 47
2.2.3 Hardware Compilation 47
2.3 Meaning of Compiler Design 47
2.3.1 Front End 48
2.3.2 Back End 48
2.4 Computational Model: Analysis and Synthesis 49
2.5 Phases of Compiler and Analysis of Source Code 49
2.5.1 Source Code 49
2.5.2 Lexical Analysis 50
2.5.3 Syntax Analysis 51
2.5.4 Semantic Analysis 52
2.5.5 Intermediate Code Generation 52
2.5.6 Code Optimization 53
2.5.7 Code Generation 54
2.5.8 Out Target Program 56
2.5.9 Symbol Table 56
2.6 Cousins/Relatives of the Compiler 57
2.6.1 Preprocessor 57
2.6.2 Assembler 58
2.6.3 Loader and Linker 58
2.7 Interpreter 60
2.7.1 Byte Code Interpreter 60
2.7.2 Interpreter v/s Compiler 60
2.8 Abstract Interpreter 60
2.8.1 Incentive Knowledge 60
2.8.2 Abstract Interpretation of Computer Programs 61
2.8.3 Formalization 61
2.8.4 Example of Abstract Domain 61
2.9 Case Tool: Compiler Construction Tools 62
2.10 A Simple Compiler Example (C Language) 64
2.11 Decompilers 64
2.11.1 Phases 64
2.12 Just-in-Time Compilation 66
2.13 Cross Compiler 67
2.13.1 Uses of Cross Compilers 68
2.13.2 GCC and Cross Compilation 68
2.13.3 Canadian Cross 69
2.14 Bootstrapping 69
2.15 Macro 71
2.15.1 Quoting the Macro Arguments 71
2.16 X-Macros 72
Examples 73
Tribulations 75
Further Readings and References 76

CHAPTER 3: Introduction to Source Code and Lexical Analysis (79–99)
3.1 Source Code 80
3.1.1 Purposes 80
3.1.2 Organization 80
3.1.3 Licensing 81
3.1.4 Quality 81
3.1.5 Phrase Structure and Lexical Structure 82
3.1.6 The
Recommended publications
  • Expression Rematerialization for VLIW DSP Processors with Distributed Register Files
Expression Rematerialization for VLIW DSP Processors with Distributed Register Files. Chung-Ju Wu, Chia-Han Lu, and Jenq-Kuen Lee. Department of Computer Science, National Tsing-Hua University, Hsinchu 30013, Taiwan. {jasonwu,chlu}@pllab.cs.nthu.edu.tw, [email protected]

Abstract. Spill code is the overhead of memory load/store behavior when the available registers are not sufficient to map live ranges during register allocation. Previous work has proposed techniques to reduce spill code for a unified register file. To reduce power and cost in the design of VLIW DSP processors, distributed register files and multi-bank register architectures are being adopted to reduce the number of read/write ports between functional units and registers. This presents new challenges for devising compiler optimization schemes for such architectures. This paper addresses the issue of reducing spill code via rematerialization for a VLIW DSP processor with distributed register files. Rematerialization is a strategy by which the register allocator determines whether it is cheaper to recompute a value than to use a memory load/store. We propose a solution that exploits the characteristics of distributed register files, where there is a chance to balance or split live ranges. By heuristically estimating the register pressure for each register file, we treat the other files as optional spill locations rather than spilling to memory. The choice of spill location can preserve an expression result and keep the value alive in a different register file, which increases the opportunity for expression rematerialization and effectively reduces spill code.
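To make the trade-off concrete, here is a source-level C sketch of the idea; the function names are invented, and this is only an analogy for what the paper's allocator does internally, not its actual algorithm:

    /* Spill version: the address base + i is computed once and must be
       kept live across intervening computation; under register pressure
       the allocator would spill it to memory and reload it at the reuse. */
    int use_with_spill(int *base, int i) {
        int *p = base + i;
        int a = *p;
        int b = a * a + i;
        return *p + b;            /* reuse forces p to stay live (or spill) */
    }

    /* Rematerialized version: recompute the cheap expression at the point
       of reuse instead, freeing the register in between; one add beats a
       store/load pair. */
    int use_with_remat(int *base, int i) {
        int a = *(base + i);
        int b = a * a + i;
        return *(base + i) + b;   /* value rematerialized, not reloaded */
    }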
  • User-Directed Loop-Transformations in Clang
User-Directed Loop-Transformations in Clang. Michael Kruse and Hal Finkel, Argonne Leadership Computing Facility, Argonne National Laboratory, Argonne, USA. [email protected], hfi[email protected]

Abstract—Directives for the compiler such as pragmas can help programmers to separate an algorithm's semantics from its optimization. This keeps the code understandable and easier to optimize for different platforms. Simple transformations such as loop unrolling are already implemented in most mainstream compilers. We recently submitted a proposal to add generalized loop transformations to the OpenMP standard. We are also working on an implementation in LLVM/Clang/Polly to show its feasibility and usefulness. The current prototype allows applying patterns common to matrix-matrix multiplication optimizations.

Index Terms—OpenMP, Pragma, Loop Transformation, C/C++, Clang, LLVM, Polly

I. MOTIVATION. Almost all processor time is spent in some kind of loop. Among today's directives, only #pragma unroll has broad support; #pragma ivdep, made popular by icc and Cray to help vectorization, is mimicked by other compilers as well, but with different interpretations of its meaning. However, no compiler allows applying multiple transformations on a single loop systematically. In addition to straightforward trial-and-error execution-time optimization, code-transformation pragmas can be useful for machine-learning-assisted autotuning. The OpenMP approach is to make the programmer responsible for the semantic correctness of the transformation. This unfortunately makes it hard for an autotuner, which only measures the timing difference without understanding the code; such an autotuner would likely suggest transformations that make the program return wrong results or crash.
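As a small illustration of such user-directed transformations, the following C sketch uses two directives that do exist in Clang today (#pragma clang loop and #pragma unroll); the loop bodies and function names are invented for the example, and exact behavior varies by compiler version:

    #define N 1024

    void saxpy(float a, const float *x, float *y) {
        /* Request vectorization with width 4, interleaved by 2. */
        #pragma clang loop vectorize_width(4) interleave_count(2)
        for (int i = 0; i < N; ++i)
            y[i] = a * x[i] + y[i];
    }

    void scale(float a, float *y) {
        /* Classic unrolling hint; icc/Cray-style #pragma ivdep plays a
           similar advisory role for vectorization in other compilers. */
        #pragma unroll 4
        for (int i = 0; i < N; ++i)
            y[i] *= a;
    }

Note that each pragma applies to a single loop; the paper's point is precisely that no compiler yet lets you stack several transformations on the same loop systematically.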
  • Elimination of Memory-Based Dependences for Loop-Nest Optimization and Parallelization
Elimination of Memory-Based Dependences for Loop-Nest Optimization and Parallelization: Evaluation of a Revised Violated Dependence Analysis Method on a Three-Address Code Polyhedral Compiler. Konrad Trifunovic (1), Albert Cohen (1), Razya Ladelsky (2), and Feng Li (1). (1) INRIA Saclay – Île-de-France and LRI, Paris-Sud 11 University, Orsay, France: {albert.cohen, konrad.trifunovic, [email protected]. (2) IBM Haifa Research, Haifa, Israel: [email protected]

Abstract. In order to preserve the legality of loop-nest transformations and parallelization, data dependences need to be analyzed. Memory dependences come in two varieties: they are either data-flow dependences or memory-based dependences. While data-flow dependences must be satisfied in order to preserve the correct order of computations, memory-based dependences are induced by the reuse of a single memory location to store multiple values. Memory-based dependences reduce the degrees of freedom for loop transformations and parallelization. While systematic array-expansion techniques exist to remove all memory-based dependences, the overhead on memory footprint and the detrimental impact on register-level reuse can be catastrophic. Much care is needed when eliminating memory-based dependences, and this is particularly essential for polyhedral compilation on a three-address code representation like the GRAPHITE pass of GCC. We propose and evaluate a technique allowing a compiler to ignore some memory-based dependences while checking the legality of a given affine transformation. This technique does not involve any array expansion. When it is not sufficient to expose data parallelism, it can be used to compute the minimal set of scalars and arrays that should be privatized.
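The distinction is easy to see in a few lines of C. This sketch is mine, not the paper's: a scalar whose reuse creates memory-based dependences, and the privatized form in which only the true data-flow dependence remains:

    /* The scalar t is reused by every iteration: each write to t must
       respect the previous iteration's read (anti-dependence) and write
       (output dependence), serializing the loop even though no value
       actually flows between iterations. */
    void before(const float *a, float *b, int n) {
        float t;
        for (int i = 0; i < n; ++i) {
            t = a[i] * 2.0f;
            b[i] = t + 1.0f;   /* the only data-flow dependence */
        }
    }

    /* Privatizing t gives each iteration its own copy; the memory-based
       dependences vanish, the loop parallelizes, and no array expansion
       was needed. */
    void after(const float *a, float *b, int n) {
        for (int i = 0; i < n; ++i) {
            float t = a[i] * 2.0f;
            b[i] = t + 1.0f;
        }
    }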
  • Generalizing Loop-Invariant Code Motion in a Real-World Compiler
Imperial College London, Department of Computing. Generalizing loop-invariant code motion in a real-world compiler. Author: Paul Colea; Supervisor: Fabio Luporini; Co-supervisor: Prof. Paul H. J. Kelly. MEng Computing Individual Project, June 2015.

Abstract. Motivated by the perpetual goal of automatically generating efficient code from high-level programming abstractions, compiler optimization has developed into an area of intense research. Apart from general-purpose transformations which are applicable to all or most programs, many highly domain-specific optimizations have also been developed. In this project, we extend such a domain-specific compiler optimization, initially described and implemented in the context of finite element analysis, to one that is suitable for arbitrary applications. Our optimization is a generalization of loop-invariant code motion, a technique which moves invariant statements out of program loops. The novelty of the transformation is its ability to avoid more redundant recomputation than normal code motion, at the cost of additional storage space. This project provides a theoretical description of the above technique which is fit for general programs, together with an implementation in LLVM, one of the most successful open-source compiler frameworks. We introduce a simple heuristic-driven profitability model which manages to successfully safeguard against potential performance regressions, at the cost of missing some speedup opportunities. We evaluate the functional correctness of our implementation using the comprehensive LLVM test suite, passing all of its 497 whole-program tests. The results of our performance evaluation using the same set of tests reveal that generalized code motion is applicable to many programs, but that consistent performance gains depend on an accurate cost model.
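For readers new to the base transformation, here is a minimal C sketch of ordinary loop-invariant code motion (the project's generalized variant goes further, trading extra storage for less recomputation); the function names are illustrative:

    /* scale * offset does not change inside the loop... */
    void before(float *a, int n, float scale, float offset) {
        for (int i = 0; i < n; ++i)
            a[i] = a[i] * (scale * offset);   /* recomputed every iteration */
    }

    /* ...so it can be hoisted out and computed once. */
    void after(float *a, int n, float scale, float offset) {
        float k = scale * offset;             /* hoisted invariant */
        for (int i = 0; i < n; ++i)
            a[i] = a[i] * k;
    }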
  • A General Compilation Algorithm to Parallelize and Optimize Counted Loops with Dynamic Data-Dependent Bounds (Jie Zhao, Albert Cohen)
A general compilation algorithm to parallelize and optimize counted loops with dynamic data-dependent bounds. Jie Zhao and Albert Cohen, INRIA & DI, École Normale Supérieure, 45 rue d'Ulm, 75005 Paris. fi[email protected]. IMPACT 2017 - 7th International Workshop on Polyhedral Compilation Techniques, Jan 2017, Stockholm, Sweden, pp. 1-10. HAL Id: hal-01657608, https://hal.inria.fr/hal-01657608, submitted on 7 Dec 2017. (HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.)

ABSTRACT. We study the parallelizing compilation and loop nest optimization of an important class of programs where counted loops have a dynamically computed, data-dependent upper bound. Such loops are amenable to a wider set of transformations [...] iterating until a dynamically computed, data-dependent upper bound. Such bounds are loop invariants, but often recomputed in the immediate vicinity of the loop they control; for example, their definition may take place in the immediately enclosing loop.
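A tiny C example of the loop class in question may help; this is my illustration, not code from the paper. The inner loop is counted, but its bound is computed from data in the immediately enclosing loop, exactly the pattern described above:

    /* Sum each row of a ragged array. The inner bound m is data-dependent,
       yet loop-invariant with respect to the inner loop itself. */
    void row_sums(const int *row_len, float **rows, float *out, int n) {
        for (int i = 0; i < n; ++i) {
            int m = row_len[i];           /* dynamic, data-dependent bound */
            float s = 0.0f;
            for (int j = 0; j < m; ++j)   /* counted loop over that bound */
                s += rows[i][j];
            out[i] = s;
        }
    }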
  • Loop Transformations and Parallelization
Loop transformations and parallelization. Claude Tadonki, LAL/CNRS/IN2P3, University of Paris-Sud. [email protected]. December 2010.

Introduction. Most of the time, the most time-consuming part of a program is its loops, so loop optimization is critical in high-performance computing. Depending on the target architecture, the goals of loop transformations are: improving data reuse and data locality; making efficient use of the memory hierarchy; reducing the overheads associated with executing loops; exploiting the instruction pipeline; and maximizing parallelism. Loop transformations can be performed at different levels by the programmer, the compiler, or specialized tools. At a high level, some well-known transformations are commonly considered: loop interchange, loop (node) splitting, loop unswitching, loop reversal, loop fusion, loop inversion, loop skewing, loop fission, loop vectorization, loop blocking, loop unrolling, and loop parallelization.

Dependence analysis. Extracting and analyzing the dependences of a computation from its polyhedral model is a fundamental step toward loop optimization or scheduling. Definition: for a given variable V and given indexes I1 and I2, if the computation of V(I1) requires the value of V(I2), then I1 − I2 is called a dependence vector for variable V. Drawing all the dependence vectors within the computation polytope yields the so-called dependence diagram. Example: the dependence vectors are (1, 0), (0, 1), (−1, 1).

Scheduling. The computation on the entire domain of a given loop can be performed following any valid schedule. A timing function t_V for variable V yields a valid schedule if and only if

    t(x) > t(x − d)  for all d in D_V,        (1)

where D_V is the set of all dependence vectors for variable V.
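As a worked instance of one transformation from the list above, here is loop interchange in C, assuming row-major N x N arrays; legality follows from the dependence test in (1), and in this example the iterations are independent:

    #define N 512

    /* Column-wise traversal of a row-major array: stride-N accesses. */
    void before(double a[N][N], double b[N][N]) {
        for (int j = 0; j < N; ++j)
            for (int i = 0; i < N; ++i)
                a[i][j] = 2.0 * b[i][j];
    }

    /* After interchange: stride-1 accesses, much better data locality. */
    void after(double a[N][N], double b[N][N]) {
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                a[i][j] = 2.0 * b[i][j];
    }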
  • Foundations of Scientific Research
2012. FOUNDATIONS OF SCIENTIFIC RESEARCH. N. M. Glazunov, National Aviation University, 25.11.2012.

Contents: Preface. Introduction. 1. General notions about scientific research (SR): 1.1 Scientific method; 1.2 Basic research; 1.3 Information supply of scientific research. 2. Ontologies and upper ontologies: 2.1 Concepts of foundations of research activities; 2.2 Ontology components; 2.3 Ontology for the visualization of a lecture. 3. Ontologies of object domains: 3.1 Elements of the ontology of spaces and symmetries; 3.1.1 Concepts of electrodynamics and classical gauge theory. 4. Examples of research activity: 4.1 Scientific activity in arithmetics, informatics and discrete mathematics; 4.2 Algebra of logic and functions of the algebra of logic; 4.3 Function of the algebra of logic. 5. Some notions of the theory of finite and discrete sets. 6. Algebraic operations and algebraic structures. 7. Elements of the theory of graphs and nets. 8. Scientific activity on the example "Information and its investigation". 9. Scientific research in artificial intelligence. 10. Compilers and compilation. 11. Objective, concepts and history of computer security. 12. Methodological and categorical apparatus of scientific research. 13. Methodology and methods of scientific research: 13.1 Methods of theoretical level of research (13.1.1 Induction; 13.1.2 Deduction); 13.2 Methods of empirical level of research. 14. Scientific idea and significance of scientific research. 15. Forms of scientific knowledge organization and principles of SR: 15.1 Forms of scientific knowledge; 15.2
  • Stack Applications
Applications of Stacks. Based on the notes from David Fernandez-Baca and Steve Kautz. Bryn Mawr College, CS206 Intro to Data Structures.

App 1: Verifying matched parentheses. The task is to verify whether a string of parentheses is well-formed: "{[(){[]}]()}" is well-formed, while "{[]}[]]()}" is not. More precisely, a string γ of parentheses is well-formed if either γ is empty or γ has the form (α)β, [α]β, or {α}β, where α and β are themselves well-formed strings of parentheses. This kind of recursive definition lends itself naturally to a stack-based algorithm. The idea: to check a string like "{[(){[]}]()}", scan it character by character. When you encounter a lefty ('{', '[', or '(') push it onto the stack. When you encounter a righty, pop its counterpart from atop the stack and check that they match. If there is a mismatch or exception, or if the stack is not empty when you reach the end of the string, the parentheses are not properly matched. Detailed code is posted separately; a sketch in C appears below.

App 2: Arithmetic expressions. In infix notation, operators are written between the operands they act on, e.g., "2+2". Parentheses surrounding groups of operands and operators indicate the intended order in which operations are to be performed; in the absence of parentheses, precedence rules determine the order of operations. For example, because "-" has lower precedence than "*", the infix expression "3-4*5" is evaluated as "3-(4*5)", not as "(3-4)*5". If you want it to evaluate the second way, you need to parenthesize.
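A compact C rendering of that algorithm follows; the course's own posted code is separate (and in Java), so treat this as an independent sketch using a fixed-capacity stack:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    bool well_formed(const char *s) {
        char stack[256];               /* fixed capacity for the sketch */
        size_t top = 0;
        size_t n = strlen(s);

        for (size_t i = 0; i < n; ++i) {
            char c = s[i];
            if (c == '(' || c == '[' || c == '{') {
                if (top == sizeof stack) return false;   /* overflow */
                stack[top++] = c;                        /* push a lefty */
            } else if (c == ')' || c == ']' || c == '}') {
                if (top == 0) return false;              /* nothing to pop */
                char open = stack[--top];                /* pop counterpart */
                if ((c == ')' && open != '(') ||
                    (c == ']' && open != '[') ||
                    (c == '}' && open != '{'))
                    return false;                        /* mismatch */
            }
        }
        return top == 0;   /* leftover lefties mean not well-formed */
    }

    int main(void) {
        printf("%d\n", well_formed("{[(){[]}]()}"));   /* prints 1 */
        printf("%d\n", well_formed("{[]}[]]()}"));     /* prints 0 */
        return 0;
    }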
  • Compiler Optimizations
Compiler Optimizations. CS 498: Compiler Optimizations, Fall 2006, University of Illinois at Urbana-Champaign.

A note about the name "optimization": it is a misnomer, since there is no guarantee of optimality. We could call the operation "code improvement", but this is not quite true either, since compiler transformations are not guaranteed to improve the performance of the generated code.

Outline: assignment statement optimizations; loop body optimizations; procedure optimizations; register allocation; instruction scheduling; control flow optimizations; cache optimizations; vectorization and parallelization. Reference: Advanced Compiler Design and Implementation, Steven S. Muchnick, Morgan Kaufmann Publishers, 1997, Chapters 12-19.

Historical origins of compiler optimization: "It was our belief that if FORTRAN, during its first months, were to translate any reasonable 'scientific' source program into an object program only half as fast as its hand-coded counterpart, then acceptance of our system would be in serious danger. This belief caused us to regard the design of the translator as the real challenge, not the simple task of designing the language." ... "To this day I believe that our emphasis on object program efficiency rather than on language design was basically correct. I believe that had we failed to produce efficient programs, the widespread use of languages like FORTRAN would have been seriously delayed." (John Backus, "FORTRAN I, II, and III", Annals of the History of Computing, Vol. 1, No. 1, July 1979.)

Compilers are complex systems: "Like most of the early hardware and software systems, Fortran was late in delivery, and didn't really work when it was delivered. At first people thought it would never be done."
  • Polyhedral-Model Guided Loop-Nest Auto-Vectorization (Konrad Trifunović, Dorit Nuzman, Albert Cohen, Ayal Zaks, Ira Rosen)
Polyhedral-Model Guided Loop-Nest Auto-Vectorization. Konrad Trifunović (2), Dorit Nuzman (1), Albert Cohen (2), Ayal Zaks (1), and Ira Rosen (1). (1) IBM Haifa Research Lab, {dorit, zaks, irar}@il.ibm.com. (2) INRIA Saclay, {konrad.trifunovic, albert.cohen}@inria.fr. The 18th International Conference on Parallel Architectures and Compilation Techniques, Sep 2009, Raleigh, United States. HAL Id: hal-00645325, https://hal.inria.fr/hal-00645325, submitted on 27 Nov 2011. (HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.)

Abstract—Optimizing compilers apply numerous interdependent optimizations, leading to the notoriously difficult phase-ordering problem: that of deciding which transformations to apply and in which order. Fortunately, new infrastructures such as the polyhedral compilation framework host a variety of transformations, facilitating the efficient exploration and configuration of multiple transformation sequences. Modern architectures must exploit multiple forms of parallelism provided by platforms while using the memory hierarchy efficiently. Systematic solutions to harness the interplay of multi-level parallelism and locality are emerging, by advances in automatic parallelization and loop nest
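The dependence question at the heart of auto-vectorization can be shown in two C loops; this is an illustrative sketch, not code from the paper:

    /* Loop-carried dependence of distance 1: iteration i needs the value
       produced by iteration i-1, so this loop cannot be vectorized as-is. */
    void serial_scan(float *a, int n) {
        for (int i = 1; i < n; ++i)
            a[i] = a[i - 1] + 1.0f;
    }

    /* Independent iterations (restrict rules out aliasing): a candidate
       the vectorizer can profitably transform. */
    void vector_friendly(float *restrict a, const float *restrict b, int n) {
        for (int i = 0; i < n; ++i)
            a[i] = b[i] + 1.0f;
    }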
  • Compiler Construction
Compiler construction. PDF generated using the open source mwlib toolkit (see http://code.pediapress.com/ for more information). PDF generated at: Sat, 10 Dec 2011 02:23:02 UTC.

Contents:
Introduction: Compiler construction; Compiler; Interpreter; History of compiler writing.
Lexical analysis: Lexical analysis; Regular expression; Regular expression examples; Finite-state machine; Preprocessor.
Syntactic analysis: Parsing; Lookahead; Symbol table; Abstract syntax; Abstract syntax tree; Context-free grammar; Terminal and nonterminal symbols; Left recursion; Backus–Naur Form; Extended Backus–Naur Form; TBNF; Top-down parsing; Recursive descent parser; Tail recursive parser; Parsing expression grammar; LL parser; LR parser; Parsing table; Simple LR parser; Canonical LR parser; GLR parser; LALR parser; Recursive ascent parser; Parser combinator; Bottom-up parsing; Chomsky normal form; CYK algorithm; Simple precedence grammar; Simple precedence parser; Operator-precedence grammar; Operator-precedence parser; Shunting-yard algorithm; Chart parser; Earley parser; The lexer hack; Scannerless parsing.
Semantic analysis: Attribute grammar; L-attributed grammar; LR-attributed grammar; S-attributed grammar; ECLR-attributed grammar; Intermediate language; Control flow graph; Basic block; Call graph; Data-flow analysis; Use-define chain; Live variable analysis; Reaching definition; Three address
  • Study Topics Test 3
Topics to Study for Exam 3 (studytopics_test3.txt, printed by David Whalley, Apr 22, 20).

Chapter 6: intermediate code representations; static versus dynamic type checking; equivalence of type expressions; coercions, overloading, and polymorphism; translation of boolean expressions; backpatching and translation of flow-of-control constructs; translation of record declarations; translation of switch statements; translation of array references.

Chapter 5: syntax directed definitions; synthesized and inherited attributes; dependency graphs; L-attributed definitions; translation schemes; syntax-directed construction of syntax trees and DAGs; syntax-directed translation with YACC.

Chapter 7: activation records; typical actions during calls and returns; storage allocation strategies; static

Optimizations: register allocation; array padding; scalar replacement; loop interchange; prefetching; control flow optimizations; branch chaining; reversing branches; code positioning; loop inversion; useless jump elimination; unreachable code elimination; data flow optimizations; common subexpression elimination; partial redundancy elimination; dead assignment elimination; evaluation order determination; recurrence elimination; machine specific optimizations; instruction scheduling; filling delay slots; exploiting instruction-level parallelism; peephole optimization; constructing a control flow graph; optimizations before and after code generation; instruction selection optimization.

Chapter 10: call graphs; instruction pipelining; data dependences; true dependences