COMP 181, Lecture 6: Top-down Parsing


Prelude: what is the Tufts mascot? "Jumbo" the elephant

- Why? P. T. Barnum was an original trustee of Tufts
  - 1884: donated $50,000 for a natural history museum on campus
    (the Barnum Museum, later Barnum Hall)
  - "Jumbo": a famous circus elephant
  - 1885: Jumbo died, was stuffed, and was donated to Tufts
  - 1975: a fire destroyed Barnum Hall, and Jumbo with it

Lecture 6: Top-down Parsing
September 21, 2006
Tufts University, Computer Science

Last time

- Finished scanning
  - Produces a stream of tokens
  - Removes things we don't care about, like white space and comments
- Context-free grammars
  - Formal description of language syntax
  - Deriving strings using a CFG
  - Depicting a derivation as a parse tree

Grammar issues

- Often there is more than one way to derive a string
- Why is this a problem? Parsing asks: is the string a member of L(G)?
  - We want more than a yes or no answer
- Key: represent the derivation as a parse tree
  - We want the structure of the parse tree to capture the meaning of
    the sentence

Parse tree: x - 2 * y
  # Production rule
  1 expr → expr op expr
  2      | number
  3      | identifier
  4 op   → +
  5      | -
  6      | *
  7      | /

Right-most derivation:

  Rule  Sentential form
  -     expr
  1     expr op expr
  3     expr op <id,y>
  6     expr * <id,y>
  1     expr op expr * <id,y>
  2     expr op <num,2> * <id,y>
  5     expr - <num,2> * <id,y>
  3     <id,x> - <num,2> * <id,y>

- The resulting parse tree has * at the top: it multiplies (x - 2) by y

Abstract syntax tree

- The parse tree contains extra junk
- Eliminate intermediate nodes
- Move operators up to parent nodes
- Result: an abstract syntax tree

Left vs. right derivations

Two derivations of x - 2 * y:

  Left-most derivation               Right-most derivation
  Rule  Sentential form              Rule  Sentential form
  -     expr                         -     expr
  1     expr op expr                 1     expr op expr
  3     <id,x> op expr               3     expr op <id,y>
  5     <id,x> - expr                6     expr * <id,y>
  1     <id,x> - expr op expr        1     expr op expr * <id,y>
  2     <id,x> - <num,2> op expr     2     expr op <num,2> * <id,y>
  6     <id,x> - <num,2> * expr      5     expr - <num,2> * <id,y>
  3     <id,x> - <num,2> * <id,y>    3     <id,x> - <num,2> * <id,y>

Derivations

- One captures the meaning, the other doesn't: this left-most
  derivation builds the tree for x - (2 * y), while this right-most
  derivation builds the tree for (x - 2) * y

With precedence

- Last time: ways to force the right tree shape
- Add productions to represent precedence:

  # Production rule
  1 expr   → expr + term
  2        | expr - term
  3        | term
  4 term   → term * factor
  5        | term / factor
  6        | factor
  7 factor → number
  8        | identifier

- Now the only parse tree for x - 2 * y keeps * below -, matching the
  intended meaning x - (2 * y)
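The "Derivations" point above, that only one of the two trees captures the intended meaning, can be made concrete. This is a small illustrative sketch (the tuple encoding of ASTs and the variable bindings are assumptions, not from the slides): evaluating both trees with sample values shows the two parses really do mean different things.

```python
# The two ASTs for "x - 2 * y", encoded as ('op', left, right) tuples.
# The left-most derivation above groups it as x - (2 * y); the
# right-most groups it as (x - 2) * y.

def evaluate(node, env):
    """Evaluate a tuple-encoded AST; leaves are identifiers or numbers."""
    if isinstance(node, tuple):
        op, lhs, rhs = node
        a, b = evaluate(lhs, env), evaluate(rhs, env)
        return {'+': a + b, '-': a - b, '*': a * b, '/': a / b}[op]
    return env.get(node, node)  # identifiers come from env, numbers pass through

subtract_tree = ('-', 'x', ('*', 2, 'y'))   # x - (2 * y), from the left-most derivation
multiply_tree = ('*', ('-', 'x', 2), 'y')   # (x - 2) * y, from the right-most derivation

env = {'x': 10, 'y': 3}
print(evaluate(subtract_tree, env))  # 10 - (2 * 3) = 4
print(evaluate(multiply_tree, env))  # (10 - 2) * 3 = 24
```

With x = 10 and y = 3 the trees evaluate to 4 and 24, which is exactly why the parse tree's shape must capture the sentence's meaning.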
Parsing

- What is parsing?
  - Discovering the derivation of a string, if one exists
  - Harder than generating strings, not surprisingly
- Two major approaches
  - Top-down parsing
  - Bottom-up parsing
- Neither works on all context-free grammars
  - Properties of the grammar determine parse-ability
  - Our goal: make parsing efficient
  - We may be able to transform a grammar

Two approaches

- Top-down parsers (LL(1), recursive descent)
  - Start at the root of the parse tree and grow toward the leaves
  - Pick a production and try to match the input
  - A bad "pick" may force the parser to backtrack
- Bottom-up parsers (LR(1), operator precedence)
  - Start at the leaves and grow toward the root
  - As input is consumed, encode the possible parse trees in an
    internal state (similar to our NFA → DFA conversion)
  - Bottom-up parsers handle a large class of grammars

Grammars and parsers

- LL(1) parsers
  - Left-to-right input
  - Leftmost derivation
  - 1 symbol of look-ahead
  - Grammars these can handle are called LL(1) grammars
- LR(1) parsers
  - Left-to-right input
  - Rightmost derivation
  - 1 symbol of look-ahead
  - Grammars these can handle are called LR(1) grammars
- Also: LL(k), LR(k), SLR, LALR, ...

Top-down parsing

- Start with the root of the parse tree: a node labeled with the start
  symbol
- Algorithm: repeat until the fringe of the parse tree matches the
  input string
  - At a node A, select a production for A and add a child node for
    each symbol on its right-hand side
  - If a terminal symbol is added that doesn't match, backtrack
  - Find the next node to be expanded (a non-terminal)
- Done when:
  - The leaves of the parse tree match the input string (success), or
  - All productions are exhausted in backtracking (failure)
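The algorithm above can be sketched directly over a grammar-as-data encoding. This is a minimal illustrative sketch, not the lecture's implementation: it expands the leftmost non-terminal, matches terminals against the input, and backtracks (via the failed recursive calls) whenever a terminal doesn't match. A right-recursive variant of the expression grammar is used so the search terminates; the token names 'id' and 'num' are assumptions.

```python
# Top-down parsing with backtracking: expand the leftmost non-terminal,
# match terminals, and give up on a choice (backtrack) when a terminal
# fails to match the next input token.

GRAMMAR = {
    'expr':   [['term', 'expr2']],
    'expr2':  [['+', 'term', 'expr2'], ['-', 'term', 'expr2'], []],
    'term':   [['factor', 'term2']],
    'term2':  [['*', 'factor', 'term2'], ['/', 'factor', 'term2'], []],
    'factor': [['id'], ['num']],
}

def parse(symbols, tokens):
    """Return True if the sentential form `symbols` derives `tokens`."""
    if not symbols:
        return not tokens                      # success only if all input consumed
    head, rest = symbols[0], symbols[1:]
    if head in GRAMMAR:                        # non-terminal: try each production
        return any(parse(prod + rest, tokens) for prod in GRAMMAR[head])
    # terminal: must match the next token, otherwise this choice fails
    return bool(tokens) and tokens[0] == head and parse(rest, tokens[1:])

print(parse(['expr'], ['id', '-', 'num', '*', 'id']))  # x - 2 * y: True
print(parse(['expr'], ['id', '-', '*', 'id']))         # malformed: False
```

Note the brute-force flavor: a wrong production choice is only discovered when a terminal fails to match, exactly the behavior traced in the next slides.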
Example

- Expression grammar (with precedence):

  # Production rule
  1 expr   → expr + term
  2        | expr - term
  3        | term
  4 term   → term * factor
  5        | term / factor
  6        | factor
  7 factor → number
  8        | identifier

- Input string: x - 2 * y
- In the tables below, ↑ marks the current position in the input stream

  Rule  Sentential form   Input string
  -     expr              ↑ x - 2 * y
  1     expr + term       ↑ x - 2 * y
  3     term + term       ↑ x - 2 * y
  6     factor + term     ↑ x - 2 * y
  8     <id> + term       x ↑ - 2 * y
  -     <id,x> + term     x ↑ - 2 * y

- Problem: we can't match the next terminal (the input has -, not +)
- We guessed wrong at step 2

Backtracking

- Undo all of these productions, rolling back to expr
- Choose a different production for expr
- Continue

Retrying

  Rule  Sentential form   Input string
  -     expr              ↑ x - 2 * y
  2     expr - term       ↑ x - 2 * y
  3     term - term       ↑ x - 2 * y
  6     factor - term     ↑ x - 2 * y
  8     <id> - term       x ↑ - 2 * y
  -     <id,x> - term     x - ↑ 2 * y
  3     <id,x> - factor   x - ↑ 2 * y
  7     <id,x> - <num>    x - 2 ↑ * y

- Problem: there is more input to read
- Another cause of backtracking

Successful parse

  Rule  Sentential form              Input string
  -     expr                         ↑ x - 2 * y
  2     expr - term                  ↑ x - 2 * y
  3     term - term                  ↑ x - 2 * y
  6     factor - term                ↑ x - 2 * y
  8     <id> - term                  x ↑ - 2 * y
  -     <id,x> - term                x - ↑ 2 * y
  4     <id,x> - term * factor       x - ↑ 2 * y
  6     <id,x> - factor * factor     x - ↑ 2 * y
  7     <id,x> - <num> * factor      x - 2 ↑ * y
  -     <id,x> - <num,2> * factor    x - 2 * ↑ y
  8     <id,x> - <num,2> * <id>      x - 2 * y ↑

- All terminals match: we're done

Other possible parses

  Rule  Sentential form                    Input string
  -     expr                               ↑ x - 2 * y
  1     expr + term                        ↑ x - 2 * y
  1     expr + term + term                 ↑ x - 2 * y
  1     expr + term + term + term          ↑ x - 2 * y
  1     expr + term + term + term + term   ↑ x - 2 * y

- Problem: termination
- A wrong choice leads to infinite expansion (more importantly: without
  consuming any input!)
- It may not be as obvious as in this example
- Our grammar is left recursive
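The runaway derivation in the table above can be mechanized in a few lines. This is a sketch of the failure mode, capped at five steps; a real backtracking parser exploring this branch has no such cap.

```python
# Repeatedly applying the left-recursive production expr -> expr + term
# to the leftmost expr grows the sentential form forever: no terminal is
# ever placed at the front, so no input token is ever consumed.

form = ['expr']
for step in range(5):
    i = form.index('expr')                  # the leftmost symbol stays expr
    form[i:i + 1] = ['expr', '+', 'term']   # rewrite it in place
    print(' '.join(form))
# expr + term
# expr + term + term
# ... and so on, with the input pointer stuck at position 0
```

This is exactly why the expansion never terminates: progress in a top-down parser comes only from matching terminals, and this branch produces none.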
Left recursion

- Formally, a grammar is left recursive if there exists a non-terminal
  A such that A →* A α, for some string of symbols α
  - What does →* mean? "Derives in zero or more steps"
- The recursion may not be as obvious as a single production; it can be
  indirect:

    A → B x
    B → A y

  Here A → B x → A y x, so A is still left recursive
- Bad news: top-down parsers cannot handle left recursion
- Good news: we can systematically eliminate left recursion

Notation

- Non-terminals: capital letters (A, B, C)
- Terminals: lowercase, underlined (x, y, z)
- Strings mixing terminals and non-terminals: Greek letters (α, β, γ)
- Example: the production A → B + x can be written A → B α, with α = + x

Eliminating left recursion

- Consider this grammar, whose language is β followed by zero or more α:

  # Production rule
  1 foo → foo α
  2     | β

- Rewrite it using a new non-terminal:

  # Production rule
  1 foo → β bar
  2 bar → α bar
  3     | ε

  Production 1 gives you the one β; productions 2 and 3 give you zero
  or more α

Back to expressions

- Our grammar has two cases of left recursion:

  # Production rule
  1 expr → expr + term
  2      | expr - term
  3      | term
  4 term → term * factor
  5      | term / factor
  6      | factor

- Transform each as follows:

  # Production rule
  1 expr  → term expr2
  2 expr2 → + term expr2
  3       | - term expr2
  4       | ε
  5 term  → factor term2
  6 term2 → * factor term2
  7       | / factor term2
  8       | ε

Eliminating left recursion

- The resulting grammar:

  # Production rule
  1  expr   → term expr2
  2  expr2  → + term expr2
  3         | - term expr2
  4         | ε
  5  term   → factor term2
  6  term2  → * factor term2
  7         | / factor term2
  8         | ε
  9  factor → number
  10        | identifier

- All right recursive
- Retains the original language and associativity
- Not as intuitive to read
- A top-down parser on this grammar will always terminate, though it
  may still backtrack
- There's a lovely algorithm to do this automatically, which we will
  skip
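The foo/bar rewrite above can be mechanized for the immediate case, A → A α | β. This is an illustrative sketch only (the dict-of-productions encoding and the "2"-suffixed fresh names are assumptions); the general algorithm the slide skips also handles indirect left recursion.

```python
# Immediate left-recursion elimination, following the slide's rewrite:
#   A -> A a1 | ... | b1 | ...   becomes
#   A -> b1 A2 | ...             A2 -> a1 A2 | ... | ε
# Productions are lists of symbols; [] stands in for ε.

def eliminate_immediate_left_recursion(grammar):
    out = {}
    for nt, prods in grammar.items():
        recursive = [p[1:] for p in prods if p and p[0] == nt]   # the α parts
        base = [p for p in prods if not p or p[0] != nt]         # the β parts
        if not recursive:
            out[nt] = prods                                      # nothing to do
            continue
        new_nt = nt + '2'                                        # fresh non-terminal
        out[nt] = [b + [new_nt] for b in base]
        out[new_nt] = [a + [new_nt] for a in recursive] + [[]]   # ... | ε
    return out

g = {'expr': [['expr', '+', 'term'], ['expr', '-', 'term'], ['term']]}
print(eliminate_immediate_left_recursion(g))
# {'expr': [['term', 'expr2']],
#  'expr2': [['+', 'term', 'expr2'], ['-', 'term', 'expr2'], []]}
```

Applied to the expr productions, this reproduces the expr/expr2 rules shown above; running it over term as well yields the term/term2 rules.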
Top-down parsers

- Problem: left recursion
  - Solution: a technique to remove it
- What about backtracking?
  - The current algorithm is brute force
  - Problem: how to choose the

Right-recursive grammar

  # Production rule
  1 expr  → term expr2
  2 expr2 → + term expr2
  3       | - term expr2
  4       | ε
  5 term  → factor term2
  6 term2 → * factor term2
  7       | / factor term2

- Productions 1 and 5 involve no choice at all
- All other productions are uniquely identified by a terminal symbol at
  the start of their right-hand side
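Because every remaining choice in the right-recursive grammar is decided by the next input token, a recursive-descent parser for it needs no backtracking at all. A minimal sketch, assuming tokens are plain strings with 'id' and 'num' standing in for identifiers and numbers; the tail-recursive expr2/term2 productions are folded into loops, a common hand-coding idiom rather than something from the slides.

```python
# Predictive recursive descent for the right-recursive expression
# grammar: one function per non-terminal, one token of look-ahead,
# no backtracking. Each function returns the position after what it parsed.

def parse_expr(tokens, pos=0):
    pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] in ('+', '-'):   # expr2
        pos = parse_term(tokens, pos + 1)
    return pos                                               # ε: loop exits

def parse_term(tokens, pos):
    pos = parse_factor(tokens, pos)
    while pos < len(tokens) and tokens[pos] in ('*', '/'):   # term2
        pos = parse_factor(tokens, pos + 1)
    return pos

def parse_factor(tokens, pos):
    if pos < len(tokens) and tokens[pos] in ('id', 'num'):
        return pos + 1
    raise SyntaxError(f'expected id or num at position {pos}')

tokens = ['id', '-', 'num', '*', 'id']          # x - 2 * y
print(parse_expr(tokens) == len(tokens))        # True: whole input consumed
```

Each decision point peeks at exactly one token, which is the LL(1) property the annotations above describe: the unique leading terminal tells the parser which production to take.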