LL and LR Parsing Lecture 6


LL and LR Parsing, Lecture 6. February 5, 2018. Compiler Construction.

Context-free Grammars

A context-free grammar consists of:
- A set of non-terminals N (written in uppercase throughout these notes)
- A set of terminals T comprised of tokens (lowercase or punctuation throughout these notes)
- A start symbol S (a non-terminal)
- A set of productions (rewrite rules) of the form, assuming E ∈ N:

      E → ε   or   E → Y1 Y2 ... Yn   where each Yi ∈ N ∪ T

Context-free?

Production rules hint at expressiveness:
- Regular:           A → aB,   C → ε
- Context-free:      A → α
- Context-sensitive: αAβ → αγβ
- Type-0:            α → β
where α, β, γ ∈ (N ∪ T)*.
"What just happened? We must be missing some context..."

Parsing and Context-free Grammars
- Lexical analysis: regular expressions specify a regular language containing strings of characters (lexemes) that correspond to tokens.
- Parsing: context-free grammars specify a context-free language containing strings of tokens that correspond to grammatical rules (productions).

Generativeness
- Regular expressions and context-free grammars are generative: you can generate every string in the language using the regex or grammar!

Generating Strings
- Consider the regex ab*a. You can generate aa, aba, abba, abbba, ...
- Consider the context-free grammar E → (E)E | ε. You can generate ε, (), (()), (())(), ...
- Generating strings with a grammar can be thought of as creating a parse tree!

Language Membership
- We care about whether an input string of tokens is syntactically correct (i.e., obeys our language's grammar).
- So far, we have looked at the theoretical implications of grammars:

      L(G) = { a1...an | S →* a1...an }

- For an input string x, is x ∈ L(G)?
- Parsing, part 1: we need a yes/no answer!

Language Membership (example)

      S → a B | b C
      B → b b
      C → c c

What strings are in this language? (Hint: there's only two!)
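For a small finite language like this one, membership can be checked by brute force: enumerate every string the grammar generates. The sketch below is illustrative only; the grammar encoding and the `language` helper are my own naming, not from the lecture.

```python
from collections import deque

# The example grammar, encoded as: non-terminal -> list of right-hand
# sides, each a tuple of symbols (a symbol is a non-terminal iff it is
# a key of the dictionary).
GRAMMAR = {
    "S": [("a", "B"), ("b", "C")],
    "B": [("b", "b")],
    "C": [("c", "c")],
}

def language(grammar, start="S", max_len=6):
    """Enumerate all terminal strings derivable from `start` by a BFS
    over sentential forms. Safe here because the language is finite;
    `max_len` guards against blow-up on other grammars."""
    results = set()
    queue = deque([(start,)])
    seen = {(start,)}
    while queue:
        form = queue.popleft()
        # Find the leftmost non-terminal, if any remains.
        idx = next((i for i, s in enumerate(form) if s in grammar), None)
        if idx is None:
            results.add("".join(form))  # all terminals: a generated string
            continue
        for rhs in grammar[form[idx]]:
            new = form[:idx] + rhs + form[idx + 1:]
            if len(new) <= max_len and new not in seen:
                seen.add(new)
                queue.append(new)
    return results

print(sorted(language(GRAMMAR)))  # → ['abb', 'bcc']
```

So the two strings hinted at are abb and bcc.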
If my input string is "dabc", we ask: can the grammar generate this string? (No.)
- N.B. From a theoretical perspective it doesn't matter how; determining how is the job of the parsing algorithm!

Parsing Algorithms
- LL (top-down): reads input from left to right and uses left-most derivations to construct a parse tree.
- LR (bottom-up): reads input from left to right and uses right-most derivations (in reverse) to construct a parse tree.
- Both algorithms are driven by the input grammar and the input to be parsed.

Parsing Algorithm Intuition
- You start with a sequence of tokens, t1 t2 t3 t4 t5, and also a grammar!
- Two general approaches to constructing the parse tree:
  - Top-down parsing: you predict the grammatical rule used to produce the tokens seen so far.
  - Bottom-up parsing: you consider tokens one at a time until you match a grammatical rule.

Top-Down Parsing

Grammar:

      S → a B c
      B → C x B | ε
      C → d

Input string: "adxdxc"

The parser grows the parse tree downward from the root S, repeatedly expanding the leftmost non-terminal (the original slides animate this one step at a time):

      S
      → a B c
      → a C x B c
      → a d x B c
      → a d x C x B c
      → a d x d x B c
      → a d x d x ε c   =   "adxdxc"
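The expansion steps above can be written as one mutually recursive function per non-terminal. This is a minimal sketch of a predictive recognizer for the grammar S → aBc, B → CxB | ε, C → d; the function names and token handling are illustrative assumptions, not code from the lecture.

```python
# Predictive (one-token-lookahead) recognizer for:
#   S -> a B c ;  B -> C x B | epsilon ;  C -> d
def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(t):
        nonlocal pos
        if peek() != t:
            raise SyntaxError(f"expected {t!r}, got {peek()!r}")
        pos += 1

    def S():
        eat("a"); B(); eat("c")

    def B():
        # Predict B -> C x B when the lookahead is 'd' (FIRST(CxB) = {d});
        # otherwise take B -> epsilon (FOLLOW(B) = {c}).
        if peek() == "d":
            C(); eat("x"); B()

    def C():
        eat("d")

    S()
    if pos != len(tokens):
        raise SyntaxError("trailing input")
    return True

print(parse(list("adxdxc")))  # → True
```

Note that the ε-production is taken silently: B simply returns without consuming anything when the lookahead is not 'd'.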
Bottom-Up Parsing

Grammar (as before):

      S → a B c
      B → C x B | ε
      C → d

Input string: "adxdxc"

The parser builds the tree from the leaves up, tracking the tokens and non-terminals matched so far; at each step it either reads the next token or replaces the right-hand side of a rule with its left-hand side (again animated one step at a time in the original slides):

      a          read a
      ad         read d
      aC         reduce C → d
      aCx        read x
      aCxd       read d
      aCxC       reduce C → d
      aCxCx      read x
      aCxCxε     (ε before the final c)
      aCxCxB     reduce B → ε
      aCxB       reduce B → C x B
      aB         reduce B → C x B
      aBc        read c
      S          reduce S → a B c: accept!
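The shift/reduce steps of this trace can be mimicked in code. The sketch below hardcodes the reduce decisions for this one grammar so that it reproduces the trace; a real LR parser would instead derive those decisions from a table-driven automaton plus lookahead, so treat this purely as an illustration.

```python
# Hand-driven shift/reduce recognizer for:
#   S -> a B c ;  B -> C x B | epsilon ;  C -> d
# The decision logic below is specific to this grammar (an assumption
# made for illustration), not a general LR algorithm.
def shift_reduce(tokens):
    stack = []
    i = 0
    while True:
        if stack[-1:] == ["d"]:
            stack[-1:] = ["C"]                 # reduce C -> d
        elif stack[-3:] == ["C", "x", "B"]:
            stack[-3:] = ["B"]                 # reduce B -> C x B
        elif i < len(tokens) and tokens[i] == "c" and stack[-1:] != ["B"]:
            stack.append("B")                  # reduce B -> epsilon before c
        elif stack[-3:] == ["a", "B", "c"]:
            stack[-3:] = ["S"]                 # reduce S -> a B c
        elif i < len(tokens):
            stack.append(tokens[i])            # shift the next token
            i += 1
        else:
            break                              # nothing to shift or reduce
    return stack == ["S"]

print(shift_reduce(list("adxdxc")))  # → True
```

Running it on "adxdxc" passes through exactly the stack contents shown in the trace: a, ad, aC, aCx, aCxd, aCxC, aCxCx, aCxCxB, aCxB, aB, aBc, S.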
LL(k) Parsing

An LL parser reads tokens from left to right and constructs a top-down leftmost derivation. LL(k) parsing predicts which production rule to use from k tokens of lookahead; LL(1) parsing is the special case using one token of lookahead. LL(1) parsing is fast and easy, but does not work if the grammar is ambiguous, left-recursive, or non-left-factored.

General LL(1) Algorithm
- Process one token at a time.
- Maintain a 'current' non-terminal symbol, starting with S.
- While input is not empty:
  - Given the next token t and 'current' non-terminal N, choose a rule R of the form N → α.
  - For each element X of rule R, from left to right:
    - If X is a non-terminal, 'expand' X by recursing: set 'current' to X and consider the same token t.
    - If X is a terminal, check whether it matches t; if it matches, consume t from the input and loop.
- Note the need for particular types of grammars! What if we have a rule S → Sα?

Recursive Descent Parsing
- Recursive descent parsing can parse LL(k) grammars with backtracking.
- We can use RDP to parse LL(1) grammars by recursing through the rules of the grammar based upon the next available token.
- Intuition: construct mutually recursive functions that consume tokens according to the grammar rules!
- TL;DR: "Try all productions exhaustively, backtrack."

Recursive Descent Parsing (example)

Grammar:

      E → T + E | T
      T → (E) | int | int * T

Input: int * int

1. Try E0 → T1 + E2
2. Try T1 → (E3)
   - Nope! Token 'int' does not match '(' in T1 → (E3)
3. Try T1 → int. Match!
   - But the next token '*' does not match '+' from E0
4. Try T1 → int * T2
   - Matches 'int * int', but '+' from E0 remains unmatched
5. Exhausted the choices for T1, so we backtrack to E0

Recursive Descent Parsing (2)

6.
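The "try all productions exhaustively, backtrack" strategy sketched in the steps above can be expressed by having each grammar function return every input position it can reach, so that all alternatives are explored. This is a hedged sketch of one way to do it; names like `parse_E` and `accepts` are illustrative, not from the lecture.

```python
# Backtracking recursive-descent recognizer for:
#   E -> T + E | T
#   T -> ( E ) | int | int * T
# Each function returns the SET of end positions reachable from `pos`,
# which implicitly performs the backtracking over all productions.

def parse_E(tokens, pos):
    ends = set()
    for mid in parse_T(tokens, pos):               # E -> T + E
        if mid < len(tokens) and tokens[mid] == "+":
            ends |= parse_E(tokens, mid + 1)
    ends |= parse_T(tokens, pos)                   # E -> T
    return ends

def parse_T(tokens, pos):
    ends = set()
    if pos < len(tokens) and tokens[pos] == "(":   # T -> ( E )
        for mid in parse_E(tokens, pos + 1):
            if mid < len(tokens) and tokens[mid] == ")":
                ends.add(mid + 1)
    if pos < len(tokens) and tokens[pos] == "int":
        ends.add(pos + 1)                          # T -> int
        if pos + 1 < len(tokens) and tokens[pos + 1] == "*":
            ends |= parse_T(tokens, pos + 2)       # T -> int * T
    return ends

def accepts(tokens):
    # The input is in L(E) iff some parse of E consumes every token.
    return len(tokens) in parse_E(tokens, 0)

print(accepts(["int", "*", "int"]))  # → True
```

On the input int * int, the E → T + E attempt fails (no '+' ever follows a T), and the recognizer succeeds only via E → T with T → int * T, matching the backtracking walkthrough above.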