Chart Parsing: the CYK Algorithm


Informatics 2A: Lecture 17
Bonnie Webber
School of Informatics, University of Edinburgh
[email protected]
28 October 2008

Reading: J&M (2nd ed), ch. 13 (Sections 13.3–13.4); NLTK Tutorial, Chart Parsing and Probabilistic Parsing, pp. 1–8.

Outline:
1 Problems with Recursive Descent Parsing
    Left Recursion
    Complexity
2 The CYK Algorithm
    Parsing as Dynamic Programming
    The CYK Algorithm
    Visualizing the Chart
    Properties of the Algorithm

Motivation: Ambiguity

Deterministic parsing (Lecture 15) aimed to address a limited amount of local ambiguity: the problem of not being able to decide uniquely which grammar rule to use next in a left-to-right analysis of the input string, even if the string is not globally ambiguous. By re-structuring the grammar, the parser can make a unique decision, based on a limited amount of look-ahead.

We'll now look at two other ways of handling ambiguity:
    Chart parsing: handling ambiguity with the parser alone;
    Probabilistic grammars: handling ambiguity with both grammar and parser.

Left Recursion

Recall that recursive descent parsing may require restructuring a grammar to eliminate left-recursive rules. But what if those rules reflect important structural and distributional properties of the language?

    NP  → DET N
    NP  → NPR
    DET → NP 's

These rules generate English NPs with possessive modifiers such as:
    John's sister
    John's mother's sister
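A naive recursive-descent parser loops forever on these rules: to parse an NP it first tries NP → DET N, and to parse a DET it must first parse an NP, at the same input position. A minimal Python sketch of the failure (function names and layout are ours, not from the lecture):

```python
def parse_DET(words, i):
    # DET -> NP 's : recurse into NP at the SAME position i
    j = parse_NP(words, i)
    if j is not None and j < len(words) and words[j] == "'s":
        return j + 1
    return None

def parse_NP(words, i):
    # NP -> DET N is tried first, as a depth-first parser would
    j = parse_DET(words, i)
    if j is not None and j < len(words):
        return j + 1
    # NP -> NPR (a proper name)
    if i < len(words) and words[i] != "'s":
        return i + 1
    return None

try:
    parse_NP("John 's sister".split(), 0)
except RecursionError:
    print("no input consumed between recursive calls: the parser never terminates")
```

Since no token is consumed between parse_NP and parse_DET, the mutual recursion exceeds Python's stack limit, which is the left-recursion problem in miniature.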
    John's mother's uncle's sister
    John's mother's uncle's sister's niece

Left Recursion (continued)

[Figure: tree structures for the possessives. Each NP rewrites as DET N, each DET as NP 's, and the innermost NP as NPR, so "John 's mother 's sister" nests leftward: the NP for "John" sits inside the DET "John 's", which sits inside the NP "John 's mother", and so on.]

When left-recursive rules are necessary, we can't use recursive descent parsing.

Complexity

Recall other problems with recursive descent parsing:
1 Structural ambiguity in the grammar and lexical ambiguity in the words can lead the parser down a path that will eventually fail (i.e., it cannot parse the whole input).
2 The same sub-tree may be built several different times: when a path fails, the parser backtracks, undoes the structure, and starts again.

The complexity of this blind backtracking is exponential in the worst case, because of repeated re-analysis of the same sub-string. We need a type of parser that solves these problems but does not require restructuring the grammar.

Parsing as Dynamic Programming

With a CFG, a parser should be able to avoid re-analyzing a sub-string, because such an analysis is independent of the rest of the parse.
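The cost of blind re-analysis, and the dynamic-programming cure, can be seen in miniature by counting the binary analyses of an n-word span (these are the Catalan numbers; the illustration is ours, not the lecture's):

```python
from functools import lru_cache

@lru_cache(maxsize=None)             # memoize: each span size is solved once
def analyses(n):
    """Number of binary-branching trees over a span of n words."""
    if n <= 1:
        return 1
    # sum over split points: left part has k words, right part n - k
    return sum(analyses(k) * analyses(n - k) for k in range(1, n))

print(analyses(3))    # 2: (a (b c)) versus ((a b) c)
print(analyses(10))   # 4862, yet only 10 distinct sub-problems are solved
```

Without the cache, the same sub-spans are recomputed exponentially often, which is exactly the backtracking blow-up described above.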
[Figure: "The dog saw a man in the park", with NP brackets over "The dog", "a man", "the park", and "a man in the park": the analysis of each sub-string is independent of the rest of the parse.]

The search space explored by the parser can reflect this independence if we use a parser based on dynamic programming. Dynamic programming is the basis for all chart parsing algorithms.

Dynamic Programming

Given a problem, systematically fill a table of solutions to sub-problems: this is called memoization. Once solutions to all sub-problems have been accumulated, solve the overall problem by composing them.

For parsing, the sub-problems are analyses of sub-strings, and they are memoized in a chart (aka well-formed substring table, WFST). This contains:
    constituents (sub-trees) that have been found, indexed by the start and end of the sub-strings they cover;
    hypotheses about what constituents could be found, indexed by the start and end of the sub-strings that suggest them.

Depicting a WFST/Chart

A well-formed substring table (aka chart) can be depicted as either a matrix or a graph. Both contain the same information.

Depicting a WFST as a Matrix

When a WFST (aka chart) is depicted as a matrix:
    Rows and columns of the matrix correspond to the start and end positions of a span (i.e., starting right before the first word, ending right after the final one);
    A cell in the matrix corresponds to the sub-string that starts at the row index and ends at the column index. It can contain information about the type of constituent (or constituents) that span(s) the substring, pointers to its sub-constituents, and/or predictions about what constituents might follow the substring.

         1     2     3     4     5     6
    0    V
    1          Prep        PP
    2                Det   NP
    3                      N
    4
    5

(For the string "see with a telescope in hand"; e.g., the PP in cell (1,4) covers words 2–4, "with a telescope".)
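Such a matrix can be represented directly as a mapping from (start, end) index pairs to cell contents; a minimal sketch (the cell placements are our reading of the slide's example):

```python
from collections import defaultdict

chart = defaultdict(set)                      # (start, end) -> category labels
words = "see with a telescope".split()

# Lexical cells: the k-th word spans (k - 1, k)
lexical = {"see": "V", "with": "Prep", "a": "Det", "telescope": "N"}
for k, w in enumerate(words, start=1):
    chart[(k - 1, k)].add(lexical[w])

# Larger constituents are indexed by the same start/end scheme
chart[(2, 4)].add("NP")                       # "a telescope"
chart[(1, 4)].add("PP")                       # "with a telescope"

print(sorted(chart[(1, 4)]))                  # ['PP']
```

A defaultdict(set) lets a cell hold several constituents over the same span, which is how the chart records ambiguity.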
Depicting a WFST as a Graph

When a WFST (aka chart) is depicted as a graph:
    nodes/vertices represent positions in the text string, starting before the first word, ending after the final word;
    arcs/edges connect vertices at the start and the end of a span to represent a particular substring. Edges can be labelled with the same information as in a cell in the matrix representation.

[Figure: graph for "with a telescope": vertices 1 2 3 4; word edges Prep (1–2), Det (2–3), N (3–4); an NP edge spanning 2–4 and a PP edge spanning 1–4.]

Algorithms for Chart Parsing

Important members of the chart parsing family include:
    the CYK algorithm, which memoizes only constituents;
    three algorithms that memoize both constituents and predictions:
        a bottom-up chart parser
        a top-down chart parser
        the Earley algorithm

CYK Algorithm

CYK (Cocke, Younger, Kasami) is just a particular regime for recognizing and recording constituents in the chart (WFST). The simplest version of CYK is for a CFG whose rules have at most two symbols on their RHS.

Chart Parsing with the CYK Algorithm

Let Close(X) = {B | B ⇒* A using unary productions, and A ∈ X}.
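Close(X) is a unary closure, computable by iterating to a fixed point; a possible implementation (the rule set shown is a fragment of the example grammar used later in the lecture):

```python
def close(categories, unary_rules):
    """Close(X): all B with B =>* A via unary productions, for some A in X."""
    result = set(categories)
    changed = True
    while changed:
        changed = False
        for b, a in unary_rules:              # each pair (b, a) means B -> A
            if a in result and b not in result:
                result.add(b)
                changed = True
    return result

unary = [("NP", "Nom"), ("Nom", "N"), ("VP", "IV")]
print(sorted(close({"N"}, unary)))            # ['N', 'NP', 'Nom']
```

The fixed-point loop is needed because unary rules chain: N licenses Nom, which in turn licenses NP.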
Build CYK chart (t, [w1, …, wn]):
    for j ← 1 to n
        do t(j−1, j) ← Close({wj})
    for k ← 1 to n                        (k: length of span; no splits when k = 1)
        for j ← k to n                    (j: end position of span)
            for m ← 1 to k−1              (j − m: split point inside the span)
                do t(j−k, j) ← t(j−k, j) ∪ Close({A | A → B C
                        for some B ∈ t(j−k, j−m) and C ∈ t(j−m, j)})

We can enter constituent A in cell (i, j) if there is a rule A → B and B is found in cell (i, j), or if there is a rule A → B C and B is found in cell (i, k) and C is found in cell (k, j).

CYK is designed to guarantee that the parser only looks for rules that use a constituent from i to j after it has determined all the constituents that end at i. Otherwise something might be missed.

CYK Schematic Diagram

CYK proceeds systematically left-to-right across the input string, looking back to see what constituents can now be built with what has been found.

[Schematic: string positions 0 1 2 3 4 5, with arcs reaching back from the current position to earlier positions.]

This algorithm is complete and does recognition in time O(n³).

Visualizing the Chart

Grammatical rules:
    S    → NP VP
    NP   → Det Nom
    NP   → Nom
    Nom  → N SRel
    Nom  → N
    VP   → TV NP
    VP   → IV PP
    VP   → IV
    PP   → Prep NP
    SRel → Relpro VP

Lexical rules:
    Det    → a | the              (determiner)
    N      → fish | frogs | soup  (noun)
    Prep   → in | for             (preposition)
    TV     → saw | ate            (transitive verb)
    IV     → fish | swim          (intransitive verb)
    Relpro → that                 (relative pronoun)

Nom: nominal (follows the determiner in an NP with a determiner; occurs also in a bare NP).
SRel: subject relative clause, as in "the frogs that ate fish".
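The pseudocode, together with the example grammar, can be turned into a short runnable recognizer. The sketch below is ours, not distributed with the lecture; t maps each span (i, j) to the set of categories covering words i+1 .. j:

```python
from collections import defaultdict

LEXICON = {
    "a": {"Det"}, "the": {"Det"},
    "fish": {"N", "IV"}, "frogs": {"N"}, "soup": {"N"},
    "in": {"Prep"}, "for": {"Prep"},
    "saw": {"TV"}, "ate": {"TV"}, "swim": {"IV"},
    "that": {"Relpro"},
}
UNARY = [("NP", "Nom"), ("Nom", "N"), ("VP", "IV")]       # pairs (B, A): B -> A
BINARY = [("S", "NP", "VP"), ("NP", "Det", "Nom"), ("Nom", "N", "SRel"),
          ("VP", "TV", "NP"), ("VP", "IV", "PP"), ("PP", "Prep", "NP"),
          ("SRel", "Relpro", "VP")]

def close(cats):
    """Unary closure: add B whenever B -> A and A is already present."""
    result, changed = set(cats), True
    while changed:
        changed = False
        for b, a in UNARY:
            if a in result and b not in result:
                result.add(b)
                changed = True
    return result

def cyk(words):
    n = len(words)
    t = defaultdict(set)
    for j in range(1, n + 1):                  # lexical cells t(j-1, j)
        t[(j - 1, j)] = close(LEXICON[words[j - 1]])
    for k in range(2, n + 1):                  # k: span length
        for j in range(k, n + 1):              # j: span end position
            for m in range(1, k):              # split the span at j - m
                found = {a for (a, b, c) in BINARY
                         if b in t[(j - k, j - m)] and c in t[(j - m, j)]}
                t[(j - k, j)] |= close(found)
    return t

t = cyk("the frogs ate fish".split())
print("S" in t[(0, 4)])                        # True: recognized as a sentence
```

Each cell is filled only after all shorter spans are complete, matching the ordering guarantee on the slide; the three nested loops give the O(n³) recognition time, times a grammar-dependent constant.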