Efficient Techniques for Parsing with Tree Automata

Jonas Groschwitz and Alexander Koller
Department of Linguistics, University of Potsdam, Germany
groschwi|[email protected]

Mark Johnson
Department of Computing, Macquarie University, Australia
[email protected]

Abstract

Parsing for a wide variety of grammar formalisms can be performed by intersecting finite tree automata. However, naive implementations of parsing by intersection are very inefficient. We present techniques that speed up tree-automata-based parsing, to the point that it becomes practically feasible on realistic data when applied to context-free, TAG, and graph parsing. For graph parsing, we obtain the best runtimes in the literature.

1 Introduction

Grammar formalisms that go beyond context-free grammars have recently enjoyed renewed attention throughout computational linguistics. Classical grammar formalisms such as TAG (Joshi and Schabes, 1997) and CCG (Steedman, 2001) have been equipped with expressive statistical models, and high-performance parsers have become available (Clark and Curran, 2007; Lewis and Steedman, 2014; Kallmeyer and Maier, 2013). Synchronous grammar formalisms such as synchronous context-free grammars (Chiang, 2007) and tree-to-string transducers (Galley et al., 2004; Graehl et al., 2008; Seemann et al., 2015) are being used as models that incorporate syntactic information in statistical machine translation. Synchronous string-to-tree (Wong and Mooney, 2006) and string-to-graph grammars (Chiang et al., 2013) have been applied to semantic parsing; and so forth.

Each of these grammar formalisms requires its users to develop new algorithms for parsing and training. This comes with challenges that are both practical and theoretical. From a theoretical perspective, many of these algorithms are basically the same, in that they rest upon a CKY-style parsing algorithm which recursively explores substructures of the input object and assigns them nonterminal symbols, but their exact relationship is rarely made explicit. On the practical side, this parsing algorithm and its extensions (e.g. to EM training) have to be implemented and optimized from scratch for each new grammar formalism. Thus, development time is spent on reinventing wheels that are slightly different from previous ones, and the resulting implementations still tend to underperform.

Koller and Kuhlmann (2011) introduced Interpreted Regular Tree Grammars (IRTGs) in order to address this situation. An IRTG represents a language by describing a regular language of derivation trees, each of which is mapped to a term over some algebra and evaluated there. Grammars from a wide range of monolingual and synchronous formalisms can be mapped into IRTGs by using different algebras: Context-free and tree-adjoining grammars use string algebras of different kinds, graph grammars can be captured by using graph algebras, and so on. In addition, IRTGs come with a universal parsing algorithm based on closure results for tree automata. Implementing and optimizing this parsing algorithm once, one could apply it to all grammar formalisms that can be mapped to IRTG. However, while Koller and Kuhlmann show that asymptotically optimal parsing is possible in theory, it is non-trivial to implement their algorithm optimally.

In this paper, we introduce practical algorithms for the two key operations underlying IRTG parsing: computing the intersection of two tree automata and applying an inverse tree homomorphism to a tree automaton. After defining IRTGs (Section 2), we will first illustrate that a naive bottom-up implementation of the intersection algorithm yields asymptotic parsing complexities that are too high (Section 3). We will then show how the parsing complexity can be improved by combining algebra-specific index data structures with a generic parsing algorithm (Section 4), and by replacing bottom-up with top-down queries (Section 5). In contrast to the naive algorithm, both of these methods achieve the expected asymptotic complexities, e.g. O(n³) for context-free parsing, O(n⁶) for TAG parsing, etc. Furthermore, an evaluation with realistic grammars shows that our algorithms improve practical parsing times with IRTG grammars encoding context-free grammars, tree-adjoining grammars, and graph grammars by orders of magnitude (Section 6). Thus our algorithms make IRTG parsing practically feasible for the first time; for graph parsing, we obtain the fastest reported runtimes.

  automaton rule         homomorphism
  S  → r1(NP, VP)        ∗(x1, x2)
  NP → r2                John
  VP → r3                walks
  VP → r4(VP, NP)        ∗(x1, ∗(on, x2))
  NP → r5                Mars

Figure 1: An example IRTG.

2 Interpreted Regular Tree Grammars

We will first define IRTGs and explain how the universal parsing algorithm for IRTGs works.

2.1 Formal foundations

First, we introduce some fundamental theoretical concepts and notation.

A signature Σ is a finite set of symbols r, f, ..., each of which has an arity ar(r) ≥ 0. A tree t over the signature Σ is a term of the form r(t1, ..., tn), where the ti are trees and r ∈ Σ has arity n. We identify the nodes of t by their Gorn addresses, i.e. paths π ∈ N∗ from the root to the node, and write t(π) for the label of π. We write TΣ for the set of all trees over Σ, and TΣ(Xk) for the trees in which each node either has a label from Σ, or is a leaf labeled with one of the variables {x1, ..., xk}.

A (linear, nondeleting) tree homomorphism h from a signature Σ to a signature ∆ is a mapping h : TΣ → T∆. It is defined by specifying, for each symbol r ∈ Σ of arity k, a term h(r) ∈ T∆(Xk) in which each variable occurs exactly once. This symbol-wise mapping is lifted to entire trees by letting h(r(t1, ..., tk)) = h(r)[h(t1), ..., h(tk)], i.e. by replacing the variable xi in h(r) by the recursively computed value h(ti).

Let ∆ be a signature. A ∆-algebra 𝒜 consists of a nonempty set A, called the domain, and for each symbol f ∈ ∆ with arity k, a function f^𝒜 : A^k → A, the operation associated with f. We can evaluate any term t ∈ T∆ to a value ⟦t⟧_𝒜 ∈ A, by evaluating the operation symbols bottom-up. In this paper, we will be particularly interested in the string algebra E∗ over the finite alphabet E. Its domain is the set of all strings over E. For each symbol a ∈ E, it has a nullary operation symbol a with a^{E∗} = a. It also has a single binary operation symbol ∗, such that ∗^{E∗}(w1, w2) is the concatenation of the strings w1 and w2. Thus the term ∗(John, ∗(walks, ∗(on, Mars))) in Fig. 2b evaluates to the string "John walks on Mars".
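To make the string algebra concrete, here is a minimal sketch of a bottom-up term evaluator for E∗. The nested-tuple term representation and the function name are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the string algebra E*: a term is a pair
# (symbol, child_terms); nullary symbols evaluate to themselves,
# and the binary symbol "*" concatenates its two arguments.

def evaluate(term):
    """Evaluate a term of the string algebra bottom-up."""
    symbol, children = term
    values = [evaluate(c) for c in children]
    if symbol == "*":            # binary concatenation
        left, right = values
        return left + " " + right
    assert not values            # every other symbol is nullary
    return symbol                # a^{E*} = a

# The term *(John, *(walks, *(on, Mars))) from Fig. 2b:
term = ("*", [("John", []),
              ("*", [("walks", []),
                     ("*", [("on", []), ("Mars", [])])])])

print(evaluate(term))  # -> "John walks on Mars"
```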
A finite tree automaton M over the signature Σ is a structure M = (Σ, Q, R, XF), where Q is a finite set of states and XF ∈ Q is a final state. R is a finite set of transition rules of the form X → r(X1, ..., Xk), where the terminal symbol r ∈ Σ is of arity k and X, X1, ..., Xk ∈ Q. A tree automaton can run non-deterministically on a tree t ∈ TΣ by assigning states to the nodes of t bottom-up. If we have t = r(t1, ..., tn), a rule X → r(X1, ..., Xn) in R, and M can assign the state Xi to each ti, written Xi →∗ ti, then we also have X →∗ t. We say that M accepts t if XF →∗ t, and define the language L(M) ⊆ TΣ of M as the (possibly infinite) set of all trees that M accepts. An example of a tree automaton (with states S, NP, etc.) is shown in the "automaton rule" column of Fig. 1. It accepts, among others, the tree τ1 in Fig. 2a.

Tree automata can be defined top-down or bottom-up, and are equivalent to regular tree grammars. The languages that can be accepted by finite tree automata are called the regular tree languages. See e.g. Comon et al. (2008) for details.
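For illustration, the sketch below runs the Fig. 1 automaton non-deterministically bottom-up: it computes, for each subtree, the set of states the automaton can assign to it, and checks whether the final state S is among the states of the derivation tree τ1 = r1(r2, r4(r3, r5)) from Fig. 2a. The rule encoding and function name are illustrative assumptions, not the paper's implementation.

```python
# Sketch of a non-deterministic bottom-up run of the Fig. 1 automaton.
# Rules map (terminal symbol, tuple of child states) to possible parent states.
from itertools import product

RULES = {
    ("r1", ("NP", "VP")): {"S"},
    ("r2", ()):           {"NP"},
    ("r3", ()):           {"VP"},
    ("r4", ("VP", "NP")): {"VP"},
    ("r5", ()):           {"NP"},
}

def states(tree):
    """Set of states the automaton can assign to `tree` = (symbol, children)."""
    symbol, children = tree
    result = set()
    # Try every combination of states for the children (non-determinism).
    for child_states in product(*(states(c) for c in children)):
        result |= RULES.get((symbol, child_states), set())
    return result

# Derivation tree tau_1 = r1(r2, r4(r3, r5)) from Fig. 2a.
tau1 = ("r1", [("r2", []), ("r4", [("r3", []), ("r5", [])])])

print(states(tau1))         # {'S'}
print("S" in states(tau1))  # True: the automaton accepts tau_1
```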
Figure 2: The tree τ1, evaluated by the homomorphism h and the algebra E∗. (a) Tree τ1 = r1(r2, r4(r3, r5)). (b) Term h(τ1) = ∗(John, ∗(walks, ∗(on, Mars))). (c) h(τ1) evaluated in E∗: "John walks on Mars".

2.2 Interpreted regular tree grammars

We can combine tree automata, homomorphisms, and algebras into grammars that can describe languages of arbitrary objects, as well as relations between such objects – in a way that inherits many technical properties from context-free grammars, while extending the expressive capacity.

An interpreted regular tree grammar (IRTG, Koller and Kuhlmann (2011)) 𝒢 = (M, (h1, 𝒜1), ..., (hn, 𝒜n)) consists of a tree automaton M over some signature Σ, together with an arbitrary number n of interpretations (hi, 𝒜i), where each 𝒜i is an algebra over some signature ∆i and each hi is a tree homomorphism from Σ to ∆i. The automaton M describes a language L(M) of derivation trees which represent abstract syntactic structures. Each derivation tree τ is then interpreted n ways: we map it to a term hi(τ) ∈ T∆i, and then we evaluate hi(τ) to a value ai = ⟦hi(τ)⟧_𝒜i of the algebra 𝒜i.

IRTGs with multiple interpretations can be used to represent synchronous grammars and (bottom-up) tree-to-tree and tree-to-string transducers. In general, any grammar formalism whose grammars describe derivations in terms of a finite set of states can typically be converted into IRTG.

2.3 Parsing IRTGs
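The introduction names the intersection of two tree automata as one of the two key operations behind IRTG parsing. As a rough illustration, here is a minimal sketch of the textbook bottom-up product construction for intersecting two tree automata; it is not the optimized algorithm developed in this paper, and the rule-list encoding and names are illustrative assumptions.

```python
# Naive bottom-up intersection (product construction) of two tree automata.
# Each automaton is given as a list of rules (parent_state, symbol, child_states)
# plus a final state; the pair state (X, Y) derives exactly the trees that
# X derives in automaton 1 and Y derives in automaton 2.

def intersect(rules1, final1, rules2, final2):
    product_rules = set()
    derivable = set()              # product states already known to derive a tree
    changed = True
    while changed:                 # fixpoint iteration, bottom-up
        changed = False
        for p1, sym1, kids1 in rules1:
            for p2, sym2, kids2 in rules2:
                if sym1 != sym2 or len(kids1) != len(kids2):
                    continue
                kids = tuple(zip(kids1, kids2))
                if not all(k in derivable for k in kids):
                    continue       # some child pair is not (yet) derivable
                rule = ((p1, p2), sym1, kids)
                if rule not in product_rules:
                    product_rules.add(rule)
                    derivable.add((p1, p2))
                    changed = True
    return product_rules, (final1, final2)

# The Fig. 1 automaton as a rule list, intersected with itself for illustration.
g = [("S", "r1", ("NP", "VP")), ("NP", "r2", ()), ("VP", "r3", ()),
     ("VP", "r4", ("VP", "NP")), ("NP", "r5", ())]

rules, final = intersect(g, "S", g, "S")
print(len(rules), final)   # 5 ('S', 'S')
```

The nested loop over all rule pairs already hints at why a naive implementation scales poorly, which is the starting point of the discussion in Section 3.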