Algorithm for Analysis and Translation of Sentence Phrases
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
Fundamental Methodological Issues of Syntactic Pattern Recognition
Pattern Anal Applic (2014) 17:465–480, DOI 10.1007/s10044-013-0322-1. ORIGINAL ARTICLE. Mariusz Flasiński · Janusz Jurek. Received: 22 February 2012 / Accepted: 30 January 2013 / Published online: 9 March 2013. © The Author(s) 2013. This article is published with open access at Springerlink.com.

Abstract — Fundamental open problems, which are frontiers of syntactic pattern recognition, are discussed in the paper. Methodological considerations on crucial issues in the areas of string and graph grammar-based syntactic methods are made. As a result, recommendations concerning an enhancement of context-free grammars, as well as the construction of parsable and inducible classes of graph grammars, are formulated.

Keywords — Syntactic pattern recognition · Formal language · Graph grammar

1 Introduction. Representing a pattern as a structure of the form of a string, e.g. … Syntactic pattern recognition prevails over "standard" pattern recognition approaches (probabilistic, discriminant function-based, NN, etc.) when the patterns considered can be characterized better with structural features than with vectors of features. What is more, using this approach we can not only make a classification (in the sense of ascribing a pattern to a pre-defined category) but also give a (structural) interpretation of an unknown pattern. Therefore, for structurally oriented recognition problems such as character recognition, speech recognition, scene analysis, chemical and biological structure analysis, texture analysis, fingerprint recognition, and geophysics, a syntactic approach was applied successfully from its beginning in the early 1960s for the next two decades. The rapid development of syntactic methods slowed down in the 1990s, and the experts in this area (see e.g. …
Lecture 5 Mildly Context-Sensitive Languages
Last modified 2016/07/08.

Multiple context-free grammars. In the previous lecture, we proved the equivalence between TAG and LIG. In fact, there is yet another grammar formalism, namely the head grammar, that is equivalent to these two formalisms. A head grammar is a special kind of multiple context-free grammar (MCFG), so we introduce the latter formalism first.

The MCFG is a natural generalization of the "bottom-up" view of the CFG. The standard, "top-down" view of the CFG takes a rule

    A → X₁ … Xₙ

as a permission to rewrite A into X₁ … Xₙ. In contrast, the bottom-up view of the CFG interprets the same rule not as a rewriting instruction, but as an implication, which says:

    A ⇒* x₁ … xₙ   if   X₁ ⇒* x₁ ∧ ⋯ ∧ Xₙ ⇒* xₙ.

In fact, to define the language L(G) = { w ∈ Σ* | S ⇒* w } of a CFG G, there is no need to define the derivation relation ⇒* which holds between strings of terminals and nonterminals. All you need is an inductive definition of the subrelation of this relation that holds between single nonterminals and strings of terminals:

    ⇒* ∩ (N × Σ*).

To express that nonterminal A and terminal string x stand in this relation (i.e., A ⇒* x), we may write A(x), treating nonterminal A as a unary predicate on terminal strings. Then the bottom-up interpretation of the CFG rule can be written in the form of a Horn clause:

    A(x₁ … xₙ) ← X₁(x₁), …, Xₙ(xₙ).

A context-free grammar now becomes a Horn clause program consisting of rules of the above form.
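The Horn-clause reading turns recognition into a fixpoint computation: keep applying the implications until no new facts A(x) appear. A minimal sketch in Python (my own illustration, not from the lecture; the toy grammar and the length bound that keeps the fixpoint finite are assumptions):

```python
from itertools import product

# A CFG rule A -> X1 ... Xn is read bottom-up as the Horn clause
#   A(x1...xn) <- X1(x1), ..., Xn(xn).
# We compute the relation A =>* x as a fixpoint, restricted to strings
# of length <= max_len so that the iteration terminates.
def derivable(rules, max_len):
    lang = {a: set() for a in rules}          # lang[A] = known strings x with A =>* x
    changed = True
    while changed:
        changed = False
        for a, bodies in rules.items():
            for body in bodies:
                # each body symbol is a terminal (plain string) or a nonterminal key
                choices = [lang[s] if s in rules else {s} for s in body]
                for parts in product(*choices):
                    x = "".join(parts)
                    if len(x) <= max_len and x not in lang[a]:
                        lang[a].add(x)
                        changed = True
    return lang

# Toy grammar (an assumption for illustration): S -> a S b | a b,
# i.e. the language { a^n b^n | n >= 1 }.
rules = {"S": [("a", "S", "b"), ("a", "b")]}
print(sorted(derivable(rules, 6)["S"]))   # ['aaabbb', 'aabb', 'ab']
```

The MCFG generalizes exactly this view: predicates over tuples of strings instead of single strings.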
Regular Expressions with a Brief Intro to FSM
15-123 Systems Skills in C and Unix.

The case for regular expressions: many web applications require pattern matching — for example, looking for an <a href> tag for links, or token search. A regular expression is a pattern that defines a class of strings, with a special syntax used to represent the class. E.g., *.c matches any name that ends with .c.

Formal languages: a formal language consists of an alphabet and a formal grammar; the formal grammar defines the strings that belong to the language. Formal languages with formal semantics generate rules for semantic specifications of programming languages.

Automata: an automaton (automata in the plural) is a machine that can recognize the valid strings generated by a formal language. A finite automaton is a mathematical model of a finite state machine (FSM), an abstract model under which all modern computers are built. An FSM consists of a finite set of states and a transition table; it can be in any one of the states and can transit from one state to another based on rules given by a transition function.

Example: what does this machine represent? Describe the kind of strings it will accept. Exercise: draw an FSM that accepts any string with an even number of A's, assuming the alphabet is {A, B}. Build an FSM — stream: "I love cats and more cats and big cats"; pattern: "cat".

Regex versus FSM: regular expressions and FSMs are equivalent concepts — a regular expression is a pattern that can be recognized by an FSM. Regex is an example of how good theory leads to good programs. A regex defines a class of patterns, e.g. patterns that end with a "*". Regex utilities in Unix include grep, awk, and sed; applications include pattern matching (DNA) and web searches. A regex engine is software that can process a string to find regex matches.
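The slides' exercise (strings over {A, B} with an even number of A's) has a two-state solution: the state is simply the parity of the A's seen so far. A minimal sketch, with state names and encoding of my own choosing:

```python
# A DFA is a finite set of states plus a transition table. This one
# accepts strings over {A, B} containing an even number of A's:
# the state tracks the parity of the A's seen so far.
EVEN_A = {
    "start": "even",
    "accept": {"even"},
    "delta": {("even", "A"): "odd",  ("even", "B"): "even",
              ("odd",  "A"): "even", ("odd",  "B"): "odd"},
}

def run_dfa(dfa, s):
    state = dfa["start"]
    for ch in s:
        state = dfa["delta"][(state, ch)]   # exactly one move per input symbol
    return state in dfa["accept"]

print(run_dfa(EVEN_A, "ABBA"))   # True  (two A's)
print(run_dfa(EVEN_A, "AB"))     # False (one A)
```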
Generating Context-Free Grammars Using Classical Planning
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17). Javier Segovia-Aguas (1), Sergio Jiménez (2), Anders Jonsson (1). (1) Universitat Pompeu Fabra, Barcelona, Spain; (2) University of Melbourne, Parkville, Australia. [email protected], [email protected], [email protected]

Abstract: This paper presents a novel approach for generating Context-Free Grammars (CFGs) from small sets of input strings (a single input string in some cases). Our approach is to compile this task into a classical planning problem whose solutions are sequences of actions that build and validate a CFG compliant with the input strings. In addition, we show that our compilation is suitable for implementing the two canonical tasks for CFGs, string production and string recognition.

[Figure 1: (a) an example context-free grammar, with rules S → aSa, S → bSb, S → a; (b) the corresponding parse tree for the string aabbaa.]

1 Introduction. A formal grammar is a set of symbols and rules that describe how to form the strings of a certain formal language. Usually two tasks are defined over formal grammars:

- Production: given a formal grammar, generate strings that belong to the language represented by the grammar. …

… symbols in the grammar and (2) a bounded maximum size of the rules in the grammar (i.e. a maximum number of symbols in the right-hand side of the grammar rules). Our approach is compiling this inductive learning task into a classical planning task whose solutions are sequences of ac…
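For Figure 1's grammar (S → aSa, S → bSb, S → a), the string-recognition task has a direct recursive sketch. This is my own illustration of the recognition task, not the paper's planning compilation, which instead encodes grammar construction and validation as planning actions:

```python
# Recognize the language of S -> aSa | bSb | a: strings over {a, b}
# that peel off matching first/last symbols down to a single central 'a'.
def in_language(s):
    if s == "a":                      # base rule S -> a
        return True
    if len(s) >= 3 and s[0] == s[-1] and s[0] in "ab":
        return in_language(s[1:-1])   # recursive rules S -> aSa, S -> bSb
    return False

print(in_language("abaaaba"))   # True
print(in_language("ab"))        # False
```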
Formal Grammar Specifications of User Interface Processes
by Michael Wayne Bates, Bachelor of Science in Arts and Sciences, Oklahoma State University, Stillwater, Oklahoma, 1982. Submitted to the Faculty of the Graduate College of the Oklahoma State University in partial fulfillment of the requirements for the Degree of MASTER OF SCIENCE, July 1984. Thesis approved by the Dean of the Graduate College.

PREFACE. The benefits and drawbacks of using a formal grammar model to specify a user interface have been the primary focus of this study. In particular, the regular grammar and context-free grammar models have been examined for their relative strengths and weaknesses. The earliest motivation for this study was provided by Dr. James R. VanDoren at TMS Inc. This thesis grew out of a discussion about the difficulties of designing an interface that TMS was working on. I would like to express my gratitude to my major advisor, Dr. Mike Folk, for his guidance and invaluable help during this study. I would also like to thank Dr. G. E. Hedrick and Dr. J. P. Chandler for serving on my graduate committee. A special thanks goes to my wife, Susan, for her patience and understanding throughout my graduate studies.

TABLE OF CONTENTS. I. Introduction. II. An Overview of Formal Language Theory (Introduction; Grammars; Recognizers; Summary). III. Using Formal Grammars to Specify User Interfaces (Introduction; Definition of a User Interface; Benefits of a Formal Model; Drawbacks of a Formal Model) …
The Linguistic Relevance of Tree Adjoining Grammar
University of Pennsylvania ScholarlyCommons, Technical Reports (CIS), Department of Computer & Information Science, April 1985. Anthony S. Kroch (University of Pennsylvania) and Aravind K. Joshi (University of Pennsylvania, [email protected]).

Recommended citation: Anthony S. Kroch and Aravind K. Joshi, "The Linguistic Relevance of Tree Adjoining Grammar", April 1985. University of Pennsylvania Department of Computer and Information Science Technical Report No. MS-CIS-85-16. Posted at https://repository.upenn.edu/cis_reports/671.

Abstract: In this paper we apply a new notation for the writing of natural language grammars to some classical problems in the description of English. The formalism is the Tree Adjoining Grammar (TAG) of Joshi, Levy and Takahashi 1975, which was studied initially only for its mathematical properties but which now turns out to be an interesting candidate for the proper notation of meta-grammar, that is, for the universal grammar of contemporary linguistics. Interest in the application of the TAG formalism to the writing of natural language grammars arises out of recent work on the possibility of writing grammars for natural languages in a metatheory of restricted generative capacity (for example, Gazdar 1982 and Gazdar et al. 1985). There have also been several recent attempts to examine the linguistic metatheory of restricted grammatical formalisms, in particular context-free grammars. The inadequacies of context-free grammars have been discussed both from the point of view of strong generative capacity (Bresnan et al. …
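TAG's core operation, adjunction, splices an auxiliary tree into a matching node of another tree, with the displaced subtree re-attached at the auxiliary tree's foot. A toy sketch under my own minimal encoding (not the report's notation): trees are (label, children) tuples, and a foot node carries the root label with a trailing '*'.

```python
# Adjoin auxiliary tree `aux` at the topmost node(s) of `tree` carrying
# aux's root label; the displaced subtree moves under aux's foot node.
# (Real TAG picks one adjunction site; this sketch takes the topmost.)
def substitute_foot(aux, subtree, foot):
    label, children = aux
    if label == foot:
        return subtree
    return (label, [substitute_foot(c, subtree, foot) for c in children])

def adjoin(tree, aux):
    label, children = tree
    if label == aux[0]:               # adjunction site found
        return substitute_foot(aux, tree, aux[0] + "*")
    return (label, [adjoin(c, aux) for c in children])

def frontier(t):                      # the string a derived tree yields
    label, children = t
    return [label] if not children else [w for c in children for w in frontier(c)]

# Hypothetical initial tree for "John sleeps" and auxiliary tree for "deeply".
init = ("S", [("NP", [("John", [])]), ("VP", [("sleeps", [])])])
aux  = ("VP", [("VP*", []), ("Adv", [("deeply", [])])])
print(" ".join(frontier(adjoin(init, aux))))   # John sleeps deeply
```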
Finite-State Automata and Algorithms
Bernd Kiefer, [email protected]. Many thanks to Anette Frank for the slides. MSc. Computational Linguistics Course, SS 2009.

Overview. Finite-state automata (FSA) — what for? Recap: the Chomsky hierarchy of grammars and languages; FSA, regular languages and regular expressions; appropriate problem classes and applications. Finite-state automata and algorithms: regular expressions and FSA; deterministic (DFSA) vs. non-deterministic (NFSA) finite-state automata; determinization (from NFSA to DFSA); minimization of DFSA. Extensions: finite-state transducers and FST operations.

Chomsky hierarchy of grammars and languages, with the corresponding hierarchy of automata:

- Regular languages / regular PS grammars (Type 3) — finite-state automata
- Context-free languages / context-free PS grammars (Type 2) — push-down automata
- Context-sensitive languages / tree adjoining grammars (Type 1) — linear bounded automata
- Type-0 languages / general PS grammars — Turing machine

Moving up the hierarchy, recognition becomes computationally more complex and less efficient. Finite-state automata model regular languages: regular expressions describe/specify regular languages, finite automata recognize them, and the automata are executable machines. This gives appropriate problem classes and algorithms for FSA.

Languages, formal languages and grammars. An alphabet Σ is a finite set of symbols. A string is a sequence x₁ … xₙ of symbols xᵢ from the alphabet; a special case is the empty string ε. A language over Σ is a set of strings that can be generated from Σ. Sigma star Σ* is the set of all possible strings over the alphabet Σ; e.g. for Σ = {a, b}, Σ* = {ε, a, b, aa, ab, ba, bb, aaa, aab, …}. Sigma plus Σ⁺ is Σ* − {ε}. Special languages: the empty language ∅ = {}, which is distinct from {ε}, the language containing only the empty string.
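The determinization step listed in the overview (NFSA → DFSA) is the classical subset construction: each DFA state is the set of NFA states reachable on the input read so far. A sketch ignoring ε-transitions for brevity; the example NFA (strings over {a, b} ending in "ab") is my own:

```python
from itertools import chain

def determinize(alphabet, nfa_delta, start, accepting):
    """Subset construction: DFA states are frozensets of NFA states."""
    start_set = frozenset([start])
    delta, seen, todo = {}, {start_set}, [start_set]
    while todo:
        S = todo.pop()
        for a in alphabet:
            # union of all NFA moves from states in S on symbol a
            T = frozenset(chain.from_iterable(nfa_delta.get((q, a), ()) for q in S))
            delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return delta, start_set, {S for S in seen if S & accepting}

def accepts(delta, start_set, accept, w):
    S = start_set
    for a in w:
        S = delta[(S, a)]
    return S in accept

# NFA for strings ending in "ab": nondeterministically guess where the
# final "ab" starts (state 1 = saw that a, state 2 = saw that ab).
nfa = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}}
dfa = determinize("ab", nfa, 0, {2})
print(accepts(*dfa, "aab"), accepts(*dfa, "aba"))   # True False
```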
Surface Without Structure Word Order and Tractability Issues in Natural Language Analysis
Annius Groenink. Example of the cross-serial word orders at issue:

…, daß Frank Julia Fred schwimmen helfen sah (German)
…, that Frank saw Julia help Fred swim (English)
…, dat Frank Julia Fred zag helpen zwemmen (Dutch)

[Dutch title page, translated:] Surface without Structure: word order and computational tractability in natural-language analysis (with a summary in Dutch). Thesis for the degree of doctor at Utrecht University, under the authority of the Rector Magnificus, Prof. dr H.O. Voorma, pursuant to the decision of the College of Deans, to be defended in public on Friday 7 November 1997 at 16.15, by Annius Victor Groenink, born on 29 December 1971 in Twello (Voorst). Promotors: Prof. dr D.J.N. van Eijck (Research Institute for Language and Speech, Utrecht University; Centre for Mathematics and Computer Science) and Prof. dr W.C. Rounds (Department of Electrical Engineering and Computer Science, University of Michigan).

The research for this thesis was funded by the Stichting Informatica-Onderzoek Nederland (SION) of the Dutch national foundation for academic research NWO, under project no. 612-317-420: incremental parser generation and context-sensitive disambiguation: a multidisciplinary perspective, and carried out at the Centre for Mathematics and Computer Science (CWI) in Amsterdam. Copyright © 1997, Annius Groenink. All rights reserved. Printed and bound at CWI, Amsterdam; cover: focus on non-projectivity in …
Parsing Discontinuous Structures
Wolfgang Maier, Parsing Discontinuous Structures. [German front matter, translated:] Dissertation for the academic degree of Doctor of Philosophy, Faculty of Humanities, Eberhard Karls Universität Tübingen, submitted by Wolfgang Maier from Göppingen, 2013. Printed with the permission of the Faculty of Humanities. Dean: Prof. Dr. Jürgen Leonhardt. First reviewer: Prof. Dr. Laura Kallmeyer, Universität Düsseldorf. Further reviewers: Prof. Dr. Erhard Hinrichs, Universität Tübingen; PD Dr. Frank Richter, Universität Tübingen. Date of the oral examination: 16 October 2012. TOBIAS-lib, Tübingen.

ABSTRACT. The development of frameworks that allow grammars for natural languages to be stated in a mathematically precise way is a core task of the field of computational linguistics. The same holds for the development of techniques for finding the syntactic structure of a sentence given a grammar — parsing. The focus of this thesis lies on data-driven parsing. In this area, one uses probabilistic grammars that are extracted from manually analyzed sentences coming from a treebank. The probability model can be used for disambiguation, i.e., for finding the best analysis of a sentence.

In the last decades, enormous progress has been achieved in the domain of data-driven parsing. Many current parsers are nevertheless still limited in an important aspect: they cannot handle discontinuous structures, a phenomenon which occurs especially frequently in languages with a free word order. This is due to the fact that those parsers are based on Probabilistic Context-Free Grammar (PCFG), a framework that cannot model discontinuities. In this thesis, I propose the use of Probabilistic Simple Range Concatenation Grammar (PSRCG), a natural extension of PCFG, for data-driven parsing. …
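Extracting a probabilistic grammar from a treebank, as the abstract describes, is typically done by relative-frequency estimation: P(A → β) = count(A → β) / count(A). A minimal sketch with made-up rule occurrences (the rule inventory is an assumption for illustration):

```python
from collections import Counter

# Relative-frequency estimation: read rule occurrences off treebank
# trees, then normalize each count by the count of its left-hand side.
def estimate_pcfg(rule_occurrences):
    rule_counts = Counter(rule_occurrences)
    lhs_counts = Counter(lhs for lhs, _ in rule_occurrences)
    return {(lhs, rhs): c / lhs_counts[lhs]
            for (lhs, rhs), c in rule_counts.items()}

# Hypothetical occurrences collected from two toy trees.
occ = [("S", ("NP", "VP")), ("S", ("NP", "VP")),
       ("NP", ("Det", "N")), ("NP", ("PRON",))]
probs = estimate_pcfg(occ)
print(probs[("NP", ("Det", "N"))])   # 0.5
```

For each left-hand side the probabilities sum to 1, which is what lets the parser rank competing analyses for disambiguation.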
Using Contextual Representations to Efficiently Learn Context-Free Languages
Journal of Machine Learning Research 11 (2010) 2707–2744. Submitted 12/09; revised 9/10; published 10/10. Alexander Clark ([email protected]), Department of Computer Science, Royal Holloway, University of London, Egham, Surrey, TW20 0EX, United Kingdom; Rémi Eyraud ([email protected]) and Amaury Habrard ([email protected]), Laboratoire d'Informatique Fondamentale de Marseille, CNRS UMR 6166, Aix-Marseille Université, 39, rue Frédéric Joliot-Curie, 13453 Marseille cedex 13, France. Editor: Fernando Pereira.

Abstract: We present a polynomial update time algorithm for the inductive inference of a large class of context-free languages using the paradigm of positive data and a membership oracle. We achieve this result by moving to a novel representation, called Contextual Binary Feature Grammars (CBFGs), which are capable of representing richly structured context-free languages as well as some context-sensitive languages. These representations explicitly model the lattice structure of the distribution of a set of substrings and can be inferred using a generalisation of distributional learning. This formalism is an attempt to bridge the gap between simple learnable classes and the sorts of highly expressive representations necessary for linguistic representation: it allows the learnability of a large class of context-free languages, which includes all regular languages and those context-free languages that satisfy two simple constraints. The formalism and the algorithm seem well suited to natural language and in particular to the modeling of first language acquisition. Preliminary experimental results confirm the effectiveness of this approach.

Keywords: grammatical inference, context-free language, positive data only, membership queries

1. …
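Distributional learning, which the paper generalises, is built on the relation between substrings and their contexts: a context (l, r) and a substring u combine to the string l·u·r. A small sketch of my own that collects, from a finite sample, the observed contexts of every substring — the raw material of the distributional lattice:

```python
from collections import defaultdict

# For every substring u of every sample word w, record the context
# (l, r) such that w = l + u + r.
def contexts(sample):
    ctx = defaultdict(set)
    for w in sample:
        for i in range(len(w) + 1):
            for j in range(i, len(w) + 1):
                ctx[w[i:j]].add((w[:i], w[j:]))
    return ctx

# Hypothetical sample from the language { a^n b^n }.
sample = ["ab", "aabb"]
print(sorted(contexts(sample)["ab"]))   # [('', ''), ('a', 'b')]
```

Substrings sharing many contexts are candidates for being generated by the same nonterminal, which is the intuition the CBFG representation makes precise.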
QUESTION BANK SOLUTION Unit 1 Introduction to Finite Automata
FLAT 10CS56.

1. Obtain DFAs to accept strings of a's and b's having exactly one a. (5m) (Jun–Jul 10)

2. Obtain a DFA to accept strings of a's and b's having an even number of a's and b's. (5m) (Jun–Jul 10)
   L = {ε, aabb, abab, baba, baab, bbaa, aabbaa, …}

3. Give applications of finite automata. (5m) (Jun–Jul 10)
   String processing: consider finding all occurrences of a short string (pattern string) within a long string (text string). This can be done by processing the text through a DFA: the DFA for all strings that end with the pattern string. Each time the accept state is reached, the current position in the text is output.
   Finite-state machines: a finite-state machine is an FA together with actions on the arcs.
   Statecharts: statecharts model tasks as a set of states and actions. They extend FA diagrams.
   Lexical analysis: in compiling a program, the first step is lexical analysis. This isolates keywords, identifiers etc., while eliminating irrelevant symbols. A token is a category, for example "identifier", "relation operator" or a specific keyword.

4. Define DFA, NFA and language. (5m) (Jun–Jul 10)
   A deterministic finite automaton (DFA), also known as a deterministic finite state machine, is a finite state machine that accepts/rejects finite strings of symbols and produces a unique computation (or run) of the automaton for each input string. 'Deterministic' refers to the uniqueness of the computation. A nondeterministic finite automaton (NFA), or nondeterministic finite state machine, is a finite state machine where, from each state and a given input symbol, the automaton may jump into several possible next states.
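Question 2's language (even number of a's and even number of b's) is accepted by a four-state DFA whose states record the two parities. A sketch where the pair (pa, pb) plays the role of the DFA state:

```python
# Four-state DFA for: even number of a's AND even number of b's.
# The state is the parity pair (pa, pb); (0, 0) is both start and accept.
def even_a_even_b(s):
    pa = pb = 0
    for ch in s:
        if ch == "a":
            pa ^= 1          # flip the a-parity
        elif ch == "b":
            pb ^= 1          # flip the b-parity
    return (pa, pb) == (0, 0)

print(even_a_even_b("abab"))   # True
print(even_a_even_b("aab"))    # False
```

All the sample strings listed for L (aabb, abab, baba, baab, bbaa, aabbaa) are accepted, as is ε.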
Efficient Combinator Parsing for Natural-Language
University of Windsor, Scholarship at UWindsor, Electronic Theses and Dissertations, 1-1-2006. Rahmatullah Hafiz, University of Windsor.

Recommended citation: Hafiz, Rahmatullah, "Efficient combinator parsing for natural-language." (2006). Electronic Theses and Dissertations. 7137. https://scholar.uwindsor.ca/etd/7137

Efficient Combinator Parsing for Natural-Language, by Rahmatullah Hafiz. A thesis submitted to the Faculty of Graduate Studies and Research through Computer Science in partial fulfillment of the requirements for the Degree of Master of Computer Science at …
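Combinator parsing, the thesis topic, builds parsers by composing small functions. A deliberately minimal sketch of my own (far from the thesis's efficient, memoized formulation): a parser maps an input string and a start position to the set of positions where a parse can end, so alternation is set union and sequencing chains end positions. Returning sets of positions is what lets the style handle the ambiguity typical of natural-language grammars.

```python
# Minimal parser combinators: parser(s, i) -> set of end positions.
def term(t):
    return lambda s, i: {i + 1} if s[i:i + 1] == t else set()

def alt(p, q):                        # ordered choice collapsed to union
    return lambda s, i: p(s, i) | q(s, i)

def seq(p, q):                        # run q from every end position of p
    return lambda s, i: {k for j in p(s, i) for k in q(s, j)}

# Toy grammar s -> 'a' s | 'a' (one or more a's), written as a function
# so the recursion is delayed. Left-recursive grammars would loop here;
# handling them efficiently is exactly what the thesis addresses.
def s_parser(s, i):
    return alt(seq(term("a"), s_parser), term("a"))(s, i)

print(s_parser("aaa", 0))   # {1, 2, 3}: parses consuming 1, 2, or 3 a's
```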