Theory of Computation
Recommended publications
-
NooJ Computational Devices Max Silberztein
NooJ Computational Devices Max Silberztein To cite this version: Max Silberztein. NooJ Computational Devices. Formalising Natural Languages With NooJ, Jun 2012, Paris, France. hal-02435921 HAL Id: hal-02435921 https://hal.archives-ouvertes.fr/hal-02435921 Submitted on 11 Jan 2020. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. NOOJ COMPUTATIONAL DEVICES MAX SILBERZTEIN Introduction NooJ’s linguistic development environment provides tools for linguists to construct linguistic resources that formalize 7 types of linguistic phenomena: typography, orthography, inflectional and derivational morphology, local and structural syntax, and semantics. NooJ also provides a set of parsers that can process any linguistic resource for these 7 types, and apply it to any corpus of texts in order to extract examples, annotate matching sequences, perform statistical analyses, etc. NooJ’s approach to Linguistics is peculiar in the world of Computational Linguistics: instead of constructing a large single grammar to describe a particular natural language (e.g. “a grammar of English”), NooJ users typically construct, edit, test and maintain a large number of local (small) grammars; for instance, there is a grammar that describes how to conjugate the verb to be, another grammar that describes how to state a date in English, another grammar that describes the heads of Noun Phrases, etc. -
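As a rough illustration of the "local grammar" idea described in the NooJ entry above (a sketch only, not NooJ's actual graph or dictionary formalism), a small date grammar can be expressed as a single finite-state pattern; the month list and date format below are assumptions made for the example.

```python
import re

# A toy "local grammar" for simple English dates, written as one finite-state
# pattern (NooJ would express this as a graph or dictionary resource instead).
MONTH = (r"(?:January|February|March|April|May|June|July|August|"
         r"September|October|November|December)")
DATE = re.compile(rf"\b{MONTH}\s+(?:[0-2]?\d|3[01])(?:st|nd|rd|th)?,\s*\d{{4}}\b")

text = "The workshop took place on June 14th, 2012 in Paris."
print(DATE.findall(text))   # ['June 14th, 2012']
```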
Practical Experiments with Regular Approximation of Context-Free Languages
Practical Experiments with Regular Approximation of Context-Free Languages Mark-Jan Nederhof German Research Center for Artificial Intelligence Several methods are discussed that construct a finite automaton given a context-free grammar, including both methods that lead to subsets and those that lead to supersets of the original context-free language. Some of these methods of regular approximation are new, and some others are presented here in a more refined form with respect to existing literature. Practical experiments with the different methods of regular approximation are performed for spoken-language input: hypotheses from a speech recognizer are filtered through a finite automaton. 1. Introduction Several methods of regular approximation of context-free languages have been proposed in the literature. For some, the regular language is a superset of the context-free language, and for others it is a subset. We have implemented a large number of methods, and where necessary, refined them with an analysis of the grammar. We also propose a number of new methods. The analysis of the grammar is based on a sufficient condition for context-free grammars to generate regular languages. For an arbitrary grammar, this analysis identifies sets of rules that need to be processed in a special way in order to obtain a regular language. The nature of this processing differs for the respective approximation methods. For other parts of the grammar, no special treatment is needed and the grammar rules are translated to the states and transitions of a finite automaton without affecting the language. -
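To make the superset/subset distinction in the abstract above concrete, here is a minimal sketch (my own example, not one of the paper's constructions): the context-free language {a^n b^n} is approximated from above by the regular language a*b*, so every string of the original language is accepted by the approximation but not conversely.

```python
import re

def in_cfl(w: str) -> bool:
    """Exact membership in the context-free language { a^n b^n : n >= 0 }."""
    n = len(w) // 2
    return w == "a" * n + "b" * n

# regular superset approximation of the grammar S -> a S b | epsilon
superset = re.compile(r"a*b*$")

for w in ["", "ab", "aabb", "aab", "ba"]:
    print(repr(w), in_cfl(w), bool(superset.match(w)))
# every member of the CFL is accepted by the approximation, but e.g. "aab"
# shows that the converse fails (a superset, not equality)
```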
CDM Context-Sensitive Grammars
CDM Context-Sensitive Grammars Klaus Sutner, Carnegie Mellon University 70-cont-sens 2017/12/15 23:17 Contents: Context-Sensitive Grammars; Linear Bounded Automata; LBA and Counting. Where Are We? Context-free languages based on grammars with productions A → α are very important since they describe many aspects of programming languages and admit very efficient parsers. CFLs have a natural machine model (PDA) that is useful e.g. to evaluate arithmetic expressions. Properties of CFLs are mostly undecidable, Emptiness and Finiteness being notable exceptions. Postfix Calculators Hewlett-Packard figured out 40 years ago that reverse Polish notation is by far the best way to perform lengthy arithmetic calculations. Very easy to implement with a stack. The old warhorse dc also uses RPN: "10 20 30 + * n" prints 500; "10 20 30 f" prints the stack 30 20 10; "^ n" then prints 1073741824000000000000000000000000000000. Languages versus Machines Why the hedging about "aspects of programming languages"? Because some properties of programming language lie beyond the power of CFLs. Here is a classical example: variables must be declared before they can be used: begin int x; ... x = 17; .. Languages versus Machines To deal with problems like this one we need to strengthen our grammars. The key is to remove the constraint of being "context-free." This leads to another important grammar based class of languages: context-sensitive languages. As it turns out, CSL are plenty strong enough to describe programming languages, but in the real world it does not matter, it is better to think of programming language as being context-free, plus a few extra constraints. -
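The slide's claim that postfix notation is "very easy to implement with a stack" can be checked in a few lines; this sketch mirrors the dc examples above (the operator set is reduced to the ones used there).

```python
def rpn(tokens):
    """Evaluate a reverse Polish expression with a stack, dc-style."""
    stack = []
    for t in tokens:
        if t in {"+", "-", "*", "^"}:
            b, a = stack.pop(), stack.pop()
            stack.append({"+": a + b, "-": a - b, "*": a * b, "^": a ** b}[t])
        else:
            stack.append(int(t))
    return stack

print(rpn("10 20 30 + *".split()))   # [500], as in the dc example
print(rpn("10 20 30 ^".split()))     # [10, 20**30]: dc's ^ applied to the top two
```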
CIT 425- AUTOMATA THEORY, COMPUTABILITY and FORMAL LANGUAGES LECTURE NOTE by DR. OYELAMI M. O. Introduction • This Course Cons
CIT 425- AUTOMATA THEORY, COMPUTABILITY AND FORMAL LANGUAGES LECTURE NOTE BY DR. OYELAMI M. O. Status: Core Description: Words and Strings; Concatenation, Word Length; Language Definition; Regular Expression, Regular Language, Recursive Languages; Finite State Automata (FSA), State Diagrams; Pumping Lemma, Grammars, Applications in Computer Science and Engineering, Compiler Specification and Design, Text Editor and Implementation, Very Large Scale Integrated (VLSI) Circuit Specification and Design, Natural Language Processing (NLP) and Embedded Systems. Introduction This course constitutes the theoretical foundation of computer science. Loosely speaking, we can think of automata, grammars, and computability as the study of what can be done by computers in principle, while complexity addresses what can be done in practice. This course has applications in the following areas: digital design, programming languages, and compiler construction. Languages Dictionaries define the term informally as a system suitable for the expression of certain ideas, facts, or concepts, including a set of symbols and rules for their manipulation. While this gives us an intuitive idea of what a language is, it is not sufficient as a definition for the study of formal languages. We need a precise definition for the term. A formal language is an abstraction of the general characteristics of programming languages. Languages can be specified in various ways. One way is to list all the words in the language. Another is to give some criteria that a word must satisfy to be in the language. Another important way is to specify a language through the use of some terminologies: Alphabet: A finite, nonempty set Σ of symbols. -
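The definitions in the lecture note above (alphabet, strings over an alphabet, language as a set of strings satisfying a criterion) can be illustrated directly; the alphabet and the membership criterion below are chosen only for the example.

```python
from itertools import product

sigma = ("a", "b")                       # alphabet Sigma
# all strings over Sigma of length at most 3 (a finite slice of Sigma*)
words = ["".join(p) for n in range(4) for p in product(sigma, repeat=n)]

# a language given by a criterion: L = { w : w contains "ab" as a substring }
L = [w for w in words if "ab" in w]
print(L)                                 # ['ab', 'aab', 'aba', 'abb', 'bab']
```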
Implementation of Unrestricted Grammar in to the Recursively Enumerable Language Using Turing Machine
The International Journal Of Engineering And Science (IJES), Volume 2, Issue 3, Pages 56-59, 2013, ISSN: 2319-1813, ISBN: 2319-1805. Implementation of Unrestricted Grammar in To the Recursively Enumerable Language Using Turing Machine. Jainendra Singh (Department of Computer Science, Maharaja Surajmal Institute) and Dr. S.K. Saxena (Department of Computer Engineering, Delhi Technological University). Abstract: This paper presents the implementation of unrestricted grammars into recursively enumerable languages on the JFLAP platform. Automata play a major role in compiler design and parsing. The class of formal languages that works for the most complex problems is the set of recursively enumerable languages (RELs). RELs are accepted by Turing machines. Turing machines are the most powerful computational machines and are the theoretical basis for modern computers. Turing machines work for all classes of languages, including regular languages and CFLs as well as recursively enumerable languages. Unrestricted grammars are much more powerful than restricted forms like regular and context-free grammars. In fact, unrestricted grammars correspond to the largest family of languages we can hope to recognize by mechanical means; that is, unrestricted grammars generate exactly the family of recursively enumerable languages. A Turing machine is used for the implementation of unrestricted grammars and RELs in JFLAP -
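In the spirit of JFLAP's brute-force parser mentioned in this entry, the sketch below tests membership for a grammar given as plain rewrite rules by searching over sentential forms. The grammar is the textbook one for {a^n b^n c^n} (an assumed example, not taken from the paper); since its rules never shrink a sentential form, forms longer than the target can be pruned and the search terminates.

```python
from collections import deque

# rewrite rules lhs -> rhs of a (non-contracting) unrestricted grammar
RULES = [("S", "aSBC"), ("S", "aBC"), ("CB", "BC"),
         ("aB", "ab"), ("bB", "bb"), ("bC", "bc"), ("cC", "cc")]

def derives(target: str, start: str = "S") -> bool:
    """Breadth-first search over sentential forms derivable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        form = queue.popleft()
        if form == target:
            return True
        for lhs, rhs in RULES:
            i = form.find(lhs)
            while i != -1:
                new = form[:i] + rhs + form[i + len(lhs):]
                if len(new) <= len(target) and new not in seen:
                    seen.add(new)
                    queue.append(new)
                i = form.find(lhs, i + 1)
    return False

print(derives("aabbcc"), derives("aabbc"))   # True False
```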
On the Complexity of Regular-Grammars with Integer Attributes ∗ M
Journal of Computer and System Sciences 77 (2011) 393–421, www.elsevier.com/locate/jcss. On the complexity of regular-grammars with integer attributes. M. Manna, F. Scarcello, N. Leone (Department of Mathematics, University of Calabria, 87036 Rende (CS), Italy; Department of Electronics, Computer Science and Systems, University of Calabria, 87036 Rende (CS), Italy). Article history: Received 21 August 2009, Received in revised form 17 May 2010, Available online 27 May 2010. Keywords: Attribute grammars, Computational complexity, Models of computation. Abstract: Regular grammars with attributes overcome some limitations of classical regular grammars, sensibly enhancing their expressiveness. However, the addition of attributes increases the complexity of this formalism leading to intractability in the general case. In this paper, we consider regular grammars with attributes ranging over integers, providing an in-depth complexity analysis. We identify relevant fragments of tractable attribute grammars, where complexity and expressiveness are well balanced. In particular, we study the complexity of the classical problem of deciding whether a string belongs to the language generated by any attribute grammar from a given class C (call it parse[C]). We consider deterministic and ambiguous regular grammars, attributes specified by arithmetic expressions over {| |, +, −, ÷, %, ∗}, and a possible restriction on the attributes composition (that we call strict composition). Deterministic regular grammars with attributes computed by arithmetic expressions over {| |, +, −, ÷, %} are P-complete. If the way to compose expressions is strict, they can be parsed in L, and they remain tractable even if multiplication is allowed. -
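A toy illustration of why integer attributes add expressive power to regular grammars (an assumed example, not the paper's formalism or notation): a right-linear grammar whose productions update a single integer attribute can recognize {a^n b^n}, which no classical regular grammar can. The parse is deterministic and runs in linear time, in line with the tractable fragments discussed in the abstract.

```python
# attributed right-linear grammar, informally:
#   S(x) -> a S(x+1) | b T(x-1) | eps   accept iff x == 0
#   T(x) -> b T(x-1) | eps              accept iff x == 0
def accepts(w: str) -> bool:
    state, x = "S", 0
    for c in w:
        if state == "S" and c == "a":
            x += 1                      # S(x) -> a S(x+1)
        elif c == "b":
            state, x = "T", x - 1       # S(x)/T(x) -> b T(x-1)
        else:
            return False                # no applicable production
    return x == 0                       # attribute test at the end of the parse

print([w for w in ["", "ab", "aabb", "aab", "ba"] if accepts(w)])
# ['', 'ab', 'aabb']  -- exactly the strings of the form a^n b^n
```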
Integrating JFLAP Into the Classroom
Changes to JFLAP to Increase Its Use in Courses Susan H. Rodger Duke University [email protected] ITiCSE 2011 Darmstadt, Germany June 29, 2011 NSF Grants CCLI-0442513 and TUES-1044191 Co-Authors Henry Qin Jonathan Su Overview of JFLAP • Java Formal Languages and Automata Package • Instructional tool to learn concepts of Formal Languages and Automata Theory • Topics: – Regular Languages – Context-Free Languages – Recursively Enumerable Languages – L-systems • With JFLAP your creations come to life! JFLAP – Regular Languages • Create – DFA and NFA – Moore and Mealy – regular grammar – regular expression • Conversions – NFA to DFA to minimal DFA – NFA to regular expression – NFA to regular grammar JFLAP – Regular languages (more) • Simulate DFA and NFA – Step with Closure or Step by State – Fast Run – Multiple Run • Combine two DFA • Compare Equivalence • Brute Force Parser • Pumping Lemma JFLAP – Context-free Languages • Create – Nondeterministic PDA – Context-free grammar – Pumping Lemma • Transform – PDA to CFG – CFG to PDA (LL & SLR parser) – CFG to CNF – CFG to Parse table (LL and SLR) – CFG Brute Force Parser JFLAP – Recursively Enumerable Languages • Create – Turing Machine (1-Tape) – Turing Machine (multi-tape) – Building Blocks – Unrestricted grammar • Parsing – Unrestricted grammar with brute force parser JFLAP - L-Systems • This L-System renders as a tree that grows larger with each successive derivation step. JFLAP’s Use Around the World • JFLAP web page has over 300,000 hits since 1996 • Google Search – JFLAP appears on over 9830 web pages -
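One of the conversions listed above, NFA to DFA, is the subset construction; the sketch below is a minimal stand-alone version (it is not JFLAP code, and the example NFA is the "contains 101" automaton that also appears in the Aalto solved-example entry further down this list).

```python
from itertools import chain

def nfa_to_dfa(delta, start, finals, alphabet):
    """Subset construction: delta maps state -> {symbol: set of states}."""
    start_set = frozenset([start])
    dfa, todo = {}, [start_set]
    while todo:
        S = todo.pop()
        if S in dfa:
            continue
        dfa[S] = {}
        for a in alphabet:
            T = frozenset(chain.from_iterable(
                delta.get(q, {}).get(a, ()) for q in S))
            dfa[S][a] = T
            todo.append(T)
    dfa_finals = {S for S in dfa if S & finals}
    return dfa, start_set, dfa_finals

# NFA accepting strings that contain 101: q0 loops on 0,1; q0 -1-> q1 -0-> q2 -1-> q3
delta = {0: {"0": {0}, "1": {0, 1}}, 1: {"0": {2}}, 2: {"1": {3}},
         3: {"0": {3}, "1": {3}}}
dfa, start, accepting = nfa_to_dfa(delta, 0, {3}, "01")
print(len(dfa), sorted(sorted(S) for S in accepting))
# 6 subset states before minimization; the 3 containing q3 are accepting
```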
Computation of Infix Probabilities for Probabilistic Context-Free Grammars
Computation of Infix Probabilities for Probabilistic Context-Free Grammars Mark-Jan Nederhof (School of Computer Science, University of St Andrews, United Kingdom) and Giorgio Satta (Dept. of Information Engineering, University of Padua, Italy) Abstract: The notion of infix probability has been introduced in the literature as a generalization of the notion of prefix (or initial substring) probability, motivated by applications in speech recognition and word error correction. For the case where a probabilistic context-free grammar is used as language model, methods for the computation of infix probabilities have been presented in the literature, based on various simplifying assumptions. Here we present a solution that applies to the problem in its full generality. 1 Introduction ... or part-of-speech, when a prefix of the input has already been processed, as discussed by Jelinek and Lafferty (1991). Such distributions are useful for speech recognition, where the result of the acoustic processor is represented as a lattice, and local choices must be made for a next transition. In addition, distributions for the next word are also useful for applications of word error correction, when one is processing ‘noisy’ text and the parser recognizes an error that must be recovered by operations of insertion, replacement or deletion. Motivated by the above applications, the problem of the computation of infix probabilities for PCFGs has been introduced in the literature as a generalization of the prefix probability problem. -
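As background for the abstract above, the simplest of these quantities is the probability that a PCFG generates a whole string, which can be computed bottom-up over spans (the classical inside algorithm for a grammar in Chomsky normal form); prefix and infix probabilities generalize it by summing over all ways of completing the observed substring. The sketch below is that baseline computation on a made-up grammar, not the paper's algorithm.

```python
from collections import defaultdict

# toy PCFG in CNF: S -> A B (0.6) | A S1 (0.4), S1 -> S B (1.0), A -> a, B -> b
lexical = {("A", "a"): 1.0, ("B", "b"): 1.0}
binary = {("S", "A", "B"): 0.6, ("S", "A", "S1"): 0.4, ("S1", "S", "B"): 1.0}

def string_probability(w):
    """Inside probabilities: P[(i, j, X)] = Pr(X derives w[i:j])."""
    n = len(w)
    P = defaultdict(float)
    for i, c in enumerate(w):
        for (X, t), p in lexical.items():
            if t == c:
                P[(i, i + 1, X)] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (X, Y, Z), p in binary.items():
                    P[(i, j, X)] += p * P[(i, k, Y)] * P[(k, j, Z)]
    return P[(0, n, "S")]

print(string_probability("ab"), string_probability("aabb"))
# 0.6 and 0.6 * 0.4 = 0.24 (up to floating-point rounding)
```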
1 Finite Automata and Regular Languages
Aalto University Department of Computer Science CS-C2150 Theoretical Computer Science Solved Example Problems, Spring 2020
1 Finite Automata and Regular Languages
1. Problem: Describe the following languages both in terms of regular expressions and in terms of deterministic finite automata: (a) {w ∈ {0,1}* | w contains 101 as a substring}, (b) {w ∈ {0,1}* | w does not contain 101 as a substring}.
Solution: (a) A regular expression for this language is easy to construct: (0|1)*101(0|1)*. A nondeterministic finite automaton is also easy to come up with: [NFA diagram: start state q0 with a 0,1 self-loop, transitions q0 -1-> q1, q1 -0-> q2, q2 -1-> q3, a 0,1 self-loop on the accepting state q3]. Let us then determinise this using the subset construction:
              0           1
→ {q0}        {q0}        {q0,q1}
  {q0,q1}     {q0,q2}     {q0,q1}
  {q0,q2}     {q0}        {q0,q1,q3}
  {…,q3}      {…,q3}      {…,q3}
The last row of the table has been simplified based on the observation that from a set containing the accepting state q3, one always moves again to some set containing q3, and so all such states are equivalent. Thus, we obtain the following deterministic automaton: [DFA diagram: states {q0}, {q0,q1}, {q0,q2}, {…,q3} with the transitions of the table above; start state {q0}; accepting state {…,q3}].
(b) Observe that the language here is the complement of the language in part (a), and the DFA provided in part (a) is complete, i.e. all possible transitions are explicitly listed. Therefore, complementing the DFA from part (a) yields a DFA for this language: [DFA diagram: the same automaton with accepting and non-accepting states exchanged]. All accepting states are inaccessible from state q3, thus we may ignore it. -
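A quick machine check of the worked example above (assuming the 4-state DFA with the merged accepting state): part (b)'s automaton is obtained simply by swapping accepting and non-accepting states of the complete DFA from part (a).

```python
# transition function of the DFA from part (a); "q3" stands for {…, q3}
delta = {("q0", "0"): "q0", ("q0", "1"): "q01",
         ("q01", "0"): "q02", ("q01", "1"): "q01",
         ("q02", "0"): "q0", ("q02", "1"): "q3",
         ("q3", "0"): "q3", ("q3", "1"): "q3"}

def run(w, finals):
    q = "q0"
    for c in w:
        q = delta[(q, c)]
    return q in finals

contains_101 = lambda w: run(w, {"q3"})                  # part (a)
no_101 = lambda w: run(w, {"q0", "q01", "q02"})          # part (b): complement

for w in ["0101", "1001", "111", ""]:
    print(repr(w), contains_101(w), no_101(w))
# the two answers are opposites on every string, as expected
```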
Automata Theory and Formal Languages
Alberto Pettorossi Automata Theory and Formal Languages Third Edition ARACNE Contents Preface 7 Chapter 1. Formal Grammars and Languages 9 1.1. Free Monoids 9 1.2. Formal Grammars 10 1.3. The Chomsky Hierarchy 13 1.4. Chomsky Normal Form and Greibach Normal Form 19 1.5. Epsilon Productions 20 1.6. Derivations in Context-Free Grammars 24 1.7. Substitutions and Homomorphisms 27 Chapter 2. Finite Automata and Regular Grammars 29 2.1. Deterministic and Nondeterministic Finite Automata 29 2.2. Nondeterministic Finite Automata and S-extended Type 3 Grammars 33 2.3. Finite Automata and Transition Graphs 35 2.4. Left Linear and Right Linear Regular Grammars 39 2.5. Finite Automata and Regular Expressions 44 2.6. Arden Rule 56 2.7. Equations Between Regular Expressions 57 2.8. Minimization of Finite Automata 59 2.9. Pumping Lemma for Regular Languages 72 2.10. A Parser for Regular Languages 74 2.10.1. A Java Program for Parsing Regular Languages 82 2.11. Generalizations of Finite Automata 90 2.11.1. Moore Machines 91 2.11.2. Mealy Machines 91 2.11.3. Generalized Sequential Machines 92 2.12. Closure Properties of Regular Languages 94 2.13. Decidability Properties of Regular Languages 96 Chapter 3. Pushdown Automata and Context-Free Grammars 99 3.1. Pushdown Automata and Context-Free Languages 99 3.2. From PDA’s to Context-Free Grammars and Back: Some Examples 111 3.3. Deterministic PDA’s and Deterministic Context-Free Languages 117 3.4. Deterministic PDA’s and Grammars in Greibach Normal Form 121 3.5. -
Context-Sensitive Grammars and Linear-Bounded Automata
I. J. Computer Network and Information Security, 2016, 1, 61-66 Published Online January 2016 in MECS (http://www.mecs-press.org/) DOI: 10.5815/ijcnis.2016.01.08 Context-Sensitive Grammars and Linear-Bounded Automata Prem Nath The Patent Office, CP-2, Sector-5, Salt Lake, Kolkata-700091, India Email: [email protected] Abstract—Linear-bounded automata (LBA) accept context-sensitive languages (CSLs) and CSLs are generated by context-sensitive grammars (CSGs). So, for every CSG/CSL there is an LBA. A CSG is converted into a normal form like Kuroda normal form (KNF) and then the corresponding LBA is designed. There is no algorithm or theorem for designing a linear-bounded automaton (LBA) for a context-sensitive grammar without converting the grammar into some kind of normal form like Kuroda normal form (KNF). I have proposed an algorithm for this purpose which does not require any modification or normalization of a CSG. The head moves left or right on the tape utilizing a finite number of states. A finite contiguous portion of the tape whose length is a linear function of the length of the initial input can be accessed by the read/write head. This limitation makes LBA a more accurate model of computers that actually exist than a Turing machine in some respects. LBA are designed to use limited storage (limited tape). So, as a safety feature, we shall employ end markers ($ on the left and on the right) on the tape and the head never goes past these markers. This will ensure that the storage bound is maintained and helps to keep LBA from leaving their tape. -
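The end-marker discipline described above is easy to enforce in a simulator: the tape is the input enclosed between markers, and any move that would step past a marker causes rejection. The sketch below (an illustration, not the paper's construction; the marker symbols and the transition table are my assumptions) runs a standard machine for {a^n b^n} under that discipline.

```python
# state, symbol -> (written symbol, head move, next state); move is -1 or +1
DELTA = {
    ("q0", "a"): ("X", +1, "q1"), ("q0", "Y"): ("Y", +1, "q3"),
    ("q0", "#"): ("#", 0, "accept"),
    ("q1", "a"): ("a", +1, "q1"), ("q1", "Y"): ("Y", +1, "q1"),
    ("q1", "b"): ("Y", -1, "q2"),
    ("q2", "a"): ("a", -1, "q2"), ("q2", "Y"): ("Y", -1, "q2"),
    ("q2", "X"): ("X", +1, "q0"),
    ("q3", "Y"): ("Y", +1, "q3"), ("q3", "#"): ("#", 0, "accept"),
}

def lba_accepts(w: str) -> bool:
    tape, head, state = list("$" + w + "#"), 1, "q0"   # '$'/'#' are end markers
    while state != "accept":
        key = (state, tape[head])
        if key not in DELTA:
            return False                 # no applicable move: reject
        sym, move, state = DELTA[key]
        tape[head] = sym
        head += move
        if not 0 <= head < len(tape):    # the head may never pass the markers
            return False
    return True

print([w for w in ["", "ab", "aabb", "aab", "abb", "ba"] if lba_accepts(w)])
# ['', 'ab', 'aabb']
```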
Recursively Enumerable
Chapter 11 A HIERARCHY OF FORMAL LANGUAGES AND AUTOMATA Learning Objectives At the conclusion of the chapter, the student will be able to: • Explain the difference between recursive and recursively enumerable languages • Describe the type of productions in an unrestricted grammar • Identify the types of languages generated by unrestricted grammars • Describe the type of productions in a context sensitive grammar • Give a sequence of derivations to generate a string using the productions in a context sensitive grammar • Identify the types of languages generated by context-sensitive grammars • Construct a context-sensitive grammar to generate a particular language • Describe the structure and components of the Chomsky hierarchy Recursive and Recursively Enumerable Languages • A language L is recursively enumerable if there exists a Turing machine that accepts it (as we have previously stated, rejected strings cause the machine to either not halt or halt in a nonfinal state) • A language L is recursive if there exists a Turing machine that accepts it and is guaranteed to halt on every valid input string • In other words, a language is recursive if and only if there exists a membership algorithm for it Languages That Are Not Recursively Enumerable • Theorem 11.1 states that, for any nonempty alphabet, there exist languages not recursively enumerable • One proof involves a technique called diagonalization, which can be used to show that, in a sense, there are fewer Turing Machines than there are languages • More explicitly, Theorem 11.3 describes
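The recursive versus recursively-enumerable distinction in the learning objectives can be illustrated with a small example of my own (not from the chapter): for a language given only by an enumeration, the obvious recognizer halts exactly on members, whereas a decider must halt on every input.

```python
from itertools import count

def enumerate_L():
    """Enumerate L = { 'a' * (n*n) : n >= 0 }."""
    for n in count():
        yield "a" * (n * n)

def semi_decide(w):
    """Halts (returning True) iff w is in L: a semi-decision procedure."""
    for x in enumerate_L():
        if x == w:
            return True      # never returns on strings outside L

def decide(w):
    """Halts on every input, so it witnesses that this L is in fact recursive."""
    n = len(w)
    return set(w) <= {"a"} and int(n ** 0.5) ** 2 == n

print(decide("aaaa"), decide("aaa"))   # True False
print(semi_decide("aaaa"))             # True; semi_decide("aaa") would loop forever
```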