Polynomial Space


Klaus Sutner
Carnegie Mellon University
2019/03/26

1 Context-Sensitive Languages
2 Linear Bounded Automata
3 Polynomial Space


1 Context-Sensitive Languages


Towards Polynomial Space

Logarithmic space and linear (deterministic) space clearly make algorithmic sense. But how about nondeterministic linear space?

Generalizing context-free grammars naturally leads to context-sensitive grammars, and the obvious parsing "algorithm" for a context-sensitive language is in NSPACE(n) (and, by Savitch, in SPACE(n²)).

Somewhat surprisingly, it is just a small step from there to all of PSPACE.


Context-Sensitive Grammars

Definition (CSG)
A context-sensitive grammar (CSG) is a grammar where all productions are of the form

    αAβ → αγβ    where γ ≠ ε

For technical reasons, some authors also allow S → ε, in which case S may not appear on the righthand side of any production.

A language is context-sensitive if it can be generated by a context-sensitive grammar.

Note the constraint that the replacement string γ ≠ ε; as a consequence, α ⇒ β implies |α| ≤ |β|.


Brute-Force Recognition

Lemma
Every context-sensitive language is decidable.

Proof.
Suppose w ∈ Σ* and n = |w|. In any potential derivation (αᵢ)ᵢ<N we have |αᵢ| ≤ n.

So consider the derivation graph D (recall the pictures from last time for CFLs):

    vertices: all strings over Γ of length at most n,
    edges: α ⇒₁ β (one derivation step).

Then w is in L iff w is reachable from vertex S in D. □


Needless to say . . .

Lemma
Not all decidable languages are context-sensitive.

Proof.
Here is a cute diagonalization argument. Let (xᵢ)ᵢ be an effective enumeration of Σ* and (Gᵢ)ᵢ an effective enumeration of all CSGs over Σ (say, both in length-lex order). Set

    L = { xᵢ | xᵢ ∉ L(Gᵢ) }

By the lemma, L is decidable. But L cannot be context-sensitive by the usual diagonal mumbo-jumbo. □


Example: Counting

We know that the language

    L = { aⁿbⁿcⁿ | n ≥ 1 }

is not context-free. Here is a context-sensitive grammar G for L: let V = {S, B} and set

    S  → aSBc | abc
    cB → Bc
    bB → bb

A typical derivation looks like

    S ⇒ aⁿ⁺¹bc(Bc)ⁿ ⇒ aⁿ⁺¹bBⁿcⁿ⁺¹ ⇒ aⁿ⁺¹bⁿ⁺¹cⁿ⁺¹

It follows by induction that L ⊆ L(G).
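The two ingredients above — the length bound on sentential forms and reachability in the derivation graph — already yield a (hopelessly inefficient, but correct) recognizer. The following Python sketch searches the derivation graph breadth-first for the grammar of the counting example; names such as PRODUCTIONS and derives are illustrative, not from the lecture.

```python
from collections import deque

# Brute-force CSL recognition via the derivation graph: BFS over all
# sentential forms of length at most |w|, following one-step derivations,
# and check whether w is reachable from S. Grammar: the a^n b^n c^n example.

PRODUCTIONS = [
    ("S", "aSBc"),
    ("S", "abc"),
    ("cB", "Bc"),
    ("bB", "bb"),
]

def derives(w: str) -> bool:
    """Return True iff w is reachable from S in the derivation graph."""
    n = len(w)
    seen = {"S"}
    queue = deque(["S"])
    while queue:
        form = queue.popleft()
        for lhs, rhs in PRODUCTIONS:
            pos = form.find(lhs)
            while pos != -1:                      # every occurrence of lhs
                succ = form[:pos] + rhs + form[pos + len(lhs):]
                if succ == w:
                    return True
                # Monotonicity: a form longer than w can never shrink back,
                # so it may safely be pruned. This keeps the graph finite.
                if len(succ) <= n and succ not in seen:
                    seen.add(succ)
                    queue.append(succ)
                pos = form.find(lhs, pos + 1)
    return False

if __name__ == "__main__":
    assert derives("aabbcc") and derives("aaabbbccc")
    assert not derives("aabbc") and not derives("abab")
    print("all checks pass")
```

The pruning step is exactly where monotonicity enters: without the bound |succ| ≤ |w|, the search need not terminate, which is why the same idea fails for unrestricted grammars.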
Not so fast . . .

Alas, we also need to show that L(G) ⊆ L.

This is a bit harder: we need to show that the productions cannot be abused in some unintended way to generate other strings. Recall that there is no restriction on the order in which productions are applied; they just have to match. E.g., the following is allowed:

    S ⇒ aaSBcBc ⇒ aaSBBcc

Exercise
Figure out the details.


Example: More Counting

It is also known that the language

    L = { x ∈ {a,b,c}* | #a(x) = #b(x) = #c(x) }

(where #a(x) counts the occurrences of a in x) is not context-free. But, again, it is easily context-sensitive: let V = {S, S₀, A, B, C} and set

    S  → S₀ | ε
    S₀ → S₀ABC | ABC
    XY → YX    for all X, Y ∈ {A, B, C}
    A  → a
    B  → b
    C  → c

Note that most productions are actually context-free. The critical part is the commutation productions for {A, B, C}.


Closure Properties

Theorem
Context-sensitive languages are closed under union, concatenation, Kleene star and reversal. They are also closed under ε-free homomorphisms.

Proof.
Straightforward by manipulating the grammar. □

Note that arbitrary homomorphisms do not work here: they erase too much information and can force too large a search.


Two Innocent Questions

Are CSLs closed under intersection? Are CSLs closed under complement?

The answer is Yes in both cases (so this is quite different from context-free languages).

The proof for intersection can be based on a machine model and is much easier than the proof for complement (which requires a special and very surprising counting technique; see next lecture).


Normal Form

Theorem (Kuroda)
Every context-sensitive grammar can be written with productions of the form

    A → BC    AB → CD    A → a

The proof is very similar to the argument for Chomsky normal form for CFGs (which uses only productions A → BC and A → a).

Note that the recognition algorithm becomes particularly simple when the CSG is given in Kuroda normal form: we first get rid of all terminals and then operate only on pairs of consecutive variables.


Monotonicity

Derivations in CSGs are length-non-decreasing. Correspondingly, define a grammar to be monotonic if all productions are of the form

    π : α → β    where α ∈ Γ*VΓ*, β ∈ Γ*, |α| ≤ |β|

As we have seen already, a monotonic grammar can only generate a decidable language.

In fact, monotonic grammars look rather similar to context-sensitive ones, except that we are now allowed to manipulate the context (but see the tiny example below). In particular, every CSG is monotonic.


Monotonic is Context-Sensitive

Theorem
A language is context-sensitive iff it is generated by a monotonic grammar.

Proof.
Using the LBAs introduced below, it suffices to show that every monotonic language can be accepted by an LBA. Initially, some terminal string a₁a₂ . . . aₙ is written on the tape. The LBA repeats the following loop:

    Search handle: the head moves to the right a nondeterministically chosen number of places, say, to aᵢ.
    Check righthand side: the machine verifies that aᵢ . . . aⱼ = β for some production α → β.
    Replace by lefthand side: the block aᵢ . . . aⱼ is replaced by α, possibly leaving some blanks.
    Collect: remove possible blanks by shifting the rest of the tape left.

The loop repeats until the tape is reduced to S, and we accept. If any of the guesses goes wrong, we reject. □


Tiny Example

One can also directly convert monotonic productions to context-sensitive ones. As a simple example, consider a commutativity rule

    AB → BA

Here are equivalent context-sensitive rules:

    AB → AX
    AX → BX
    BX → BA

Here X is a new variable; in each rule exactly one symbol is replaced within a preserved context (B in context A–, then A in context –X, then X in context B–). Note how the left/right context is duly preserved.
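The conversion trick mechanizes directly. Here is a small Python sketch — make_context_sensitive and the fresh variables X0, X1, . . . are illustrative names, not from the lecture — that turns any length-preserving two-symbol rule AB → CD (the only non-context-free shape left after Kuroda normal form) into three context-sensitive rules following the pattern above.

```python
import itertools

# Convert a monotonic two-symbol rule A B -> C D into three context-
# sensitive rules: each step rewrites exactly one symbol inside a
# preserved context, mediated by a fresh variable.

_fresh = (f"X{i}" for i in itertools.count())

def make_context_sensitive(a, b, c, d):
    """Return CS rules equivalent to the monotonic rule a b -> c d."""
    x = next(_fresh)                 # one fresh variable per converted rule
    return [
        ((a, b), (a, x)),            # replace b by x  in left context a
        ((a, x), (c, x)),            # replace a by c  in right context x
        ((c, x), (c, d)),            # replace x by d  in left context c
    ]

if __name__ == "__main__":
    # The commutativity rule AB -> BA from the tiny example above.
    for lhs, rhs in make_context_sensitive("A", "B", "B", "A"):
        print(" ".join(lhs), "->", " ".join(rhs))
```

Run on AB → BA, this reproduces the three rules of the tiny example (with X0 playing the role of X); using a fresh variable per converted rule ensures the rules coming from different productions cannot interfere.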
And the Difference?

CFL recognition is cubic time, via a dynamic programming algorithm that basically enumerates all possible parse trees. Why would a similar approach not also work for CSLs? As Kuroda normal form shows, on the face of it the productions seem only mildly more complicated.

Recall how one can associate undecidability with CFGs: we are interested in the language C(M) of all accepting computations of a Turing machine M,

    # C₀ # C₁ # . . . # Cₙ #

Unfortunately, this language fails to be context-free: the copy language { ww | w ∈ Σ* } is not context-free, and this one is worse.


Tricks

Consider the complement Σ* − C(M), and a representation where we use alternating mirror images:

    # C₀ # C₁ᵒᵖ # C₂ # C₃ᵒᵖ . . . # Cₙ #

This works essentially because palindromes are context-free.

Claim
The resulting language of non-computations is context-free.


And Context-Sensitive?

For context-sensitive languages there is no problem: we can directly generate C(M). This requires a bit of work using a grammar directly, but is fairly easy to see from the machine model (see below): an LBA can easily perform the necessary checks.

As a consequence, even some of the most basic questions about CSGs are undecidable. This is really quite surprising.

Theorem
It is undecidable whether a CSG generates the empty language.


Decidability Overview

                 x ∈ L    L = ∅    L = Σ*    L = K    L ∩ K = ∅
    regular        Y        Y        Y         Y         Y
    DCFL           Y        Y        Y         Y         N
    CFL            Y        Y        N         N         N
    CSL            Y        N        N         N         N
    decidable      Y        N        N         N         N
    semi-dec.      N        N        N         N         N

Standard decision problems for various language classes. Needless to say, for the decidable ones we would like a more precise complexity classification (in which case it may matter how precisely the instance is given).


2 Linear Bounded Automata


A Machine Model?

In order to find a parser for CSLs, it seems natural to look for an associated machine model:

    semidecidable — Turing machines
    context-free  — pushdown automata
    regular       — finite state machines

We need a machine model that is stronger than pushdown automata (FSM plus stack), but significantly weaker than full Turing machines. Note that two stacks won't work; we need a different approach.


Backwards Derivation

Suppose L is context-sensitive via G. The idea is to run a derivation of G backwards, starting at a string x of terminals.

To this end, nondeterministically guess a handle in the current string: a place where there is a substring of the form β such that α → β is a production in G. Erase β and replace it by α. Rinse and repeat.

The original string x is in L iff we can ultimately reach S this way.

Of course, this is yet another path-existence problem in a suitable digraph.


Linear Bounded Automata

Definition
A linear bounded automaton (LBA) is a type of one-tape, nondeterministic Turing machine acceptor where the input is written between special end-markers and the computation can never leave the space between these markers (nor overwrite them).

Thus the initial configuration looks like

    # q₀ x₁ x₂ . . . xₙ #

and the tape head can never leave this part of the tape.

It may seem that there is not enough space to perform any interesting computations on an LBA, but note that we can use a sufficiently large tape alphabet to "compress" the input to a fraction of its original size and make room.


The Myhill-Landweber-Kuroda Theorem

The development happened in stages:

    Myhill 1960 considered deterministic LBAs.
    Landweber 1963 showed that they produce only context-sensitive languages.
    Kuroda 1964 generalized to nondeterministic LBAs and showed that this produces precisely the context-sensitive languages.

Theorem
A language is accepted by a (nondeterministic) LBA iff it is context-sensitive.


Proof

One direction is by definition of an LBA. . . .


Digression: Physical Realizability II

It was recognized already in the 1950s that Turing machines are, in many ways, too general to describe anything resembling the type of computation that was . . .
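To close the circle, here is the backwards-derivation recognizer described above as executable Python, with the LBA's nondeterministic guessing replaced by exhaustive breadth-first search — a sketch, not the linear-space algorithm itself, and accepts is an illustrative name. The grammar is again the aⁿbⁿcⁿ example.

```python
from collections import deque

# Backwards derivation: starting from the input x, repeatedly pick a
# handle (an occurrence of some righthand side beta), replace it by the
# corresponding lefthand side alpha, and accept iff S is reachable.
# Monotonicity bounds every intermediate string by |x|, so the search
# space is finite; a real LBA explores it in linear space.

PRODUCTIONS = [("S", "aSBc"), ("S", "abc"), ("cB", "Bc"), ("bB", "bb")]

def accepts(x: str, start: str = "S") -> bool:
    """Return True iff x can be reduced to the start symbol."""
    seen = {x}
    queue = deque([x])
    while queue:
        form = queue.popleft()
        if form == start:
            return True
        for alpha, beta in PRODUCTIONS:
            pos = form.find(beta)
            while pos != -1:
                # Undo one derivation step: handle beta becomes alpha.
                pred = form[:pos] + alpha + form[pos + len(beta):]
                if pred not in seen:
                    seen.add(pred)
                    queue.append(pred)
                pos = form.find(beta, pos + 1)
    return False

if __name__ == "__main__":
    assert accepts("aaabbbccc")
    assert not accepts("aabbbcc")
    print("ok")
```

This makes the path-existence remark of the Backwards Derivation slide literal: the digraph has the strings of length at most |x| as vertices and reversed productions as edges, and acceptance is reachability of S from x.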