Part III: Context-Free Languages and Pushdown Automata

In this section, we move out one level and explore the class of context-free languages. This class is important. For most programming languages, the set of syntactically legal statements is (except possibly for type checking) a context-free language. The set of well-formed Boolean queries is a context-free language. A great deal of the syntax of English can be described in the context-free framework that we are about to discuss.

To describe these languages, we need more power than the regular language definition allows. For example, to describe both programming language statements and Boolean queries requires the ability to specify that parentheses be balanced. Yet we showed in Section 8.4 that it is not possible to define a regular language that contains exactly the set of strings of balanced parentheses.

We will begin our discussion of the context-free languages by defining a grammatical formalism that can be used to describe every language in the class (which, by the way, does include the language of balanced parentheses). Then, in Chapter 12, we will return to the question of defining machines that can accept strings in the language. At that point, we'll see that the pushdown automaton, an NDFSM augmented with a single stack, can accept exactly the class of context-free languages that we are about to describe. In Chapter 13, we will see that the formalisms that we have presented stop short of the full power that is provided by a more general computational model. So we'll see that there are straightforward languages that are not context-free. But, because of the restrictions that the context-free formalism imposes, it will turn out to be possible to define algorithms that perform at least the most basic operations on context-free languages, including deciding whether a string is in a language. We'll summarize those algorithms in Chapters 14 and 15.

The theory that we are about to present for the context-free languages is not as straightforward and elegant as the one that we have just described for the regular languages. We'll see, for example, that there doesn't exist an algorithm that compares two pushdown automata to see if they are equivalent. Given an arbitrary context-free grammar G, there doesn't exist a linear-time algorithm that decides whether a string w is an element of L(G). But there does exist such an algorithm if we restrict our attention to a useful subset of the context-free languages. The context-free languages are not closed under many common operations like intersection and complement. On the other hand, because the class of context-free languages includes most programming languages, query languages, and a host of other languages that we use daily to communicate with computers, it is worth taking the time to work through the theory that is presented here, even though it is less clear than the one we were able to build in Part II.

11 Context-Free Grammars

We saw, in our discussion of the regular languages in Part II, that there are substantial advantages to using descriptive frameworks (in that case, FSMs, regular expressions, and regular grammars) that offer less power and flexibility than a general-purpose programming language provides.
Because the frameworks were restrictive, we were able to describe a large class of useful operations that could be performed on the languages that we defined. We will begin our discussion of the context-free languages with another restricted formalism, the context-free grammar. But before we define it, we will pause and answer the more general question, "What is a grammar?"

11.1 Introduction to Rewrite Systems and Grammars

We'll begin with a very general computational model: Define a rewrite system (also called a production system or a rule-based system) to be a list of rules and an algorithm for applying them. Each rule has a left-hand side and a right-hand side. For example, the following could be rewrite-system rules:

S → aSb
aS → ε
aSb → bSabSa

In the discussion that follows, we will focus on rewrite systems that operate on strings. But the core ideas that we will present can be used to define rewrite systems that operate on richer data structures. Of course, such data structures can be represented as strings, but the power of many practical rule-based systems comes from their ability to manipulate other structures directly.

Expert systems, C 774, are programs that perform tasks in domains like engineering, medicine, and business that require expertise when done by people. Many kinds of expertise can naturally be modeled as sets of condition/action rules. So many expert systems are built using tools that support rule-based programming. Rule-based systems are also used to model business practices, C 774, and as the basis for reasoning about the behavior of nonplayer characters in computer games, C 791.

When a rewrite system R is invoked on some initial string w, it operates as follows:

simple-rewrite(R: rewrite system, w: initial string) =
  1. Set working-string to w.
  2. Until told by R to halt do:
    2.1. Match the left-hand side of some rule against some part of working-string.
    2.2. Replace the matched part of working-string with the right-hand side of the rule that was matched.
  3. Return working-string.

If simple-rewrite(R, w) can return some string s, then we'll say that R can derive s from w, or that there exists a derivation in R of s from w.

Rewrite systems can model natural growth processes, as occur, for example, in plants. In addition, evolutionary algorithms can be applied to rule sets. Thus rewrite systems can model evolutionary processes, C 809.

We can define a particular rewrite-system formalism by specifying the form of the rules that are allowed and the algorithm by which they will be applied. In most of the rewrite-system formalisms that we will consider, a rule is simply a pair of strings. If the string on the left-hand side matches, it is replaced by the string on the right-hand side. But more flexible forms are also possible. For example, variables may be allowed. Let x be a variable. Then consider the rule:

axa → aa

This rule will squeeze out whatever comes between a pair of a's. Another useful form allows regular expressions as left-hand sides. If we do that, we can write rules like the following, which squeezes out b's between a's:

ab*ab*a → aaa

The extended form of regular expressions that is supported in programming languages like Perl is often used to write substitution rules. C 792.
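As a concrete illustration of the procedure above, here is a minimal Python sketch of simple-rewrite. The control strategy shown, trying the rules in the order in which they are written and halting when none matches, is only one of the possibilities discussed below, and the example rule sets are illustrative assumptions rather than anything taken from the text.

import re

# A minimal sketch of simple-rewrite, assuming rules are (lhs, rhs) string pairs
# and that the system halts as soon as no left-hand side matches working_string.
def simple_rewrite(rules, w):
    working_string = w
    while True:
        for lhs, rhs in rules:
            pos = working_string.find(lhs)        # step 2.1: match a left-hand side
            if pos != -1:
                # step 2.2: replace the matched part with the right-hand side
                working_string = (working_string[:pos] + rhs
                                  + working_string[pos + len(lhs):])
                break
        else:
            return working_string                 # no rule matched: halt and return

# An illustrative rewrite system: repeatedly erase "()". A string of parentheses
# is balanced exactly when it rewrites to the empty string.
print(simple_rewrite([("()", "")], "(()())"))     # prints "" (the empty string)

# A Perl-style substitution rule with a regular-expression left-hand side,
# in the spirit of ab*ab*a -> aaa:
print(re.sub(r"ab*ab*a", "aaa", "abbabba"))       # prints "aaa"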
In addition to describing the form of its rules, a rewrite-system formalism must describe how its rules will be applied. In particular, a rewrite-system formalism will define the conditions under which simple-rewrite will halt and the method by which it will choose a match in step 2.1. For example, one rewrite-system formalism might specify that any rule that matches may be chosen. A different formalism might specify that the rules have to be tried in the order in which they are written, with the first one that matches being the one that is chosen next.

Rewrite systems can be used to define functions. In this case, we write rules that operate on an input string to produce the required output string. And rewrite systems can be used to define languages. In this case, we define a unique start symbol. The rules are then applied, and we will say that the language L that is generated by the system is exactly the set of strings, over L's alphabet, that can be derived by simple-rewrite from the start symbol.

A rewrite-system formalism can be viewed as a programming language, and some such languages turn out to be useful. For example, Prolog, C 762, supports a style of programming called logic programming. A logic program is a set of rules that correspond to logical statements of the form A if B. The interpreter for a logic program reasons backwards from a goal (such as A), chaining rules together until each right-hand side has been reduced to a set of facts (axioms) that are already known to be true.

The study of rewrite systems has played an important role in the development of the theory of computability. We'll see in Part V that there exist rewrite-system formalisms that have the same computational power as the Turing machine, both with respect to computing functions and with respect to defining languages. In the rest of our discussion in this chapter, however, we will focus just on their use to define languages.

A rewrite system that is used to define a language is called a grammar. If G is a grammar, let L(G) be the language that G generates. Like every rewrite system, every grammar contains a list (almost always treated as a set, i.e., as an unordered list) of rules. Also, like every rewrite system, every grammar works with an alphabet, which we can call V. In the case of grammars, we will divide V into two subsets:

• a terminal alphabet, generally called Σ, which contains the symbols that make up the strings in L(G), and
• a nonterminal alphabet, the elements of which will function as working symbols that will be used while the grammar is operating.
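To make the terminal/nonterminal division concrete, here is a minimal sketch (not from the text) of a grammar with terminal alphabet {a, b}, nonterminal alphabet {S}, start symbol S, and rules S → aSb and S → ε, which generates the language {aⁿbⁿ : n ≥ 0}. The derivation strategy below, applying S → aSb a fixed number of times and then erasing S, is just one convenient way to drive the derivation.

# A sketch of a grammar for {a^n b^n : n >= 0}:
# terminal alphabet {a, b}, nonterminal alphabet {S}, rules S -> aSb and S -> epsilon.
def derive_anbn(n):
    working = "S"                                  # start with the start symbol
    for _ in range(n):
        working = working.replace("S", "aSb", 1)   # apply the rule S -> aSb
    return working.replace("S", "", 1)             # apply S -> epsilon

print(derive_anbn(3))   # prints "aaabbb"

Every intermediate working string here contains the working symbol S; only the final string, drawn entirely from the terminal alphabet, is a string in the language.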