Context-Free Languages & Grammars (CFLs & CFGs)


Reading: Chapter 5

Not all languages are regular
- So what happens to the languages which are not regular?
- Can we still come up with a language recognizer, i.e., something that will accept (or reject) strings that belong (or do not belong) to the language?

Context-Free Languages
- A language class larger than the class of regular languages.
- Supports a natural, recursive notation called a "context-free grammar".
- Applications: parse trees, compilers, XML.
- (Figure: the regular languages (FA/RE) sit inside the context-free languages (PDA/CFG).)

An Example
- A palindrome is a word that reads identical from both ends, e.g., madam, redivider, malayalam, 010010010.
- Let L = { w | w is a binary palindrome }. Is L regular? No.
- Proof (with N the pumping-lemma constant): let w = 0^N 1 0^N. By the pumping lemma, w can be rewritten as xyz such that xy^k z is also in L (for any k ≥ 0). But |xy| ≤ N and y ≠ ε, so y = 0^+ (one or more 0s from the first block). Then xy^k z is NOT in L for k = 0. Contradiction.

But the language of palindromes…
…is a CFL, because it supports recursive substitution (in the form of a CFG). We can construct a "grammar" like this:
1. A ==> ε
2. A ==> 0
3. A ==> 1
4. A ==> 0A0
5. A ==> 1A1
Same as: A ==> 0A0 | 1A1 | 0 | 1 | ε
Here A is a variable (non-terminal); 0, 1 and ε are terminals; rules 1-5 are the productions. How does this grammar work?

How does the CFG for palindromes work?
An input string belongs to the language (i.e., is accepted) iff it can be generated by the CFG.
- Example: w = 01110, G: A ==> 0A0 | 1A1 | 0 | 1 | ε.
- G can generate w as follows: A ==> 0A0 ==> 01A10 ==> 01110.
Generating a string from a grammar:
1. Pick a sequence of productions that allows us to generate the string.
2. At every step, substitute one variable with the body of one of its productions.

Context-Free Grammar: Definition
A context-free grammar is G = (V, T, P, S), where:
- V: set of variables or non-terminals
- T: set of terminals (= alphabet ∪ {ε})
- P: set of productions, each of the form A ==> α1 | α2 | …, where each αi is an arbitrary string of variables and terminals
- S: the start variable
CFG for the language of binary palindromes: G = ({A}, {0,1}, P, A), with P: A ==> 0A0 | 1A1 | 0 | 1 | ε

More examples
- Parenthesis matching in code
- Syntax checking
- In scenarios where there is a general need for: matching a symbol with another symbol, or matching a count of one symbol with the count of another symbol, or recursively substituting one symbol with a string of other symbols

Example #2
- Language of balanced parentheses, e.g., ()(((())))((()))….
- CFG? G: S ==> (S) | SS | ε
How would you "interpret" the string "(((()))()())" using this grammar? One derivation (several substitution steps are combined between some arrows):
S ==> (S) ==> (SS) ==> ((S)SS) ==> (((S))(S)(S)) ==> (((()))()())

Example #3
- A grammar for L = { 0^m 1^n | m ≥ n }.
- CFG? G: S ==> 0S1 | A, A ==> 0A | ε
How would you interpret the string "00000111" using this grammar? (A small code sketch mirroring this grammar appears after the examples below.)
S ==> 0S1 ==> 00S11 ==> 000S111 ==> 000A111 ==> 0000A111 ==> 00000A111 ==> 00000111

Example #4
A program containing if-then(-else) statements:
  if Condition then Statement else Statement
(or)
  if Condition then Statement
CFG?

More examples
- L1 = { 0^n | n ≥ 0 }
- L2 = { 0^n | n ≥ 1 }
- L3 = { 0^i 1^j 2^k | i = j or j = k, where i, j, k ≥ 0 }
- L4 = { 0^i 1^j 2^k | i = j or i = k, where i, j, k ≥ 1 }
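To make the link between productions and recognition concrete, here is a minimal Python sketch (not part of the original slides) that tests membership in L = { 0^m 1^n | m ≥ n } by mirroring the two productions of Example #3; the helper names S and A simply stand in for the grammar's variables.

```python
def in_L(w: str) -> bool:
    """Membership test for L = {0^m 1^n | m >= n}, mirroring the grammar
    of Example #3:  S -> 0S1 | A,  A -> 0A | e."""

    def S(u: str) -> bool:
        # S -> 0S1: peel a matching 0...1 pair off the ends, else fall back to A
        if u.startswith("0") and u.endswith("1") and S(u[1:-1]):
            return True
        return A(u)

    def A(u: str) -> bool:
        # A -> 0A | e: A generates exactly the strings in 0*
        return all(c == "0" for c in u)

    return S(w)

print(in_L("00000111"), in_L("0111"))  # True False
```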
Applications of CFLs & CFGs
- Compilers use parsers for syntactic checking.
- Parsers can be expressed as CFGs:
1. Balancing parentheses: B ==> BB | (B) | Statement, Statement ==> …
2. If-then-else: S ==> SS | if Condition then Statement else Statement | if Condition then Statement | Statement, Condition ==> …, Statement ==> …
3. C parenthesis matching { … }
4. Pascal begin-end matching
5. YACC (Yet Another Compiler-Compiler)

More applications
- Markup languages: nested tag matching
- HTML: <html> … <p> … <a href=…> … </a> </p> … </html>
- XML: <PC> … <MODEL> … </MODEL> … <RAM> … </RAM> … </PC>

Tag-Markup Languages
Roll ==> <ROLL> Class Students </ROLL>
Class ==> <CLASS> Text </CLASS>
Text ==> Char Text | Char
Char ==> a | b | … | z | A | B | … | Z
Students ==> Student Students | ε
Student ==> <STUD> Text </STUD>
Here, the left-hand side of each production is a non-terminal (e.g., "Roll", "Class", etc.). The symbols on the right-hand side for which no productions (i.e., substitutions) are defined are terminals (e.g., 'a', 'b', '<', '>', "ROLL", etc.).

Structure of a production
A production consists of a head, the derivation symbol ==>, and a body:
A ==> α1 | α2 | … | αk
The above is the same as:
1. A ==> α1
2. A ==> α2
…
k. A ==> αk

CFG conventions
- Terminal symbols: a, b, c, …
- Non-terminal symbols: A, B, C, …
- Terminal or non-terminal symbols: X, Y, Z
- Terminal strings: w, x, y, z
- Arbitrary strings of terminals and non-terminals: α, β, γ, …

Syntactic Expressions in Programming Languages
result = a*b + score + 10 * distance + c
Here the identifiers (result, a, b, score, distance, c) are variables, while the constants and the operators are terminals. Regular languages have only terminals:
- Regular expression for identifiers: [a-z][a-z0-1]*
- If we allow only the letters a & b and the constants 0 & 1 (for simplification): regular expression = (a+b)(a+b+0+1)*

String membership
How do we tell whether a string belongs to the language defined by a CFG? Two equivalent approaches:
1. Derivation: from head to body
2. Recursive inference: from body to head
Example: G: A ==> 0A0 | 1A1 | 0 | 1 | ε, w = 01110. Is w a palindrome?
A ==> 0A0 ==> 01A10 ==> 01110

Simple Expressions…
We can write a CFG for accepting simple expressions:
- G = (V, T, P, S)
- V = {E, F}
- T = {0, 1, a, b, +, *, (, )}
- S = E
- P: E ==> E+E | E*E | (E) | F
     F ==> aF | bF | 0F | 1F | a | b | 0 | 1

Generalization of derivation
- Derivation goes from head to body.
- A ==> X: A derives X in a single step.
- A ==>*G X: A derives X in multiple steps.
- Transitivity: if A ==>*G B and B ==>*G C, then A ==>*G C.

Context-Free Language
The language of a CFG G = (V, T, P, S), denoted L(G), is the set of terminal strings that have a derivation from the start variable S:
L(G) = { w in T* | S ==>*G w }

Left-most & Right-most Derivation Styles
G: E ==> E+E | E*E | (E) | F
   F ==> aF | bF | 0F | 1F | ε
Derive the string a*(ab+10) from G, i.e., show E ==>*G a*(ab+10).

Left-most derivation (always substitute the left-most variable):
E ==> E * E ==> F * E ==> aF * E ==> a * E ==> a * (E) ==> a * (E + E) ==> a * (F + E) ==> a * (aF + E) ==> a * (abF + E) ==> a * (ab + E) ==> a * (ab + F) ==> a * (ab + 1F) ==> a * (ab + 10F) ==> a * (ab + 10)

Right-most derivation (always substitute the right-most variable):
E ==> E * E ==> E * (E) ==> E * (E + E) ==> E * (E + F) ==> E * (E + 1F) ==> E * (E + 10F) ==> E * (E + 10) ==> E * (F + 10) ==> E * (aF + 10) ==> E * (abF + 10) ==> E * (ab + 10) ==> F * (ab + 10) ==> aF * (ab + 10) ==> a * (ab + 10)
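The left-most discipline is easy to mechanize. The following Python sketch (an illustration, not something from the slides) replays a derivation by always rewriting the left-most variable of the current sentential form; the production bodies passed in are exactly the left-most derivation of a*(ab+10) shown above, with "" standing for ε.

```python
def leftmost_derive(start: str, bodies: list[str], variables: set[str]) -> str:
    """Replay a left-most derivation step by step.

    At each step the left-most variable in the sentential form is replaced
    by the next production body; every grammar symbol is a single character.
    """
    form = start
    print(form)
    for body in bodies:
        i = next(k for k, sym in enumerate(form) if sym in variables)
        form = form[:i] + body + form[i + 1:]
        print(form)
    return form

# Left-most derivation of a*(ab+10) for E -> E+E | E*E | (E) | F,
# F -> aF | bF | 0F | 1F | e  (the grammar of the slide above):
leftmost_derive(
    "E",
    ["E*E", "F", "aF", "", "(E)", "E+E", "F", "aF", "bF", "",
     "F", "1F", "0F", ""],
    variables={"E", "F"},
)  # the last printed form is a*(ab+10)
```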
Leftmost vs. Rightmost Derivations
Q1) For every left-most derivation there is a right-most derivation, and vice versa. True or false? True – we will use parse trees to prove this.
Q2) Does every word generated by a CFG have a left-most and a right-most derivation? Yes – easy to prove (reverse direction).
Q3) Could there be words which have more than one left-most (or right-most) derivation? Yes – depending on the grammar.

How to prove that your CFGs are correct? (Using induction.)

Gpal: CFG & CFL
Gpal: A ==> 0A0 | 1A1 | 0 | 1 | ε
Theorem: A string w in (0+1)* is in L(Gpal) if and only if w is a palindrome.
Proof: use induction – on the string length for the IF part, and on the length of the derivation for the ONLY IF part.

Parse Trees
Each derivation in a CFG can be represented using a parse tree:
- Each internal node is labeled by a variable in V.
- Each leaf is a terminal symbol.
- For a production A ==> X1 X2 … Xk, any internal node labeled A has k children labeled X1, X2, …, Xk from left to right.

Examples
(Figure: parse tree for 0110 under G: A ==> 0A0 | 1A1 | 0 | 1 | ε – root A with children 0, A, 0; the middle A has children 1, A, 1; the innermost A derives ε. Parse tree for a + 1 under G: E ==> E+E | E*E | (E) | F, F ==> aF | bF | 0F | 1F | 0 | 1 | a | b – root E with children E, +, E, where the left E derives F ==> a and the right E derives F ==> 1.)

Parse Trees, Derivations, and Recursive Inferences
(Figure: for a production A ==> X1 … Xi … Xk, the parse tree node A with children X1 … Xk links the different views: left-most and right-most derivations read the tree top-down, while recursive inference reads it bottom-up.)

Interchangeability of different CFG representations
- Parse tree ==> left-most derivation (DFS, left to right)
- Parse tree ==> right-most derivation (DFS, right to left)
- Hence left-most derivation == right-most derivation (both correspond to the same parse tree)
- Derivation ==> recursive inference (reverse the order of productions)
- Recursive inference ==> parse tree (bottom-up traversal of the parse tree)

Connection between CFLs and RLs
What kind of grammars result for regular languages?
A CFG is said to be right-linear if all the productions are of one of the following two forms:
A ==> wB   or   A ==> w
where A and B are variables and w is a string of terminals.
- Theorem 1: Every right-linear CFG generates a regular language.
- Theorem 2: Every regular language has a right-linear grammar.
- Theorem 3: Left-linear CFGs also represent RLs.

Some Examples
(Figure: two finite automata over {0,1} with states A, B, C are given, each with the question "Right-linear CFG?"; conversely, the right-linear CFG A ==> 01B | C, B ==> 11B | 0C | 1A, C ==> 1A | 0 | 1 is given with the question "Finite automaton?".)

Ambiguity in CFGs
A CFG is said to be ambiguous if there exists a string which has more than one left-most derivation.
Example: S ==> AS | ε, A ==> A1 | 0A1 | 01. The input string 00111 can be derived in two ways:
LM derivation #1: S ==> AS ==> 0A1S ==> 0A11S ==> 00111S ==> 00111
LM derivation #2: S ==> AS ==> A1S ==> 0A11S ==> 00111S ==> 00111

Why does ambiguity matter?
E ==> E + E | E * E | (E) | a | b | c | 0 | 1, string = a * b + c
- LM derivation #1: E ==> E + E ==> E * E + E ==>* a * b + c; its parse tree groups the string as (a*b)+c.
- LM derivation #2: E ==> E * E ==> a * E ==> a * E + E ==>* a * b + c; its parse tree groups the string as a*(b+c).
The values are different! The calculated value depends on which of the two parse trees is actually used.
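A short Python sketch (illustrative only; the values a=2, b=3, c=4 are assumed just for the example) makes this concrete: the two parse trees for a*b+c evaluate to different numbers, so an ambiguous grammar leaves the meaning of the string undefined.

```python
# The two parse trees for  a*b+c  under the ambiguous grammar
# E ==> E+E | E*E | (E) | a | b | c | 0 | 1, encoded as nested tuples.
env = {"a": 2, "b": 3, "c": 4}  # assumed values, purely for illustration

def evaluate(tree):
    if isinstance(tree, str):              # leaf: one of the terminals a, b, c
        return env[tree]
    op, left, right = tree                 # internal node: E op E
    l, r = evaluate(left), evaluate(right)
    return l + r if op == "+" else l * r

tree1 = ("+", ("*", "a", "b"), "c")        # grouping (a*b)+c, from LM derivation #1
tree2 = ("*", "a", ("+", "b", "c"))        # grouping a*(b+c), from LM derivation #2
print(evaluate(tree1), evaluate(tree2))    # 10 14: same string, different values
```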