A Second Course in Formal Languages and Automata Theory, by Jeffrey Shallit: Index

Total pages: 16

File type: PDF, size: 1020 KB

Index

2DFA, 66 Berstel, J., 48, 106 2DPDA, 213 Biedl, T., 135 3-SAT, 21 Van Biesbrouck, M., 139 Birget, J.-C., 107 abelian cube, 47 blank symbol, B, 14 abelian square, 47, 137 Blum, N., 106 Ackerman, M., xi Boolean closure, 221 Adian, S. I., 43, 47, 48 Boolean grammar, 223 Aho, A. V., 173, 201, 223 Boolean matrices, 73, 197 Alces, A., 29 Boonyavatana, R., 139 alfalfa, 30, 34, 37 border, 35, 104 Allouche, J.-P., 48, 105 bordered, 222 alphabet, 1 bordered word, 35 unary, 1 Borges, M., xi always-halting TM, 209 Borwein, P., 48 ambiguous, 10 Boyd, D., 223 NFA, 102 Brady, A. H., 200 Angluin, D., 105 branch point, 113 angst Brandenburg, F.-J., 139 existential, 54 Brauer, W., 106 antisymmetric, 92 Brzozowski, J., 27, 106 assignment Bucher, W., 138 satisfying, 20 Buntrock, J., 200 automaton Burnside problem for groups, 42 pushdown, 11 Buss, J., xi, 223 synchronizing, 105 busy beaver problem, 183, 184, 200 two-way, 66 Axelrod, R. M., 105 calliope, 3 Cantor, D. C., 201 Bader, C., 138 Cantor set, 102 balanced word, 137 cardinality, 1 Bar-Hillel, Y., 201 census generating function, 133 Barkhouse, C., xi Černý's conjecture, 105 base alphabet, 4 Chaitin, G. J., 200 Berman, P., 200 characteristic sequence, 28

chess, 41 cycle operation, 59 German rule in, 42 cyclic shift, 34, 60 laws of, 41 CYK algorithm, 141, 142, 144 Chomsky hierarchy, 212 Chomsky, N., 27, 201, 212, 223 Damon, T., xi Chomsky normal form, 10 Davis, M., 27 clickomania, 135 DCFL's, 126 closed, 7 closure under complement, 128 closure properties, 7 decision problem co-CFL, 132 solvable, 17 Cocke, J., 141 deeded, 47 cold, 3 deified, 3 commutative image, 122 derivation, 10 comparable, 92 derivation tree, 10 complement deterministic context-free languages, of a set, 1 126 complete item, 145 deterministic finite automaton, 4 complex, 178 definition, 4 complexity class, 19 deterministic PDA, 126 compression, 178 DFA, 4 optimal, 179 DiRamio, M., xi computing model directed graph, 73 robust, 66 Dirichlet's theorem, 95 concatenation division-free sequence, 93 of languages, 3 drawer, 3 of strings, 3 Drobot, V., 173 configuration, 12, 14 Du, D.-Z., 139 conflict, 156 conjugate, 34, 44, 60, 137 Earley, J., 173 conjunctive grammar, 222 Earley's parsing method, 144 conjunctive language, 222 Eggan, L. C., 105 conjunctive normal form, 20 Ehrig, H., 138 context-free grammar, 8, 9 Ellul, K., xi definition of, 9 empty stack pure, 133 acceptance by in PDA, 12 context-free language empty string, 2 definition of, 10 Engels, G., 138 context-free languages enlightenment, 2 Boolean closure of, 221 entanglement, 34 closed under inverse morphism, 109 ε-production, 10 closed under substitution, 108 ε-transition, 6 context-sensitive grammar, 202 equivalence classes, 77 context-sensitive language, 202 equivalence relation, 77 Cook, S. A., 26, 223 index, 77 Coppersmith, D., 106 Myhill-Nerode, 78 Crochemore, M., 47 refinement of, 77 CSG, 202 right invariant, 78 CSL, 202 Euwe, M., 42, 48 cube, 37 exponent abelian, 47 of a group, 43 cubefree, 37 extended regular expression, 23

factor, 27 Hadley, R., xi factoring, 162 Haines, L. H., 107 Fermat's last theorem, 36 Hall, M., 43 Fernandes, S., xi halting state, 14 Fibonacci number, 132 Hanan, J., 222 final state, 4 handle, 164 acceptance by in PDA, 12 Harju, T., 48, 106 Fine, N. J., 48 Harrison, M. A., 107, 223 finite index, 77 Hartmanis, J., 105 finite-state transducer, 61, 102 Hashiguchi, K., 105 Fischer, P. C., 139 high information content, 178 Floyd, R. W., 201 homomorphism, 29, 54 FORTRAN, 133 iterated, 29 fractional power, 34 Hopcroft, J. E., 26, 106, 173, 201, 223 Franz, J., xi Friedman property, 100 Ibarra, O. H., 105, 223 full configuration immediate left recursion, 161 of a 2DPDA, 214 Immerman, N., 210, 223 function incidence matrix, 73 partial, 14 incompressibility method, 180, 182 Gabarro, J., 138 incompressible, 179 Garey, M. R., 27 index Gaspar, C., xi of an equivalence relation, 77 Gazdar, G., 138 finite, 77 general sequential machine, indistinguishable, 82 106 inductive counting, 210 German rule, 42 infinite sequence, 28 Ginsburg, S., 106, 138, 139, 201 infinite string, 28 Glaister, I., 107 infinite word, 28 Glover, M., xi Ingalls, C., 48 Golbeck, R., xi inherently ambiguous, 114, 115 Goldstine, J., 138, 139 initial state, 5 Golod, D., xi input grammar, 8 accepted, 4 Boolean, 223 rejected, 4 conjunctive, 222 integer context-free, 8 random, 181 context-sensitive, 202 interchange lemma, 118–121 length-increasing, 203 invalid computation, 190 Type 0, 200 invariance theorem, 178 unrestricted, 174–176 inverse morphism, 57 Gray, J. N., 223 inverse substitution, 97, 135 Greibach, S. A., 139 alternate definition, 98, 135 Gries, D., 106 Istead, J., xi group item, 163 exponent of, 43 complete, 145 infinite periodic, 46 periodic, 43 Jiang, T., 105, 107 torsion, 43 Johnson, D. S., 27 Gupta, N., 47 Jones, N. D., 223

Kalbfleisch, R., xi Lin, S., 200 Kao, J.-Y., xi Lindenmayer, A., 222 Karhumäki, J., 48, 106 linear, 122 Karp reduction, 20 linear bounded automaton, 206 Kasami, T., 141 linear grammar, 130 Keanie, A., xi linear languages, 130 Kfoury, A. J., 47 pumping lemma for, 131 Kleene closure, 4 Linster, B. G., 105 Klein, E., 138 Litt, A., xi Knuth, D. E., 173 Liu, L. Y., 138 Knuth DFA, 164 log(L), 76 Knuth NFA-ε, 165 Lorentz, R. J., 200 Ko, K.-I., 139 Lothaire, M., 48, 107 Kolmogorov complexity, 177, 178 LR(0) grammar uncomputable, 179 definition of, 167 Koo, A., xi Lucier, B., xi Kozen, D., 105 Lutz, B., xi Kreowski, H.-J., 138 Lyndon, R. C., 30, 48 Krieger, D., xi Lyndon–Schützenberger theorems, 30, 31 Kuroda, S. Y., 223 MacDonald, I., xi Landry, D., xi Main, M. G., 138 language, 2 Martin, A., xi conjunctive, 222 Martin, G., xi context-free, 10 Martin, J. C., 26, 139 inherently ambiguous, 115 Martinez, A., xi prefix, 3, 54, 96 Marxen, H., 200, 201 recursive, 15 Maslov, A. N., 107 recursively enumerable, 15 matrices regular, 4 Boolean, 73, 197 suffix, 3 matrix multiplication, 75 languages, 3 Maurer, H. A., 138 product of, 3 McCulloch, W. S., 27 Laplace, P. S., 177 McNaughton, R., 105, 106 Lawson, M. V., 106 Mealy, G. H., 106 LBA, 206 Mealy machine, 49 deterministic, 223 Mehta, G., xi leading zeros Miltchman, O., xi removing, 54, 62 minimal Lee, J., xi in partially ordered set, 94 left quotient, 106 minimal word, 133 leftmost derivation, 10 mirror invariant, 45 length-increasing grammar, 203 Möbius function, 46 letters, 1 Montanari, U., 138 Levi's lemma, 30 Moore, E. F., 106 Lewis, H. R., 26 Moore machine, 49 lexicographic order, 3 moose, 3 lexicographically median, 100 antlers, 29 Li, M., 200 morphism, 29, 54 Ligocki, S., 201 convention when defining, 29 Ligocki, T. J., 201 iterated, 29

nonerasing, 104 outshout, 35 overlap-free, 43 overlap-free, 37, 39 Morse, M., 38 morphism, 43 Motwani, R., 26 Moura, A., 138 P (polynomial time complexity class), 19 multigrade problem, 40 Palacios, A., xi murmur, 104 palindrome, 3 Myhill, J., 106 palindromic closure, 104 Myhill–Nerode equivalence relation, Panini, 27 78 Papadimitriou, C. H., 26, 27 Myhill–Nerode theorem, 77 Papert, S., 106 Parikh map, 122 n! trick, 114 Parikh, R. J., 139 Nazari, S., xi parse tree, 10 Nerode, A., 106 partial configuration Nevraumont, A. F., 200 of a 2DPDA, 218 NFA, 6 partial function, 14 ambiguous, 102 partial order, 92 NFA-ε, 6 pattern, 104, 199 Ng, S., xi pattern matching, 199 Nguyen, L., xi PCF grammar, 133 Nichols, M., xi PCP, 186 Nivat's theorem PDA for transducers, 63 acceptance by empty stack, 12 nonconstructive, 54 acceptance by final state, 12 nondeterministic finite automaton, 6 deterministic, 126 nondeterministic state complexity, 90 perfect shuffle, 3, 120 nonerasing morphism, 104 period, 29 nonpalindromes, 9 periodic nontrivial purely, 29 prefix, 2 ultimately, 29 suffix, 2 Perles, M., 201 Nørgård, P., 42 permutations, 97 normal form for transducers, 64 Perrin, D., 47, 48 Novikov, P. S., 43 photograph, 35 Nowotka, D., 48 π, 43 Nozaki, A., 106 Pilling, D. L., 139 NP-complete, 20 Pin, J.-E., 47, 48, 105 NP (nondeterministic polynomial time), Pitts, W., 27 20 positive closure, 4, 103 NPSPACE, 21 Post correspondence problem, 186, 201 Post, E., 201 ODDPAL, 9 power, 33 Ogden, W., 138, 139 power set, 1 Ogden's lemma, 112 prefix, 2 example of, 114 nontrivial, 2 Okhotin, A., 222, 223 proper, 2 ω-language, 47 prefix language, 3, 54, 96 order prefix-free encoding, 181 lexicographic, 3 preperiod, 29 radix, 3 Price, J. K., 138

prime number theorem, 180 removing trailing zeros, 54 primes repaper, 3 in unary, 7 reversal, 3 PRIMES2 (primes in base 2), 2, 8, 17 reward, 3 irregularity of, 7 Rice, H.
Recommended publications
  • Randomness and Computation
    Randomness and Computation Rod Downey School of Mathematics and Statistics, Victoria University, PO Box 600, Wellington, New Zealand [email protected] Abstract This article examines work seeking to understand randomness using computational tools. The focus here will be how these studies interact with classical mathematics, and progress in the recent decade. A few representative and easier proofs are given, but mainly we will refer to the literature. The article could be seen as a companion to, as well as focusing on developments since, the paper "Calibrating Randomness" from 2006, which focused more on how randomness calibrations correlated to computational ones. 1 Introduction The great Russian mathematician Andrey Kolmogorov appears several times in this paper. First, around 1930, Kolmogorov and others founded the theory of probability, basing it on measure theory. Kolmogorov's foundation does not seek to give any meaning to the notion of an individual object, such as a single real number or binary string, being random, but rather studies the expected values of random variables. As we learn at school, all strings of length n have the same probability of 2^{-n} for a fair coin. A set consisting of a single real has probability zero. Thus there is no meaning we can ascribe to randomness of a single object. Yet we have a persistent intuition that certain strings of coin tosses are less random than others. The goal of the theory of algorithmic randomness is to give meaning to randomness content for individual objects. Quite aside from the intrinsic mathematical interest, the utility of this theory is that using such objects instead of distributions might be significantly simpler, perhaps giving alternative insight into what randomness might mean in mathematics, and perhaps in nature.
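    To make the uniform-measure point above concrete: for a fair coin, every particular sequence of n tosses receives exactly the same probability,

        P(000...0) = P(0101...01) = P(x) = 2^{-n}   for every fixed x in {0,1}^n,

    so classical probability cannot say that the all-zeros outcome is any less "random" than a typical-looking one; assigning such a meaning to individual strings is exactly what algorithmic randomness sets out to do.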
  • Extended Finite Automata over Groups
    Extended finite automata over groups Víctor Mitrana, Ralf Stiebe Abstract Some results from Dassow and Mitrana (Internat. J. Comput. Algebra (2000)), Greibach (Theoret. Comput. Sci. 7 (1978) 311) and Ibarra et al. (Theoret. Comput. Sci. 2 (1976) 271) are generalized for finite automata over arbitrary groups. The closure properties of these automata are poorer and the accepting power is smaller when abelian groups are considered. We prove that the addition of any abelian group to a finite automaton is less powerful than the addition of the multiplicative group of rational numbers. Thus, each language accepted by a finite automaton over an abelian group is actually an unordered vector language. Characterizations of the context-free and recursively enumerable language classes are set up in the case of non-abelian groups. We also investigate deterministic finite automata over groups, especially over abelian groups. Keywords: Finite automata over groups; Closure properties; Accepting capacity; Interchange lemma; Free groups 1. Introduction One of the oldest and most investigated devices in automata theory is the finite automaton. Many fundamental properties have been established and many problems are still open. Unfortunately, finite automata without any external control have a very limited accepting power. Different directions of research have been considered for overcoming this limitation. The best-known extension added to a finite automaton is the pushdown memory. In this way, a considerable increase of the accepting capacity has been achieved: pushdown automata are able to recognize all context-free languages. Another simple and natural extension, related somehow to the pushdown memory, was considered in a series of papers [2,4,5], namely to associate an element of a given group to each configuration, but no information regarding the associated element is allowed.
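    As a concrete illustration of the model described above, here is a minimal sketch (my own naming, not the paper's notation) of a finite automaton extended with the additive group (Z, +): each transition contributes a group element, and a word is accepted when a run ends in a final state with the accumulated element equal to the identity. The automaton below accepts { a^n b^n : n >= 1 }, a non-regular language, which shows how the group register adds power.

        # Extended finite automaton over (Z, +): accept by final state with
        # accumulated group element equal to the identity 0.
        TRANSITIONS = {
            ('q0', 'a'): ('q0', +1),   # stay in q0, add +1 for each 'a'
            ('q0', 'b'): ('q1', -1),   # switch to reading b's, add -1
            ('q1', 'b'): ('q1', -1),
        }
        FINAL = {'q1'}

        def accepts(word):
            state, value = 'q0', 0            # start state, group identity of (Z, +)
            for ch in word:
                if (state, ch) not in TRANSITIONS:
                    return False              # no available move: reject
                state, g = TRANSITIONS[(state, ch)]
                value = value + g             # combine with the group element (here: add)
            return state in FINAL and value == 0

        assert accepts('aaabbb') and accepts('ab')
        assert not accepts('aabbb') and not accepts('ba') and not accepts('')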
  • Applications of an Infinite Square-Free Co-CFL
    Theoretical Computer Science 49 (1987) 113-119, North-Holland. APPLICATIONS OF AN INFINITE SQUARE-FREE CO-CFL. Michael G. MAIN, Department of Computer Science, University of Colorado, Boulder, CO 80309, U.S.A. Walter BUCHER, Institute for Information Processing, Technical University of Graz, A-8010 Graz, Austria. David HAUSSLER, Department of Mathematics and Computer Science, University of Denver, Denver, CO 80208, U.S.A. Abstract. We disprove several conjectures about context-free languages. The proofs use the set of all strings which are not prefixes of Thue's infinite square-free sequence. This is a context-free language with an infinite square-free complement. 1. Introduction A square is an immediately repeated nonempty string, e.g., aa, abab, newyorknewyork. A string x is called square-containing if it contains a substring which is a square. For example, the string mississippi contains the segment iss twice in a row; in fact, mississippi contains a total of five squares. On the other hand, colorado contains no squares (the character o appears several times, but not consecutively). A string without squares is called square-free. In a study of context-free languages, Autebert et al. [2,3] collected several conjectures about square-containing strings. One of the weakest conjectures was that no context-free language (CFL) could include all of the square-containing strings and still have an infinite complement. Contrary to this conjecture, such a language has recently been constructed [14].
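    The definitions in this abstract are easy to check mechanically. The small sketch below (the helper names are mine) enumerates square occurrences by brute force and confirms the two examples given above: mississippi contains five squares, while colorado is square-free.

        def square_occurrences(w):
            """All (position, length) pairs at which a square xx occurs in w."""
            n = len(w)
            return [(i, 2 * l)
                    for l in range(1, n // 2 + 1)
                    for i in range(n - 2 * l + 1)
                    if w[i:i + l] == w[i + l:i + 2 * l]]

        def is_square_free(w):
            return not square_occurrences(w)

        assert len(square_occurrences('mississippi')) == 5   # the five squares mentioned above
        assert is_square_free('colorado')                    # no square, as claimed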
  • Kolmogorov Complexity of Graphs (John Hearn, Harvey Mudd College)
    Claremont Colleges Scholarship @ Claremont, HMC Senior Theses, HMC Student Scholarship, 2006. Kolmogorov Complexity of Graphs, John Hearn, Harvey Mudd College. Recommended Citation: Hearn, John, "Kolmogorov Complexity of Graphs" (2006). HMC Senior Theses. 182. https://scholarship.claremont.edu/hmc_theses/182 This Open Access Senior Thesis is brought to you for free and open access by the HMC Student Scholarship at Scholarship @ Claremont. It has been accepted for inclusion in HMC Senior Theses by an authorized administrator of Scholarship @ Claremont. For more information, please contact [email protected]. Applications of Kolmogorov Complexity to Graphs. John Hearn. Ran Libeskind-Hadas, Advisor. Michael Orrison, Reader. May, 2006. Department of Mathematics. Copyright © 2006 John Hearn. The author grants Harvey Mudd College the nonexclusive right to make this work available for noncommercial, educational purposes, provided that this copyright statement appears on the reproduced materials and notice is given that the copying is by permission of the author. To disseminate otherwise or to republish requires written permission from the author. Abstract: Kolmogorov complexity is a theory based on the premise that the complexity of a binary string can be measured by its compressibility; that is, a string's complexity is the length of the shortest program that produces that string. We explore applications of this measure to graph theory. Contents: Abstract; Acknowledgments; 1 Introductory Material (1.1 Definitions and Notation; 1.2 The Invariance Theorem; 1.3 The Incompressibility Theorem); 2 Graph Complexity and the Incompressibility Method (2.1 Complexity of Labeled Graphs; 2.2 The Incompressibility Method ...)
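    One standard way to apply this measure to graphs, and presumably the starting point for bounds of the form K(G) <= n(n-1)/2 + O(1) for a labeled graph G on n vertices, is to encode the graph as the bit string listing the upper triangle of its adjacency matrix. The sketch below uses my own helper names (it is not code from the thesis) and shows that the encoding is lossless.

        from itertools import combinations

        def graph_to_bits(n, edges):
            """Upper-triangular adjacency bits of a labeled graph on vertices 0..n-1."""
            edge_set = {frozenset(e) for e in edges}
            return ''.join('1' if frozenset((i, j)) in edge_set else '0'
                           for i, j in combinations(range(n), 2))

        def bits_to_graph(n, bits):
            """Inverse of graph_to_bits: recover the sorted edge list."""
            return [(i, j) for (i, j), b in zip(combinations(range(n), 2), bits) if b == '1']

        # A 4-cycle needs exactly 4*3/2 = 6 bits, and decoding recovers the edges.
        bits = graph_to_bits(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
        assert len(bits) == 6
        assert bits_to_graph(4, bits) == [(0, 1), (0, 3), (1, 2), (2, 3)]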
  • Compressing Information, Kolmogorov Complexity, and Optimal Decompression
    Compressing information. Almost everybody now is familiar with compressing/decompressing programs such as zip, gzip, etc. A compressing program can be applied to any file and produces the "compressed version" of that file. If we are lucky, the compressed version is much shorter than the original one. However, no information is lost: the decompression program can be applied to the compressed version to get the original file. [Question: A software company advertises a compressing program and claims that this program can compress any sufficiently long file to at most 90% of its original size. Would you buy this program?] How does compression work? A compression program tries to find some regularities in a file which allow it to give a description of the file which is shorter than the file itself; the decompression program reconstructs the file using this description. Optimal decompression algorithm. The definition of K_U depends on U. For the trivial decompression algorithm U(y) = y we have K_U(x) = |x|. One can try to find better decompression algorithms, where "better" means "giving smaller complexities". However, the number of short descriptions is limited: there are fewer than 2^n strings of length less than n. Therefore, for any fixed decompression algorithm the number of words whose complexity is less than n does not exceed 2^n - 1. One may conclude that there is no "optimal" decompression algorithm, because we can assign short descriptions to some strings only by taking them away from other strings. However, Kolmogorov made a simple but crucial observation: there is an asymptotically optimal decompression algorithm. Definition 1. An algorithm U is asymptotically not worse than an algorithm V if K_U(x) <= K_V(x) + C.
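    Written out, the counting bound in the passage above is

        |{ x : K_U(x) < n }|  <=  |{ descriptions of length < n }|  =  2^0 + 2^1 + ... + 2^{n-1}  =  2^n - 1,

    and since there are 2^n strings of length exactly n, at least one of them has K_U(x) >= n, no matter which decompression algorithm U is fixed.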
  • The Folklore of Sorting Algorithms
    IJCSI International Journal of Computer Science Issues, Vol. 4, No. 2, 2009. ISSN (Online): 1694-0784, ISSN (Print): 1694-0814. The Folklore of Sorting Algorithms. Dr. Santosh Khamitkar(1), Parag Bhalchandra(2), Sakharam Lokhande(2), Nilesh Deshmukh(2), School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded (MS) 431605, India. Email: (1) [email protected], (2) [email protected] Abstract The objective of this paper is to review the folklore knowledge seen in research work devoted to the synthesis, optimization, and effectiveness of various sorting algorithms. We examine sorting algorithms along these folklore lines and try to discover the tradeoffs between folklore and theorems. Finally, the folklore knowledge on complexity values of the sorting algorithms will be considered, verified and subsequently converged into theorems. Key words: Folklore, Algorithm analysis, Sorting algorithm, Computational Complexity notations. 1. Introduction Folklore is the traditional beliefs, legends and customs current among people. Every researcher who attempted optimization in the past has stuck to his or her own observations regarding sorting experiments and produced his or her own results. Since these results were specific to the software and hardware environment used, we found an abundance of complexity analyses which could possibly be thought of as folklore knowledge. We tried to review this folklore knowledge in past research work and came to the conclusion that it was mainly devoted to the synthesis, optimization, and effectiveness of the working styles of various sorting algorithms. Our work is not mere commentary on earlier work; rather, we present the knowledge embedded in sorting folklore from a different and interesting viewpoint so that students, teachers and researchers can easily understand it.
  • On the Language of Primitive Words*
    Theoretical Computer Science 161 (1996) 141-156, Elsevier. On the language of primitive words. H. Petersen, Universität Stuttgart, Institut für Informatik, Breitwiesenstr. 20-22, D-70565 Stuttgart, Germany. Received April 1994; revised June 1995. Communicated by P. Enjalbert. Abstract A word is primitive if it is not a proper power of a shorter word. We prove that the set Q of primitive words over an alphabet is not an unambiguous context-free language. This strengthens the previous result that Q cannot be deterministic context-free. Further we show that the same holds for the set L of Lyndon words. We investigate the complexity of Q and L. We show that there are families of constant-depth, polynomial-size, unbounded fan-in boolean circuits and two-way deterministic pushdown automata recognizing both languages. The latter result implies efficient decidability on the RAM model of computation and we analyze the number of comparisons required for deciding Q. Finally we give a new proof showing a related language not to be context-free, which only relies on properties of semi-linear and regular sets. 1. Introduction This paper presents some further steps towards characterizing the set of primitive words over some fixed alphabet in terms of classical formal language theory and complexity. The notion of primitivity of words plays a central role in algebraic coding theory [28] and the combinatorial theory of words [23,24]. Recently attention has been drawn to the language Q of all primitive words over an alphabet with at least two symbols.
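    Membership in Q is easy to test in practice via a classical observation about primitive words: a nonempty word w is a proper power exactly when w occurs inside ww at some position strictly between 0 and |w|. A minimal sketch (the helper name is mine, not the paper's):

        def is_primitive(w):
            """True iff w is not of the form u^k for some shorter word u and k >= 2."""
            if not w:
                return False                      # the empty word is not primitive
            return (w + w).find(w, 1) == len(w)   # first nontrivial occurrence of w in ww

        assert is_primitive('abaab')              # not a power of a shorter word
        assert not is_primitive('abab')           # (ab)^2
        assert not is_primitive('aaa')            # a^3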
  • Analysis of Sorting Algorithms by Kolmogorov Complexity (A Survey)
    Analysis of Sorting Algorithms by Kolmogorov Complexity (A Survey). Paul Vitányi. December 18, 2003. Abstract Recently, many results on the computational complexity of sorting algorithms were obtained using Kolmogorov complexity (the incompressibility method). Especially, the usually hard average-case analysis is amenable to this method. Here we survey such results about Bubblesort, Heapsort, Shellsort, Dobosiewicz-sort, Shakersort, and sorting with stacks and queues in sequential or parallel mode. Especially in the case of Shellsort the uses of Kolmogorov complexity surprisingly easily resolved problems that had stayed open for a long time despite strenuous attacks. 1 Introduction We survey recent results in the analysis of sorting algorithms using a new technical tool: the incompressibility method based on Kolmogorov complexity. Complementing approaches such as the counting method and the probabilistic method, the new method is especially suited for the average-case analysis of algorithms and machine models, whereas average-case analysis is usually more difficult than worst-case analysis using the traditional methods. Obviously, the results described can be obtained using other proof methods: all true provable statements must be provable from the axioms of mathematics by the inference methods of mathematics. The question is whether a particular proof method facilitates and guides the proving effort. The following examples make clear that thinking in terms of coding and the incompressibility method suggests simple proofs that resolve long-standing open problems. A survey of the use of the incompressibility method in combinatorics, computational complexity, and the analysis of algorithms is [16], Chapter 6, and other recent work is [2, 15].
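    To convey the flavor of the method in its simplest form: the sequence of outcomes of the c comparisons made by a comparison-based sorting algorithm determines the input permutation, so it is a c-bit description of that permutation. Choosing an incompressible permutation pi of {1, ..., n}, with C(pi) >= log2(n!) - O(1), therefore forces

        c  >=  log2(n!) - O(1)  =  n log2 n - O(n)

    comparisons on that input. The average-case analyses surveyed in the paper refine this style of argument for specific algorithms such as Heapsort and Shellsort.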
  • Pattern Selector Grammars and Several Parsing Algorithms in the Context-Free Style
    Journal of Computer and System Sciences 30, 249-273 (1985). Pattern Selector Grammars and Several Parsing Algorithms in the Context-free Style. J. GONCZAROWSKI AND E. SHAMIR, Institute of Mathematics and Computer Science, The Hebrew University of Jerusalem, Jerusalem 91904, Israel. Received March 30, 1982; accepted March 5, 1985. Pattern selector grammars are defined in general. We concentrate on the study of special grammars, the pattern selectors of which contain precisely k "ones" (0*(10*)^k) or k adjacent "ones" (0*1^k 0*). This means that precisely k symbols (resp. k adjacent symbols) in each sentential form are rewritten. The main results concern parsing algorithms and the complexity of the membership problem. We first obtain a polynomial bound on the shortest derivation and hence an NP time bound for parsing. In the case k = 2, we generalize the well-known context-free dynamic programming type algorithms, which run in polynomial time. It is shown that the generated languages, for k = 2, are log-space reducible to the context-free languages. The membership problem is thus solvable in log^2 space. © 1985 Academic Press, Inc. 1. INTRODUCTION The parsing and membership testing algorithms for context-free (CF) grammars occupy a peculiar border position in the complexity hierarchy. The dynamic programming algorithm runs in O(n^3) steps [21], but more refined methods reduce the problem to matrix multiplication with O(n^(2+a)) run-time, where a < 1 [20]. Earley's algorithm runs in time O(n^2) for grammars with bounded ambiguity [4].
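    The "context-free dynamic programming type algorithms" referred to above are CYK-style recognizers. Below is a minimal CYK recognizer for a grammar in Chomsky normal form, shown on an illustrative grammar for { a^n b^n : n >= 1 } (my own example, not a grammar from the paper); it fills a table indexed by substring start and length in O(n^3) time.

        GRAMMAR = {                        # nonterminal -> list of CNF right-hand sides
            'S': [('A', 'X'), ('A', 'B')],
            'X': [('S', 'B')],
            'A': [('a',)],
            'B': [('b',)],
        }

        def cyk(word, grammar, start='S'):
            n = len(word)
            if n == 0:
                return False
            # table[i][l-1] = set of nonterminals deriving word[i:i+l]
            table = [[set() for _ in range(n)] for _ in range(n)]
            for i, ch in enumerate(word):
                table[i][0] = {A for A, rhss in grammar.items() if (ch,) in rhss}
            for length in range(2, n + 1):               # span length
                for i in range(n - length + 1):          # span start
                    for split in range(1, length):       # split point
                        left = table[i][split - 1]
                        right = table[i + split][length - split - 1]
                        for A, rhss in grammar.items():
                            for rhs in rhss:
                                if len(rhs) == 2 and rhs[0] in left and rhs[1] in right:
                                    table[i][length - 1].add(A)
            return start in table[0][n - 1]

        assert cyk('aabb', GRAMMAR) and cyk('ab', GRAMMAR) and not cyk('aab', GRAMMAR)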
  • New Applications of the Incompressibility Method (Extended Abstract)
    New Applications of the Incompressibility Method (Extended Abstract). Harry Buhrman(1), Tao Jiang(2), Ming Li(3), and Paul Vitányi(1). (1) CWI, Kruislaan 413, 1098 SJ Amsterdam, The Netherlands, {buhrman,paulv}@cwi.nl. Supported in part via the NeuroCOLT II ESPRIT Working Group. (2) Dept. of Computing and Software, McMaster University, Hamilton, Ontario L8S 4K1, Canada, jiang@cas.mcmaster.ca. Supported in part by NSERC and CITO grants. (3) Dept. of Computer Science, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada, mli@math.uwaterloo.ca. Supported in part by NSERC and CITO grants and a Steacie Fellowship. Abstract. The incompressibility method is an elementary yet powerful proof technique based on Kolmogorov complexity [13]. We show that it is particularly suited to obtain average-case computational complexity lower bounds. Such lower bounds have been difficult to obtain in the past by other methods. In this paper we present four new results and also give four new proofs of known results to demonstrate the power and elegance of the new method. 1 Introduction The incompressibility of individual random objects yields a simple but powerful proof technique: the incompressibility method. This method is a general-purpose tool that can be used to prove lower bounds on computational problems, to obtain combinatorial properties of concrete objects, and to analyze the average complexity of an algorithm. Since the early 1980's, the incompressibility method has been successfully used to solve many well-known questions that had been open for a long time and to supply new simplified proofs for known results. Here we demonstrate how easily the incompressibility method can be used in the particular case of obtaining average-case computational complexity lower bounds.
  • Computability and Randomness
    COMPUTABILITY AND RANDOMNESS. ROD DOWNEY AND DENIS R. HIRSCHFELDT. 1. Historical roots. 1.1. Von Mises. Around 1930, Kolmogorov and others founded the theory of probability, basing it on measure theory. Probability theory is concerned with the distribution of outcomes in sample spaces. It does not seek to give any meaning to the notion of an individual object, such as a single real number or binary string, being random, but rather studies the expected values of random variables. How could a binary string representing a sequence of n coin tosses be random, when all strings of length n have the same probability of 2^{-n} for a fair coin? Less well known than the work of Kolmogorov are early attempts to answer this kind of question by providing notions of randomness for individual objects. The modern theory of algorithmic randomness realizes this goal. One way to develop this theory is based on the idea that an object is random if it passes all relevant "randomness tests". For example, by the law of large numbers, for a random real X, we would expect the number of 1's in the binary expansion of X to have limiting frequency 1/2. (That is, writing X(j) for the jth bit of this expansion, we would expect to have lim_{n → ∞} |{ j < n : X(j) = 1 }| / n = 1/2.) Indeed, we would expect X to be normal to base 2, meaning that for any binary string σ of length k, the occurrences of σ in the binary expansion of X should have limiting frequency 2^{-k}.
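    The normality property described above is easy to probe empirically. The sketch below uses pseudorandom bits as a stand-in for the binary expansion of X and checks that every block of length k = 3 occurs with frequency close to 2^{-3} = 1/8; it is only an illustration of the definition, not a randomness test in the technical sense.

        import random
        from collections import Counter

        random.seed(0)
        bits = ''.join(random.choice('01') for _ in range(100_000))

        k = 3
        total = len(bits) - k + 1
        counts = Counter(bits[i:i + k] for i in range(total))   # overlapping k-blocks
        for block in sorted(counts):
            print(block, round(counts[block] / total, 4))       # each frequency is near 0.125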
  • Comparisons of Parikh's Condition
    J. Lopez, G. Ramos, and R. Morales, "Comparación de la Condición de Parikh con algunas Condiciones de los Lenguajes de Contexto Libre", II Jornadas de Informática y Automática, pp. 305-314, 1996. NICS Lab. Publications: https://www.nics.uma.es/publications COMPARISONS OF PARIKH'S CONDITION TO OTHER CONDITIONS FOR CONTEXT-FREE LANGUAGES. G. Ramos-Jiménez, J. López-Muñoz and R. Morales-Bueno, E.T.S. de Ingeniería Informática - Universidad de Málaga, Dpto. Lenguajes y Ciencias de la Computación, P.O.B. 4114, 29080 - Málaga (SPAIN), e-mail: [email protected] Abstract: In this paper we first compare Parikh's condition to various pumping conditions (Bar-Hillel's pumping lemma, Ogden's condition and Bader-Moura's condition); secondly, to the interchange condition; and finally, to Sokolowski's and Grant's conditions. In order to carry out these comparisons we present some properties of Parikh's languages. The main result is the orthogonality of the previously mentioned conditions and Parikh's condition. Keywords: Context-Free Languages, Parikh's Condition, Pumping Lemmas, Interchange Condition, Sokolowski's and Grant's Condition. 1. INTRODUCTION Context-free grammars and the family of languages they describe, context-free languages, were initially defined to formalize the grammatical properties of natural languages. Afterwards, their considerable practical importance was noticed, especially for defining programming languages, formalizing the notion of parsing, simplifying the translation of programming languages, and in other string-processing applications. It is very useful to discover the internal structure of a formal language class during its study. The determination of structural properties allows us to increase our knowledge about this language class.
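    Parikh's condition is phrased in terms of the Parikh map (commutative image), which simply records how many times each letter occurs in a word; Parikh's theorem states that for a context-free language the set of these vectors is semilinear. A small sketch (the helper name is mine):

        from collections import Counter

        def parikh_vector(word, alphabet):
            """Commutative image of a word: occurrence count of each letter, in alphabet order."""
            counts = Counter(word)
            return tuple(counts[a] for a in alphabet)

        # Two different words with the same commutative image (3 a's, 2 b's):
        assert parikh_vector('aabab', 'ab') == parikh_vector('ababa', 'ab') == (3, 2)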