A First-Order Isomorphism Theorem


Eric Allender†
Department of Computer Science, Rutgers University, New Brunswick, NJ, USA 08903
[email protected]

José Balcázar‡
U. Politècnica de Catalunya, Departamento L.S.I., Pau Gargallo 5, E-08071 Barcelona, Spain
[email protected]

Neil Immerman§
Computer Science Department, University of Massachusetts, Amherst, MA, USA 01003
[email protected]

Abstract

We show that for most complexity classes of interest, all sets complete under first-order projections (fops) are isomorphic under first-order isomorphisms. That is, a very restricted version of the Berman-Hartmanis Conjecture holds. Since "natural" complete problems seem to stay complete via fops, this indicates that up to first-order isomorphism there is only one "natural" complete problem for each "nice" complexity class.

A preliminary version of this work appeared in Proc. 10th Symposium on Theoretical Aspects of Computer Science, 1993, Lecture Notes in Computer Science 665, pp. 163-174.

† Some of this work was done while on leave at Princeton University; supported in part by National Science Foundation grant CCR-9204874.
‡ Supported in part by ESPRIT-II BRA EC project 3075 (ALCOM) and by Acción Integrada Hispano-Alemana 131 B.
§ Supported by NSF grant CCR-9207797.

1 Introduction

In 1977 Berman and Hartmanis noticed that all NP-complete sets that they knew of were polynomial-time isomorphic [BH77]. They made their now-famous isomorphism conjecture: namely, that all NP-complete sets are polynomial-time isomorphic. This conjecture has engendered a large amount of work (cf. [KMR90, You] for surveys).

The isomorphism conjecture was made using the notion of NP-completeness via polynomial-time many-one reductions because that was the standard definition at the time. In [Coo], Cook proved that the Boolean satisfiability problem (SAT) is NP-complete via polynomial-time Turing reductions.
Over the years SAT has been shown complete via weaker and weaker reductions, e.g. polynomial-time many-one [Kar], logspace many-one [Jon], one-way logspace many-one [HIM], and first-order projections (fops) [Dah]. These last reductions, defined in Section 3, are provably weaker than logspace reductions. It has been observed that natural complete problems for various complexity classes including NC¹, L, NL, P, NP, and PSPACE remain complete via fops, cf. [I87, IL, SV, Ste, MI]. On the other hand, Joseph and Young [JY] have pointed out that polynomial-time many-one reductions may be so powerful as to allow unnatural NP-complete sets. Most researchers now believe that the isomorphism conjecture as originally stated by Berman and Hartmanis is false.¹

We feel that the choice of polynomial-time many-one reductions in the statement of the Isomorphism Conjecture was made in part for historical rather than purely scientific reasons. To elaborate on this claim, note that the class NP arises naturally in the study of logic and can be defined entirely in terms of logic, without any mention of computation [Fa]. Thus it is natural to have a notion of NP-completeness that is formulated entirely in terms of logic. On another front, Valiant [Val] noticed that reducibility can be formulated in algebra using the natural notion of a projection, again with no mention of computation. The sets that are complete under fops are complete in all of these different ways of formulating the notion of NP-completeness.

Since natural complete problems turn out to be complete via very low-level reductions such as fops, it is natural to modify the isomorphism conjecture to consider completeness via fops. Motivating this in another way, one could propose, as a slightly more general form of the isomorphism conjecture, the question: is completeness a sufficient structural condition for isomorphism?
Our work answers this question by presenting a notion of completeness for which the answer is yes. Namely, for every nice complexity class (including P, NP, etc.), any two sets complete via fops are not only polynomial-time isomorphic, they are first-order isomorphic.

There are additional reasons to be interested in first-order computation. It was shown in [BIS] that first-order computation corresponds exactly to computation by uniform AC⁰ circuits (under a natural notion of uniformity). Although it is known that AC⁰ is properly contained in NP, knowing that a set A is complete for NP under polynomial-time (or logspace) reductions does not currently allow us to conclude that A is not in AC⁰; however, knowing that A is complete for NP under first-order reductions does allow us to make that conclusion.

First-order reducibility is a uniform version of the constant-depth reducibility studied in [FSS, CSV]; sometimes this uniformity is important. For a concrete example where first-order reducibility is used to provide a circuit lower bound, see [AG92].

Preliminary results and background on isomorphisms follow in Section 2. Definitions and background on descriptive complexity are found in Section 3. The main result is stated and proved in Section 4, and then we conclude with some related results and remarks about the structure of NP under first-order reducibilities.

¹ One way of quantifying this observation is that since Joseph and Young produced their unnatural NP-complete sets, Hartmanis has been referring to the isomorphism conjecture as the "Berman" conjecture.

2 Short History of the Isomorphism Conjecture

The Isomorphism Conjecture is analogous to Myhill's Theorem that all r.e.-complete sets are recursively isomorphic [Myh]. In this section we summarize some of the relevant background material. In the following, FP is the set of functions computable in polynomial time.
Definition 2.1 For A, B ⊆ Σ*, we say that A and B are p-isomorphic (A ≅_p B) iff there exists a bijection f ∈ FP with inverse f⁻¹ ∈ FP such that A ≤_m B via f (and therefore B ≤_m A via f⁻¹).

Observation 2.2 [BH77] All the NP-complete sets in [GJ] are p-isomorphic.

How did Berman and Hartmanis make their observation? They did it by proving a polynomial-time version of the Schröder-Bernstein Theorem. Recall:

Theorem 2.3 [Kel, Th. 20] Let A and B be any two sets. Suppose that there are 1:1 maps from A to B and from B to A. Then there is a 1:1 and onto map from A to B.

Proof. Let f : A → B and g : B → A be the given 1:1 maps. For simplicity assume that A and B are disjoint. For a, c ∈ A ∪ B, we say that c is an ancestor of a iff we can reach a from c by a finite, non-zero number of applications of the functions f and/or g. Now we can define a bijection h : A → B which applies either f or g⁻¹ according to whether a point has an odd number of ancestors or not:

    h(a) = g⁻¹(a)   if a has an odd number of ancestors
    h(a) = f(a)     if a has an even or infinite number of ancestors    □

The feasible version of the Schröder-Bernstein theorem is as follows:

Theorem 2.4 [BH77] Let A ≤_m B via f and B ≤_m A via g, where f and g are 1:1, length-increasing functions. Assume that f, f⁻¹, g, g⁻¹ ∈ FP, where f⁻¹, g⁻¹ are the inverses of f and g. Then A ≅_p B.

Proof. Let the ancestor chain of a string w be the path from w to w's parent, to w's grandparent, and so on. Ancestor chains are at most linear in length because f and g are length-increasing. Thus they can be computed in polynomial time. The theorem now follows as in the proof of Theorem 2.3.    □

Consider the following definition:

Definition 2.5 [BH77] We say that the language A has p-time padding functions iff there exist e, d ∈ FP such that:

1. For all w, x ∈ Σ*, w ∈ A ⟺ e(w, x) ∈ A.
2. For all w, x ∈ Σ*, d(e(w, x)) = x.
3. For all w, x ∈ Σ*, |e(w, x)| ≥ |w| + |x|.

As a simple example, the following is a padding function for SAT:

    e(w, x) = w ∧ c₁ ∧ c₂ ∧ ⋯ ∧ c_|x|

where cᵢ is (yᵢ ∨ ¬yᵢ) if the i-th bit of x is 1 and (yᵢ ∨ yᵢ) otherwise, and where the yᵢ are Boolean variables numbered higher than all the Boolean variables occurring in w. Then the following theorem follows from Theorem 2.4:

Theorem 2.6 [BH77] If A and B are NP-complete and have p-time padding functions, then A ≅_p B.

Finally, Observation 2.2 now follows from the following:

Observation 2.7 [BH77] All the NP-complete problems in [GJ] have p-time padding functions.

Hartmanis also extended the above work as follows: say that A has logspace padding functions if there are logspace-computable functions as in Definition 2.5.

Theorem 2.8 [Har] If A and B are NP-complete via logspace reductions and have logspace padding functions, then A and B are logspace isomorphic.

Proof. Since A and B have logspace padding functions, we can create functions f and g as in Theorem 2.4 that are length-squaring and computable in logspace.
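The ancestor-chain construction of Theorems 2.3 and 2.4 is concrete enough to execute. The sketch below is a minimal Python illustration under simplifying assumptions: the maps f and g are hypothetical stand-ins (f prepends "0", g prepends "1"), not reductions between real NP-complete sets. Both stand-ins are 1:1 and length-increasing, so every ancestor chain strictly shortens at each step, which is precisely why the bijection h is cheap to compute.

```python
# Sketch of the Schroeder-Bernstein bijection of Theorems 2.3/2.4,
# specialized to binary strings.  f and g below are hypothetical
# stand-ins for 1:1, length-increasing reductions f : A <=_m B and
# g : B <=_m A; they are NOT reductions between real NP-complete sets.

def f(w):          # stand-in reduction from the A-side to the B-side
    return "0" + w

def g(w):          # stand-in reduction from the B-side to the A-side
    return "1" + w

def f_inv(w):      # partial inverse of f (None if w is not in f's range)
    return w[1:] if w.startswith("0") else None

def g_inv(w):      # partial inverse of g (None if w is not in g's range)
    return w[1:] if w.startswith("1") else None

def num_ancestors(w):
    """Walk w's ancestor chain, alternating sides: a string viewed on
    the A-side has a parent only via g, one on the B-side only via f.
    The walk terminates because f and g are length-increasing, so each
    parent is strictly shorter -- the key to polynomial-time computability."""
    count, on_a_side = 0, True
    while True:
        parent = g_inv(w) if on_a_side else f_inv(w)
        if parent is None:
            return count
        w, on_a_side, count = parent, not on_a_side, count + 1

def h(w):
    """The bijection of Theorem 2.3: apply g^{-1} when w has an odd
    number of ancestors, and f otherwise (no chain is infinite here)."""
    return g_inv(w) if num_ancestors(w) % 2 == 1 else f(w)
```

Enumerating all binary strings up to a fixed length confirms the 1:1-and-onto claim for these toy maps: h is injective, and every shorter string has a preimage.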