Gems of Theoretical Computer Science

Total pages: 16

File type: PDF, size: 1020 KB

Gems of Theoretical Computer Science. Original German edition by Uwe Schöning; translated, revised, and expanded by Randall Pruim. Computer Science Monograph (English), August …. Springer-Verlag: Berlin, Heidelberg, New York, London, Paris, Tokyo, Hong Kong, Barcelona, Budapest.

Preface to the Original German Edition

In the summer semester of … at Universität Ulm, I tried out a new type of course, which I called "Theory Lab," as part of the computer science major program. As in an experimental laboratory, with written preparatory materials (including exercises) as well as materials for the actual meeting, in which an isolated research result is presented with its complete proof and all of its facets and false leads, the students were supposed to prove portions of the results themselves, or at least to attempt their own solutions. The goal was that the students understand and sense how theoretical research is done.

To this end I assembled a number of outstanding results (highlights, pearls, gems) from theoretical computer science and related fields, in particular those for which some surprising or creative new method of proof was employed. Furthermore, I chose several topics which don't represent a solution to an open problem, but which seem in themselves to be surprising or unexpected, or which place a well-known problem in a new, unusual context. This book is based primarily on the preparatory materials and worksheets which were prepared at that time for the students of my course, and it has been subsequently augmented with additional topics.

This book is not a textbook in the usual sense. In a textbook one pays attention to breadth and completeness within certain bounds; this comes, however, at the cost of depth. Therefore, in a textbook one finds too often, following the statement of a theorem, the phrase "The proof of this theorem would go beyond the scope of this book and must therefore be omitted." It is precisely this that we do not do here; on the contrary, we want to dig into the proofs, and hopefully enjoy it. The goal of this book is not to reach encyclopedic completeness but to pursue the pleasure of completely understanding a complex proof, with all of its clever insights. It is obvious that in such a pursuit complete treatment of the topics cannot possibly be guaranteed, and that the selection of topics must necessarily be subjective. The selected topics come from the areas of computability, logic, computational complexity, circuit theory, and algorithms.

Where is the potential reader for this book to be found? I believe he or she could be an active computer scientist or an advanced student, perhaps specializing in theoretical computer science, who works through various topics as an independent study, attempting to crack the exercises, and by this means learns the material on his or her own. I could also easily imagine portions of this book being used as the basis of a seminar, as well as to provide a simplified introduction into a potential topic for a Diplomarbeit, perhaps even for a dissertation.

A few words about the use of this book. A certain amount of basic knowledge of theoretical computer science (automata, languages, computability, complexity) and, for some topics, probability and graph theory is assumed, similar to what my students encounter prior to the Vordiplom. This is very briefly recapitulated in the preliminary chapter. The amount of knowledge assumed can vary greatly from topic to topic. The topics can be read and worked through largely independently of each other, so one can begin with any of them. Within a topic there are only occasional references to other topics in the book, and these are clearly noted. References to the literature, mostly articles from journals and conference proceedings, are made throughout the text at the place where they are cited. The global bibliography includes books which were useful to me in preparing this text and which can be recommended for further study or greater depth.

The numerous exercises are to be understood as an integral part of the text, and one should try to find one's own solution before looking up the solutions in the back of the book. However, if one initially wants to understand only the general outline of a result, one could skip over the solutions altogether at the first reading. Exercises which have a somewhat higher level of difficulty, but are certainly still doable, have been specially marked.

For proofreading the original German text and for various suggestions for improvement I want to thank Gerhard Buntrock, Volker Claus, Uli Hertrampf, Johannes Köbler, Christoph Meinel, Rainer Schuler, Thomas Thierauf, and Jacobo Torán. Christoph Karg prepared a preliminary version of Chapter … as part of a course paper.

Uwe Schöning

Preface to the English Edition

While I was visiting Boston University during the … academic year, I noticed a small book written in German on a shelf in Steve Homer's office. Curious, I borrowed it for my train ride home and began reading one of the chapters. I liked the style and format of the book so much that over the course of the next few months I frequently found myself reaching for it and working through one chapter or another. This was my introduction to Perlen der Theoretischen Informatik.

A few of my colleagues had also seen the book. They too found it interesting, but most of them did not read German well enough to read more than small portions of it enjoyably. I hope that the English version will rectify this situation, and that many will enjoy and learn from it as much as I enjoyed the German version.

The front matter of this book says that it has been "translated, revised and expanded." I should perhaps say a few words about each of these tasks. In translating the book, I have tried as much as possible to retain the feel of the original, which is somewhat less formal and impersonal than a typical textbook, yet relatively concise. I certainly hope that the pleasure of the pursuit of understanding has not gotten lost in the translation.

Most of the revisions to the book are quite minor. Some bibliography items have been added or updated, and a number of German sources have been deleted. The layout has been altered somewhat; in particular, references now occur systematically at the end of each chapter and are often annotated. This format makes it easier to find references to the literature while providing a place to tie up loose ends, summarize results, and point out extensions. Specific mention of the works cited at the end of each chapter is made informally, if at all, in the course of the presentation. Occasionally I have added or rearranged a paragraph, included an additional exercise, or elaborated on a solution, but for the most part I have followed the original quite closely. Where I spotted errors, I have tried to fix them; I hope I have corrected more than I have introduced.

While translating and updating this book, I began to consider adding some additional gems of my own. I am thankful to Uwe, my colleagues, and Hermann Engesser, the supervising editor at Springer-Verlag, for encouraging me to do so. In deciding which topics to add, I asked myself two questions: "What is missing?" and "What is new?" From the possible answers to each question I picked two new topics.

The introduction to average-case complexity presented in Topic … seemed to me to be a completion, more accurately a continuation, of some of the ideas from Topic …, where the term "average-case" is used in a somewhat different manner. It was an obvious gap to fill. The chapter on quantum computation (Topic …) covers material that is for the most part newer than the original book; indeed, several of the articles used to prepare it have not yet appeared in print. I considered covering Shor's quantum factoring algorithm, either instead or additionally, but decided that Grover's search algorithm provided a gentler introduction to quantum computation for those who are new to the subject. I hope interested readers will find Shor's algorithm easier to digest after having worked through the results presented here. No doubt there are many other eligible topics for this book, but one must stop somewhere.

For reading portions of the text and providing various suggestions for improvement I want to thank Drue Coles, Judy Goldsmith, Fred Green, Steve Homer, Steve Kautz, Luc Longpré, Chris Pollett, Marcus Schaefer, and Martin Strauss, each of whom read one or more chapters. I also want to thank my wife, Pennylyn Dykstra-Pruim, who in addition to putting up with my long and sometimes odd hours also proofread the manuscript; her efforts improved its style and reduced the number of typographical and grammatical errors. Finally, many thanks go to Uwe Schöning for writing the original book and collaborating on the English edition.

Randall Pruim, July …

Table of Contents

• Fundamental Definitions and Results
• The Priority Method
• Hilbert's Tenth Problem
• The Equivalence Problem for LOOP(…)- and LOOP(…)-Programs
• The Second LBA Problem
• LOGSPACE, Random Walks on Graphs, and Universal Traversal Sequences
• Exponential Lower Bounds for the Length of Resolution Proofs
• Spectral Problems and Descriptive Complexity Theory
• Kolmogorov Complexity, the Universal Distribution, and Worst-Case vs. Average-Case
• Lower Bounds via Kolmogorov Complexity
• PAC-Learning and Occam's Razor
• Lower …
Recommended publications
  • 1. The Self-Reducibility Technique
    1. The Self-Reducibility Technique. A set S is sparse if it contains at most polynomially many elements at each length, i.e.,

    (∃ polynomial p)(∀n) [ ||{x : x ∈ S ∧ |x| = n}|| ≤ p(n) ].   (1.1)

    This chapter studies one of the oldest questions in computational complexity theory: Can sparse sets be NP-complete? As we noted in the Preface, the proofs of most results in complexity theory rely on algorithms, and the proofs in this chapter certainly support that claim. In Sect. 1.1, we will use a sequence of increasingly elaborate deterministic tree-pruning and interval-pruning procedures to show that sparse sets cannot be ≤^p_m-complete, or even ≤^p_btt-hard, for NP unless P = NP. (The appendices contain definitions of and introductions to the reduction types, such as ≤^p_m and ≤^p_btt, and the complexity classes, such as P and NP, that are used in this book.) Section 1.2 studies whether NP can have ≤^p_T-complete or ≤^p_T-hard sparse sets. P^{NP[O(log n)]} denotes the class of languages that can be accepted by some deterministic polynomial-time Turing machine allowed at most O(log n) queries to some NP oracle. In Sect. 1.2 we will prove, via binary search, self-reducibility algorithms, and nondeterministic algorithms, that sparse sets cannot be ≤^p_T-complete for NP unless the polynomial hierarchy collapses to P^{NP[O(log n)]}, and that sparse sets cannot be ≤^p_T-hard for NP unless the polynomial hierarchy collapses to NP^NP.
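    The sparseness condition (1.1) is simply a census bound, and it can be checked directly in code on a finite sample of a set. The sample sets `S` and `dense`, the polynomial `p`, and the helper `is_sparse_up_to` below are illustrative assumptions, not taken from the chapter:

    ```python
    from collections import Counter

    def is_sparse_up_to(strings, p, max_len):
        # Census check from (1.1): at most p(n) members at each length n <= max_len.
        census = Counter(len(x) for x in strings)
        return all(census[n] <= p(n) for n in range(max_len + 1))

    # A tally-like sample set (one string per length) satisfies the bound p(n) = n + 1.
    S = {"0" * n for n in range(1, 8)}
    print(is_sparse_up_to(S, lambda n: n + 1, 10))        # True

    # All 16 strings of length 4 violate p(4) = 5, so the census check fails.
    dense = {format(i, "04b") for i in range(16)}
    print(is_sparse_up_to(dense, lambda n: n + 1, 10))    # False
    ```

    For an infinite set the condition is of course a statement about all lengths; the sketch only illustrates the counting involved.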
  • Avoiding Simplicity Is Complex∗
    Avoiding Simplicity is Complex∗. Eric Allender, Department of Computer Science, Rutgers University, Piscataway, NJ 08855, USA, [email protected]. Holger Spakowski, Department of Mathematics & Applied Mathematics, University of Cape Town, Rondebosch 7701, South Africa, [email protected]. May 16, 2011.

    Abstract. It is a trivial observation that every decidable set has strings of length n with Kolmogorov complexity log n + O(1), if it has any strings of length n at all. Things become much more interesting when one asks whether a similar property holds when one considers resource-bounded Kolmogorov complexity. This is the question considered here: Can a feasible set A avoid accepting strings of low resource-bounded Kolmogorov complexity, while still accepting some (or many) strings of length n? More specifically, this paper deals with two notions of resource-bounded Kolmogorov complexity: Kt and KNt. The measure Kt was defined by Levin more than three decades ago and has been studied extensively since then. The measure KNt is a nondeterministic analog of Kt. For all strings x, Kt(x) ≥ KNt(x); the two measures are polynomially related if and only if NEXP ⊆ EXP/poly [6]. Many longstanding open questions in complexity theory boil down to the question of whether there are sets in P that avoid all strings of low Kt complexity. For example, the EXP vs. ZPP question is equivalent to (one version of) the question of whether avoiding simple strings is difficult: EXP ≠ ZPP if and only if there exist ε > 0 and a “dense” set in P having no strings x with Kt(x) ≤ ε|x| [5].
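    The "trivial observation" in the abstract's first sentence has a one-screen illustration: for any fixed decider, the lexicographically first length-n member of the set is completely determined by n alone, so it is described by roughly log n bits plus a constant-size program. The decider `decide` below is an arbitrary stand-in for any decidable set, not something from the paper:

    ```python
    from itertools import product

    def decide(x):
        # Stand-in decision procedure for a sample decidable set A:
        # strings with an even number of 1s (any decider would do).
        return x.count("1") % 2 == 0

    def first_member_of_length(n):
        # Decoder: given only n (about log n bits), recover a specific
        # length-n string of A by enumerating in lexicographic order.
        # Relative to the fixed decider, this shows the string has
        # Kolmogorov complexity log n + O(1).
        for bits in product("01", repeat=n):
            x = "".join(bits)
            if decide(x):
                return x
        return None  # A contains no strings of this length

    print(first_member_of_length(4))  # 0000
    ```

    Note that this enumeration takes exponential time, which is exactly why the resource-bounded measures Kt and KNt behave so differently from plain Kolmogorov complexity.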
  • The Strong Exponential Hierarchy Collapses*
    Journal of Computer and System Sciences 39, 299–322 (1989). The Strong Exponential Hierarchy Collapses*. Lane A. Hemachandra, Department of Computer Science, Cornell University, Ithaca, New York 14853. Received April 20, 1988; revised October 15, 1988.

    Composed of the levels E (i.e., ⋃_c DTIME[2^{cn}]), NE, P^NE, NP^NE, etc., the strong exponential hierarchy is an exponential-time analogue of the polynomial-time hierarchy. This paper shows that the strong exponential hierarchy collapses to P^NE, its Δ₂ level:

    P^NE = NP^NE = NP^{NP^NE} = ···

    The proof stresses the use of partial census information and the exploitation of nondeterminism. Extending these techniques, we derive new quantitative relativization results: if the weak exponential hierarchy's Δ_{k+1} and Σ_{k+1} levels, respectively E^{Σ^p_k} and NE^{Σ^p_k}, do separate, this is due to the large number of queries NE makes to its Σ^p_k database. Our techniques provide a successful method of proving the collapse of certain complexity classes. © 1989 Academic Press, Inc.

    1. Introduction. 1.1. Background. During 1986 and 1987, many structurally defined complexity hierarchies collapsed. The proofs of collapse shared a technique, the use of census functions, and shared a background, the previous work on quantitative relativizations of Mahaney [Mah82], Book, Long, and Selman [BLS84], and Long [Lon85]. These three papers use the idea that if you know the strings in a set, you implicitly know the strings in the complement. Thus, Mahaney shows:

    Theorem 1.1 [Mah82]. If NP has a sparse Turing-complete set (i.e., if there is a sparse set S ∈ NP such that NP ⊆ P^S), then the polynomial hierarchy collapses to P^NP.

    He proves this by showing that for any polynomial p(·), a P^NP machine can, on input x, explicitly find all strings of length at most p(|x|) that are in S.
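    The enumeration step in Mahaney's argument, in which a machine recovers every short string of a sparse set by asking an oracle about prefixes, can be sketched concretely. The hidden set `S`, the bound `MAX_LEN`, and the oracle stand-in `extends` below are all hypothetical, and the NP oracle is simulated by direct lookup:

    ```python
    S = {"01", "0110", "111"}   # hidden sparse set, known only through the oracle
    MAX_LEN = 4

    def extends(w):
        # Oracle stand-in for the NP query "does some member of S extend prefix w?"
        return any(x.startswith(w) for x in S)

    def find_all():
        # Prefix search: expand only prefixes the oracle confirms are live.
        # Sparseness of S keeps the set of live prefixes, and hence the
        # number of oracle calls, polynomially bounded -- the census idea
        # behind the collapse proofs discussed above.
        found, frontier = [], [""]
        while frontier:
            w = frontier.pop()
            if not extends(w):
                continue            # dead branch: no member of S extends w
            if w in S:              # membership is itself one more oracle query
                found.append(w)
            if len(w) < MAX_LEN:
                frontier.extend([w + "0", w + "1"])
        return sorted(found)

    print(find_all())  # ['01', '0110', '111']
    ```

    Each string of S keeps at most |S| · MAX_LEN prefixes alive, which is where the polynomial query bound comes from when S is sparse.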
  • Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems
    Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems. Lijie Chen, MIT, Cambridge, MA, USA. Dylan M. McKay, MIT, Cambridge, MA, USA. Cody D. Murray, No Affiliation. R. Ryan Williams, MIT, Cambridge, MA, USA.

    Abstract. A frontier open problem in circuit complexity is to prove P^NP ⊄ SIZE[n^k] for all k; this is a necessary intermediate step towards NP ⊄ P/poly. Previously, for several classes containing P^NP, including NP^NP, ZPP^NP, and S₂P, such lower bounds have been proved via Karp-Lipton-style theorems: to prove C ⊄ SIZE[n^k] for all k, we show that C ⊂ P/poly implies a “collapse” D = C for some larger class D, where we already know D ⊄ SIZE[n^k] for all k. It seems obvious that one could take a different approach to prove circuit lower bounds for P^NP that does not require proving any Karp-Lipton-style theorems along the way. We show this intuition is wrong: (weak) Karp-Lipton-style theorems for P^NP are equivalent to fixed-polynomial size circuit lower bounds for P^NP. That is, P^NP ⊄ SIZE[n^k] for all k if and only if (NP ⊂ P/poly implies PH ⊂ i.o.-P^NP/n). Next, we present new consequences of the assumption NP ⊂ P/poly, towards proving similar results for NP circuit lower bounds. We show that under the assumption, fixed-polynomial circuit lower bounds for NP, nondeterministic polynomial-time derandomizations, and various fixed-polynomial time simulations of NP are all equivalent. Applying this equivalence, we show that circuit lower bounds for NP imply better Karp-Lipton collapses.