Appears in Proceedings of the 25th Annual ACM Symposium on the Theory of Computing, ACM (1993).

Efficient Probabilistically Checkable Proofs and Applications to Approximation

M. Bellare∗   S. Goldwasser†   C. Lund‡   A. Russell§

May 1993

Abstract

We construct multi-prover proof systems for NP which use only a constant number of provers to simultaneously achieve low error, low randomness and low answer size. As a consequence, we obtain asymptotic improvements to approximation hardness results for a wide range of optimization problems including minimum set cover, dominating set, maximum clique, chromatic number, and quartic programming; and constant factor improvements on the hardness results for MAXSNP problems. In particular, we show that approximating minimum set cover within any constant is NP-complete; approximating minimum set cover within Θ(log n) implies NP ⊆ DTIME(n^{log log n}); approximating the maximum of a quartic program within any constant is NP-hard; approximating maximum clique within n^{1/30} implies NP ⊆ BPP; approximating chromatic number within n^{1/146} implies NP ⊆ BPP; and approximating MAX3SAT within 113/112 is NP-complete.

∗ Department of Computer Science & Engineering, Mail Code 0114, University of California at San Diego, 9500 Gilman Drive, La Jolla, CA 92093. E-mail: [email protected].
† MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139, USA. E-mail: [email protected]. Partially supported by NSF FAW grant No. 9023312-CCR, DARPA grant No. N00014-92-J-1799, and grant No. 89-00312 from the United States - Israel Binational Science Foundation (BSF), Jerusalem, Israel.
‡ AT&T Bell Laboratories, Room 2C324, 600 Mountain Avenue, P. O. Box 636, Murray Hill, NJ 07974-0636, USA. E-mail: [email protected].
§ MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139, USA. E-mail: [email protected]. Supported by an NSF Graduate Fellowship and by NSF grant 92-12184, AFOSR 89-0271, and DARPA N00014-92-J-1799.

1 Introduction

The last two years have witnessed major advances in classifying the complexity of approximating classical optimization problems [Co, FGLSS, AS, ALMSS, BR, FL, Be, Zu, LY1, LY2]. These works indicate that a problem P is hard to approximate by showing that the existence of a polynomial time algorithm that approximates problem P to within some factor Q would imply an unlikely conclusion like NP ⊆ DTIME(T(n)) or NP ⊆ RTIME(T(n)), with T polynomial or quasi-polynomial. These results are derived by reductions from interactive proofs: namely, by first characterizing NP as those languages which have efficient interactive proofs of membership, and second by reducing the problem of whether there exists an interactive proof for membership in L (L ∈ NP) to the problem of approximating the cost of an optimal solution to an instance of problem P.

Today such results are known for many important problems P, with values of Q and T which differ from problem to problem; for example, it was shown by [LY1] that approximating the size of the minimum set cover to within Θ(log N) implies NP ⊆ DTIME(n^{polylog n}), and it was shown by [FGLSS, AS, ALMSS] that for some constant c > 0 approximating the size of a maximum clique in a graph within factor n^c implies that P = NP.

The values of Q and T achieved depend on the efficiency parameters of the underlying proof system used in the reduction to optimization problem P. The precise manner in which Q and T depend on these parameters depends on the particular problem P and the particular reduction used.
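Stated schematically (this uniform formulation is added here for orientation and is not wording from the paper), such a reduction establishes an implication of the form

    a polynomial-time algorithm approximating P within factor Q exists  ⟹  NP ⊆ DTIME(T(n))  (or NP ⊆ RTIME(T(n))),

with, in the two examples above, (P, Q, T(n)) = (minimum set cover, Θ(log N), n^{polylog n}) for [LY1], and (P, Q) = (maximum clique, n^c) with the stronger conclusion P = NP for [FGLSS, AS, ALMSS].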
     r = r(n)                                  p = p(n)   a = a(n)                 ε = ε(n)     How (in a word)
(1)  O(log n)                                  2          O(1)                     1/2 + ε*     [ALMSS]+[FRS]+[Fe]
(2)  O(log n)                                  O(k(n))    O(1)                     2^{-k(n)}    O(k(n)) [CW, IZ]-style repetitions of (1)
(3)  O(k(n) log^2 n)                           2          O(k(n) log^2 n)          2^{-k(n)}    [FL]
(4)  O(k(n) log n) + poly(k(n), log log n)     4          poly(k(n), log log n)    2^{-k(n)}    This paper

Figure 1: Number of random bits (r), number of provers (p), answer size (a) and error probability (ε) in results of the form NP ⊆ MIP₁[r, p, a, q, ε]. Here k(n) is any function bounded above by O(log n) and ε* is any positive constant.

The goal of this paper is to improve the values of Q and T in such reductions. Thus we need to reduce the complexity of the underlying proof systems. Let us begin by seeing what are the proof systems in question and what within these proof systems are the complexity parameters we need to consider.

1.1 PCP and MIP

Several variants of the interactive proof model have been used to derive intractability of approximation results. We focus on two of them. The first is the (single round version of the) multi-prover model of Ben-Or, Goldwasser, Kilian and Wigderson [BGKW]. The second is the "oracle" model of Fortnow, Rompel and Sipser [FRS], renamed "probabilistically checkable proofs" by Arora and Safra [AS]. In each case, we may distinguish five parameters which we denote by r, p, a, q and ε (all are in general functions of the input length n). In a multi-prover proof these are, respectively, the number of random bits used by the verifier, the number of provers, the size of each prover's answer, the size of each of the verifier's questions to the provers, and the error probability. Correspondingly, in a probabilistically checkable proof these are the number of random bits used by the verifier, the number of queries to the oracle, the size of each of the oracle's answers, the size of each individual query, and the error probability. We denote by MIP₁[r, p, a, q, ε] and PCP[r, p, a, q, ε] the corresponding classes of languages.

Note that the total number of bits returned by the provers or oracle is pa; we will sometimes denote this quantity by t. In some applications the important parameter is the expected value of t. To capture this we define PCP′ as PCP except that the p parameter is the expected number of queries made by the verifier.

The parameters which are important in applications to approximation seem to be r, p, a, ε. However, as q is important in transformations of one type of proof system to another, we included it in the list.

We emphasize that the model (PCP or MIP₁) is not as important, in this context, as the values of the parameters p, r, a, ε. Although the parameterized versions of PCP and MIP₁ are not known to have equal language recognition power for a given r, p, a, ε,¹ most known reductions to approximation problems in one model are easily modified to work in the other as long as p, r, a, ε remain the same. Accordingly, we sometimes state results in terms of MIP₁ and sometimes in terms of PCP, and leave the translations to the reader.

We note that there may be a motivation to move to the PCP model when proving an approximation hardness result if one could prove results of the form NP ⊆ PCP[r, p, a, q, ε] which attain better values of the parameters than results of the form NP ⊆ MIP₁[r, p, a, q, ε].

¹ It is easy to see that MIP₁[r, p, a, q, ε] ⊆ PCP[r, p, a, q, ε], but the converse containment is not known to be true. When complexity is ignored the models are of course the same [FRS, BFL]. For explanations and more information we refer the reader to §2.

1.2 New Proof Systems for NP

Our main result is the construction of low complexity, low error proof systems for NP which have only a constant number of provers. In its most general form the result is the following.

Theorem 1.1 Let k(n) ≤ O(log n) be any function. Then NP ⊆ MIP₁[r, 4, a, q, 2^{-k(n)}], where r = O(k(n) log n) + poly(k(n), log log n), a = poly(k(n), log log n), and q = O(r).
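For orientation, here are two direct instantiations of Theorem 1.1; these particular settings of k(n) are illustrative substitutions into the theorem (logarithms base 2), not values singled out in the excerpt:

    k(n) = 3:       NP ⊆ MIP₁[O(log n) + poly(log log n), 4, poly(log log n), O(r), 1/8]
    k(n) = log n:   NP ⊆ MIP₁[polylog n, 4, polylog n, polylog n, 1/n]

Thus constant error already follows with logarithmic randomness and poly(log log n)-size answers, while inverse-polynomial error costs only polylogarithmic randomness, question size and answer size.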
The table of Figure 1 summarizes previous results of the form NP ⊆ MIP₁[r, p, a, q, ε] in comparison with ours. We omit the question size q from the table because it is in all cases O(r) and doesn't matter in reductions anyway. The main result of [ALMSS], which states that there is a constant t such that NP = PCP[O(log n), t, 1, O(log n), 1/2], is incorporated as the special case k(n) = 1 of the result shown in (2). The result shown in (1) is obtained as follows. First apply the transformation of [FRS] to the [ALMSS] result to get NP ⊆ MIP₁[O(log n), 2, t, O(log n), 1 − 1/(2t)] where t is the constant from [ALMSS] as above. Then apply [Fe] to bring the error to any constant strictly greater than 1/2, at constant factor cost in the other parameters. Comparing these results with ours, we note the following features, all of which are important to our applications.

[…] and quartic programming (using Theorem 1.1), and we have improvements in the factor Q for maximum clique, chromatic number, MAX3SAT, and quadratic programming (using Theorem 1.2).

Set Cover. For a definition of the problem we refer the reader to §4.1. Recall that there exists a polynomial time algorithm for approximating the size of the minimum set cover to within a factor of Θ(log N), […]
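The Θ(log N) upper bound referred to here is achieved by the classical greedy heuristic: at each step, choose the set covering the largest number of still-uncovered elements. A minimal sketch in Python, added here for illustration and not taken from the paper; the function name and the toy instance are hypothetical:

    # Greedy set cover: repeatedly pick the set covering the most still-uncovered
    # elements. This achieves an H_N <= ln N + 1 approximation factor, i.e. the
    # Theta(log N) guarantee mentioned in the text.
    def greedy_set_cover(universe, sets):
        """universe: a set of elements; sets: a list of sets whose union contains universe."""
        uncovered = set(universe)
        cover = []
        while uncovered:
            # choose the set that covers the most new elements
            best = max(sets, key=lambda s: len(s & uncovered))
            if not (best & uncovered):
                raise ValueError("the given sets do not cover the universe")
            cover.append(best)
            uncovered -= best
        return cover

    # Toy instance (illustrative): greedy returns a cover of size 2.
    universe = {1, 2, 3, 4, 5, 6}
    sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 5}]
    print(greedy_set_cover(universe, sets))  # -> [{1, 2, 3}, {4, 5, 6}]

The hardness results of this paper address the matching lower bound: by the abstract, approximating minimum set cover within any constant is NP-complete, and approximating it within Θ(log n) implies NP ⊆ DTIME(n^{log log n}), so the greedy factor cannot be improved substantially unless NP has slightly superpolynomial deterministic algorithms.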