Theory of Computation Chapter 1


Chapter 8: Memory, Paths, and Games
slides © 2019, David Doty
ECS 220: Theory of Computation, based on “The Nature of Computation” by Moore and Mertens

Space versus time
PSPACE = problems decidable with polynomial memory.
You can reuse space, but you can’t reuse time. This leads to unintuitive results:
• nondeterminism doesn’t help space-bounded computation: PSPACE = NPSPACE
• proving something doesn’t exist is as easy as proving it exists: NPSPACE = coNPSPACE
But some intuitions hold:
• with more space, you can compute more: SPACE(o(t)) ⊊ SPACE(t)
• if time is bounded, space is also bounded: TIME(t) ⊆ SPACE(t)
• if space is bounded, time is also bounded: SPACE(t) ⊆ TIME(2^O(t))
Biggest open question: does space help more than time? P ≠ PSPACE?

8.1: Welcome to the State Space

Read-only and write-only memory
• Sublinear-time computation is largely uninteresting for Turing machines.
• It is somewhat interesting for RAM machines, e.g., binary search.
• Sublinear-space computation makes more sense, e.g., for searching the graph G=(V,E) where V = {all web pages}: we cannot load the input into memory.
• Formalized by giving the Turing machine/RAM machine a read-only input and a read-write working memory; only the latter counts as space usage.
• To talk about writing more output than the allowed space usage, use a write-only output. (Irrelevant for Boolean output, but not for space-bounded reductions.)
• The textbook assumes a RAM (random access memory) in this chapter:
  • given input/working memory location i, a read/write to location i takes one step (this only matters if we also care about time)
  • note: if the size of the input/working memory is k, it takes log(k) bits to write i

Space-bounded complexity classes
• SPACE(s(n)) = class of problems solvable with O(s(n)) working memory on inputs of size n
• L = SPACE(log n)
• PSPACE = ⋃_{c ∈ ℕ} SPACE(n^c)

Logarithmic space example: deciding palindromes

    def palindrome(x):
        i = 1
        j = |x|
        while i < j:
            if x[i] != x[j]:
                return False
            i += 1
            j -= 1
        return True

Space complexity: log n bits for i, log n bits for j.

Polynomial space example: checking whether a configuration is in a periodic orbit of a cellular automaton

    def periodic_orbit(ca, init):
        n = |init|
        x = init
        for j = 1..2^n:
            x = update(ca, x)
            if x = init: return True
        return False

    def update(ca, x):
        y = ""
        for i = 0..|x|-1:
            l = x[(i-1) mod |x|]
            m = x[i]
            r = x[(i+1) mod |x|]
            y.append(ca(l, m, r))
        return y

Memory: init, x, y, j (n bits each) and i (log n bits).
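To make the cellular-automaton example concrete, here is a runnable Python sketch. It assumes (a choice made here for illustration, not fixed by the slides) that the automaton is an elementary CA given by a Wolfram rule number, applied on a ring exactly as in update above; periodic_orbit then applies update up to 2^n times and reports whether the initial configuration recurs.

    def update(rule, x):
        # one synchronous step of an elementary CA on a ring: the new value of
        # cell i is bit (l*4 + m*2 + r) of the 8-bit Wolfram rule number
        n = len(x)
        return [(rule >> (x[(i - 1) % n] * 4 + x[i] * 2 + x[(i + 1) % n])) & 1
                for i in range(n)]

    def periodic_orbit(rule, init):
        # does the configuration init recur, i.e., lie on a periodic orbit?
        n = len(init)
        x = list(init)
        for _ in range(2 ** n):   # at most 2^n distinct length-n configurations,
                                  # so any recurrence happens within 2^n steps
            x = update(rule, x)
            if x == init:
                return True
        return False

    # rule 170 copies each cell's right neighbor, so a single live cell just
    # circulates around the ring and the start configuration recurs: True
    print(periodic_orbit(170, [0, 0, 0, 1, 0, 0, 0, 0]))
    # rule 90 (XOR of the two neighbors) sends this ring to all zeros: False
    print(periodic_orbit(90, [0, 0, 0, 1, 0, 0, 0, 0]))

As on the slide, the dominant memory cost is a few length-n configurations plus the step counter, while the loop may run for 2^n steps, so the example is polynomial-space but exponential-time.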
Time bounds versus space bounds
Assumption: a program allocates O(1) bits per time step. Then O(t(n)) bits are allocated in total, so the space bound is s(n) = O(t(n)):
    TIME(t) ⊆ SPACE(t), so P ⊆ PSPACE.
If a program uses O(s(n)) bits, it has at most 2^O(s(n)) configurations. If it ever repeats one, it runs forever, so if it halts on all inputs, t(n) = 2^O(s(n)):
    SPACE(s) ⊆ TIME(2^O(s)), so L ⊆ P ⊆ PSPACE ⊆ EXP.

Nondeterministic time versus deterministic space
Recall NTIME(t) ⊆ TIME(2^O(t)) (e.g., NP ⊆ EXP).

    def exhaustive_search_A(x):
        n = |x|
        for each w in {0,1}^(≤ witness-length(n)):
            if V_A(x, w) = True:
                return True
        return False

Memory used: witness-length(n) ≤ t(n), and space(V_A) ≤ t(n).
    NTIME(t) ⊆ SPACE(t), so NP ⊆ PSPACE.

Putting all relationships together
    L ⊆ P ⊆ NP ⊆ PSPACE ⊆ EXP ⊆ NEXP ⊆ EXPSPACE
• P ⊊ EXP (Time Hierarchy Theorem)
• NP ⊊ NEXP (Nondeterministic Time Hierarchy Theorem)
• L ⊊ PSPACE ⊊ EXPSPACE
• Space Hierarchy Theorem: if s1 = o(s2), then SPACE(s1) ⊊ SPACE(s2)

8.2: Show Me the Way

Nondeterministic space-bounded computation
• The textbook goes through some “prover/verifier” formulations. Key difference with NP: witnesses can have exponential length, for example the sequence of moves in a sliding-block puzzle (Hua Rong Dao, pictured from Wikipedia) or a chess game.
• I prefer the “nondeterministic program” formulation:
  • NSPACE(s) = problems solvable by a nondeterministic program using space O(s(n)) on inputs of size n
  • [correct answer = yes] ⇒ [some computation path accepts]
  • [correct answer = no] ⇒ [no computation path accepts]
  • NL = NSPACE(log n)
  • NPSPACE = ⋃_{c ∈ ℕ} NSPACE(n^c)

Prover/verifier characterization of NSPACE(s)
• verifier algorithm V
• input x
• witness/proof w
  • |w| is arbitrary
• V has read-only access to both x and w:
  • RAM access to x
  • sequential access to the bits of w from left to right (like a DFA) (*)
• x is a yes-instance ⇔ (∃w) V(x, w) accepts
• Why (*)? Otherwise we could encode NP-complete problems using only logarithmic space; e.g., HAMPATH would be in NL, so we would have NL = NP.

Reachability
• REACHABILITY:
  • Given: directed graph G=(V,E) and two nodes s, t ∈ V
  • Question: is there a path from s to t in G?
• Claim: REACHABILITY ∈ NL. Why?

    def reachable(G=(V,E), s, t):
        u = s
        num_searched = 1
        while u != t:
            v = guess neighbor of u
            u = v
            if num_searched = |V|: return False
            num_searched += 1
        return True

Memory needed: u, v, num_searched (log |V| bits each).

The long computational reach of REACHABILITY
• If our nondeterministic program has space s, we can search graphs of size up to 2^s (i.e., internet-sized graphs).
• Flip this around: every nondeterministic program is defined completely by its configuration reachability graph:
  • V = set of configurations of the program (state of memory)
  • (u,v) ∈ E iff there is a nondeterministic transition from u to v
  • we can assume a single accepting configuration a (the TM erases all tapes before halting)
  • deterministic programs have a line graph; nondeterministic programs are more general
• So every problem in NSPACE(s) is equivalent to a REACHABILITY problem on a graph of size 2^O(s): given input x with starting configuration c_x, can we reach a from c_x?
• REACHABILITY on G=(V,E) is solvable using DFS in time O(|V|+|E|) = O(2^O(s) + 2^O(s)), assuming O(1) degree.
    NSPACE(s) ⊆ TIME(2^O(s)), so NL ⊆ P and NPSPACE ⊆ EXP.
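For contrast with the guessing-based logspace search above, here is a deterministic reachability check, a minimal sketch assuming the graph is encoded as a Python adjacency-list dict (an encoding chosen here, not by the slides). It runs in time O(|V|+|E|), matching the DFS bound just quoted, but the visited set needs about one entry per reachable node, i.e., linear space; Savitch's theorem later in the chapter cuts that to O(log^2 |V|) at the cost of more time.

    from collections import deque

    def reachable(adj, s, t):
        # deterministic BFS: time O(|V| + |E|), but the visited set
        # stores every node discovered so far
        visited = {s}
        frontier = deque([s])
        while frontier:
            u = frontier.popleft()
            if u == t:
                return True
            for v in adj.get(u, ()):
                if v not in visited:
                    visited.add(v)
                    frontier.append(v)
        return False

    # tiny example: 0 -> 1 -> 2, and node 3 is unreachable from 0
    adj = {0: [1], 1: [2], 2: [], 3: [0]}
    print(reachable(adj, 0, 2), reachable(adj, 0, 3))   # True False

On a configuration graph of size 2^O(s) this same search is exactly what gives the NSPACE(s) ⊆ TIME(2^O(s)) bound above.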
8.3: L and NL-completeness

NL-completeness
• Previous slide:
  • REACHABILITY ∈ NL
  • every problem in NL is equivalent to a REACHABILITY problem on a polynomial-size graph
• We’ll define NL-completeness, and show REACHABILITY is NL-complete.
• L = NL ⇔ REACHABILITY ∈ L.

Logspace reductions
• A reduction f: {0,1}* → {0,1}* from A to B is logspace if it is computable by an O(log n)-space-bounded program. Write A ≤L B.
  • input is read-only
  • output is write-only
  • worktape is read/write; only the worktape counts against space usage
• Most reductions used in NP-completeness proofs are logspace.
  • e.g., to reduce CLIQUE to INDEPENDENT-SET, to determine whether to add edge {u,v} to the output, one need only ask whether {u,v} is an edge in the input graph
• B is NL-complete if B ∈ NL and B is NL-hard: for all A ∈ NL, A ≤L B.
• Claim: logspace reductions are transitive: A ≤L B and B ≤L C ⇒ A ≤L C. Why? The composed reduction can be very slow: lots of recomputation of already-computed bits, just to save space.

First NL-complete problem
• NL-WITNESS-EXISTENCE:
  • Given: nondeterministic program P, input x, integer k in unary (string 1^k)
  • Question: is there a sequence of guesses P(x) can make so that it accepts while using at most log k bits of memory?
• NL-WITNESS-EXISTENCE is NL-hard: for any A ∈ NL, decided by a c·log(n)-space-bounded program P, to reduce A to NL-WITNESS-EXISTENCE, on input x output (P, x, k), where k = n^c. (The reduction needs to count how many 1’s it has written; this takes log(n^c) = c·log n bits to store.)
• NL-WITNESS-EXISTENCE ∈ NL: a nondeterministic program Q deciding whether (P, x, k) ∈ NL-WITNESS-EXISTENCE runs P(x), checking that its space usage never exceeds log k. Since k is given in unary, Q uses space log k ≤ log n, where n = |(P, x, k)|.

REACHABILITY is NL-complete
• reduction showing NL-WITNESS-EXISTENCE ≤L REACHABILITY
• input (P, x, k), output (G, s, t). What are G, s, and t?
• want: P(x) accepts using ≤ log k space ⇔ there is a path from s to t in G
• G = k-space-bounded configuration reachability graph of P
  • V = { configurations of P using ≤ log k space }
  • E = { (u,v) | P goes from u to v in one step }
  • s = starting configuration c_x on input x
  • t = accepting configuration

8.4: Middle-first search and nondeterministic space

Simulating nondeterminism deterministically
• Simulating nondeterminism seems to incur exponential time overhead:
  • NTIME(t) ⊆ TIME(2^O(t)), and we don’t know how to do better with time.
• We can do much better with space, incurring only a quadratic overhead.
• Savitch’s Theorem: for any s(n) ≥ log n, NSPACE(s) ⊆ SPACE(s^2).
  • Corollary: NPSPACE = PSPACE
  • Corollary: NL ⊆ SPACE(log^2 n) (polylogarithmic, but not logarithmic)
• But the Space Hierarchy Theorem says L = SPACE(log n) ≠ SPACE(log^2 n), so we still don’t know whether L = NL.

REACHABILITY ∈ SPACE(log^2 n)
• BFS and DFS use linear memory: to avoid visiting the same node twice, they must store all the visited nodes.
• Savitch’s algorithm (“middle-first search”) instead visits nodes repeatedly. Let s, t ∈ V and k > 0.
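As a rough illustration of the middle-first idea (a sketch in runnable Python, not the slides’ own pseudocode, again assuming an adjacency-list dict): t is reachable from s in at most k steps iff some midpoint w is reachable from s in at most half the steps and t is reachable from w in the remaining steps, and a deterministic program can simply try every w, reusing the same workspace for both recursive calls.

    def savitch_reachable(adj, nodes, s, t):
        def path(u, v, k):
            # is there a path from u to v of length at most k?
            if k == 0:
                return u == v
            if k == 1:
                return u == v or v in adj.get(u, ())
            # middle-first: deterministically try every possible midpoint,
            # reusing the same space for the two half-length subproblems
            return any(path(u, w, k // 2) and path(w, v, k - k // 2)
                       for w in nodes)
        return path(s, t, len(nodes))

    adj = {0: [1], 1: [2], 2: [], 3: [0]}
    print(savitch_reachable(adj, [0, 1, 2, 3], 0, 2),
          savitch_reachable(adj, [0, 1, 2, 3], 0, 3))   # True False

The recursion depth is O(log k) and each frame stores only a constant number of node names and counters, which is where the O(log^2 n) space bound comes from; the price is time, since midpoints are retried at every level, the chapter’s recurring trade of recomputation for space.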