Lecture 18 1 History Of
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
- COMPLEXITY THEORY, Lecture 13: Space Hierarchy and Gaps
COMPLEXITY THEORY, Lecture 13: Space Hierarchy and Gaps. Markus Krötzsch, Knowledge-Based Systems, TU Dresden, 26th Dec 2018.

Review: Time Hierarchy Theorems. Time Hierarchy Theorem 12.12: if f, g : N → N are such that f is time-constructible and g · log g ∈ o(f), then DTime_*(g) ⊊ DTime_*(f). Nondeterministic Time Hierarchy Theorem 12.14: if f, g : N → N are such that f is time-constructible and g(n+1) ∈ o(f(n)), then NTime_*(g) ⊊ NTime_*(f). In particular, we find that P ⊊ ExpTime and NP ⊊ NExpTime within the chain L ⊆ NL ⊆ P ⊆ NP ⊆ PSpace ⊆ ExpTime ⊆ NExpTime ⊆ ExpSpace.

A Hierarchy for Space. For space, we can always assume a single working tape: tape reduction leads to a constant-factor increase in space, and constant factors can be eliminated by space compression; therefore DSpace_k(f) = DSpace_1(f). Space turns out to be easier to separate than time. Space Hierarchy Theorem 13.1: if f, g : N → N are such that f is space-constructible and g ∈ o(f), then DSpace(g) ⊊ DSpace(f). Proof: again, we construct a diagonalisation machine D, a multi-tape TM defined for inputs of the form ⟨M, w⟩ (other cases do not matter), assuming |⟨M, w⟩| = n. D computes f(n) in unary to mark the available space on the working tape, initialises a separate countdown tape with the largest binary number that can be written in f(n) space, simulates M on ⟨M, w⟩ while making sure that only previously marked tape cells are used, and time-bounds the simulation using the content of the countdown tape. Challenge: TMs can run forever even within bounded space.
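The closing challenge is exactly what the countdown tape addresses: a machine confined to f(n) cells has only finitely many configurations, so any run longer than that bound must repeat a configuration and loop forever. A minimal sketch of the counting argument (simplified to the work tape alone; the state count, alphabet size, and cell budget below are illustrative numbers, not from the slides):

    def config_bound(num_states: int, tape_symbols: int, space: int) -> int:
        """Crude upper bound on the distinct configurations of a machine
        using at most 'space' work-tape cells: a configuration is a control
        state, a head position, and a tape content. A deterministic machine
        that runs more steps than this must repeat a configuration, i.e.
        loop forever, so the diagonaliser D can cut the simulation off once
        its countdown expires without ever changing the answer.
        """
        return num_states * space * tape_symbols ** space

    # e.g. a 10-state machine over {0, 1, blank} confined to 8 cells:
    print(config_bound(10, 3, 8))  # 524880: finite, so longer runs loop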
- CS601 DTIME and DSPACE, Lecture 5: Time and Space Functions t, s
CS601, Lecture 5: DTIME and DSPACE. Time and space functions t, s : N → N+.

Definition 5.1: A set A ⊆ U is in DTIME[t(n)] iff there exist a deterministic multi-tape TM M and a constant c such that (1) A = L(M) ≡ {w ∈ U | M(w) = 1}, and (2) for all w ∈ U, M(w) halts within c · t(|w|) steps.

Definition 5.2: A set A ⊆ U is in DSPACE[s(n)] iff there exist a deterministic multi-tape TM M and a constant c such that (1) A = L(M), and (2) for all w ∈ U, M(w) uses at most c · s(|w|) work-tape cells. (The input tape is read-only and not counted as space used.)

Example: PALINDROMES ∈ DTIME[n], DSPACE[n]. In fact, PALINDROMES ∈ DSPACE[log n]. [Exercise]

Definition 5.3: f : U → U is in F(DTIME[t(n)]) iff there exist a deterministic multi-tape TM M and a constant c such that (1) f = M(·); (2) for all w ∈ U, M(w) halts within c · t(|w|) steps; and (3) |f(w)| ≤ |w|^O(1), i.e., f is polynomially bounded. Definition 5.4: f : U → U is in F(DSPACE[s(n)]) iff the same holds with condition (2) replaced by: M(w) uses at most c · s(|w|) work-tape cells. (The input tape is read-only; the output tape is write-only.)
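The DSPACE[log n] claim for PALINDROMES is easy to make concrete: only two cursor positions need to be stored, and each takes O(log n) bits. A small illustration (an ordinary program rather than a Turing machine, so it only models the work-tape accounting):

    def is_palindrome_logspace(w: str) -> bool:
        """Decide PALINDROMES keeping only two index counters.

        The input is treated as a read-only tape; the working storage is
        just the indices i and j, each O(log n) bits, which mirrors the
        DSPACE[log n] bound stated as an exercise in the lecture.
        """
        i, j = 0, len(w) - 1
        while i < j:
            if w[i] != w[j]:
                return False
            i += 1
            j -= 1
        return True

    assert is_palindrome_logspace("racecar")
    assert not is_palindrome_logspace("complexity")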
- pcpchap.pdf (Arora and Barak, Computational Complexity: A Modern Approach, Chapter 11)
Computational Complexity: A Modern Approach, by Sanjeev Arora and Boaz Barak, Princeton University. http://www.cs.princeton.edu/theory/complexity/ ([email protected]). Not to be reproduced or distributed without the authors' permission.

Chapter 11: PCP Theorem and Hardness of Approximation: An Introduction.

"...most problem reductions do not create or preserve such gaps... To create such a gap in the generic reduction (cf. Cook)... also seems doubtful. The intuitive reason is that computation is an inherently unstable, non-robust mathematical object, in the sense that it can be turned from non-accepting to accepting by changes that would be insignificant in any reasonable metric." (Papadimitriou and Yannakakis [PY88])

This chapter describes the PCP Theorem, a surprising discovery of complexity theory with many implications for algorithm design. Since the discovery of NP-completeness in 1972, researchers had mulled over the issue of whether we can efficiently compute approximate solutions to NP-hard optimization problems. They failed to design such approximation algorithms for most problems (see Section 1.1 for an introduction to approximation algorithms). They then tried to show that computing approximate solutions is also hard, but apart from a few isolated successes this effort also stalled. Researchers slowly began to realize that Cook-Levin-Karp style reductions do not suffice to prove any limits on approximation algorithms (see the above quote from an influential Papadimitriou-Yannakakis paper that appeared a few years before the discoveries described in this chapter). The PCP theorem, discovered in 1992, gave a new definition of NP and provided a new starting point for reductions.
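For reference (standard background, not taken from the excerpt itself), the theorem the chapter builds toward is usually written as follows, where PCP(r(n), q(n)) is the class of languages with probabilistically checkable proofs whose verifier uses O(r(n)) random bits and queries O(q(n)) proof bits:

    \[
      \mathsf{NP} \;=\; \mathsf{PCP}\bigl(\log n,\; 1\bigr)
    \]

An equivalent "hardness of approximation" form: there is a constant ε > 0 such that it is NP-hard to distinguish satisfiable 3-CNF formulas from those in which every assignment satisfies at most a (1 - ε) fraction of the clauses.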
- CMPUT 675, Lecture 19-20 (Nov 15 & 17, 2011): Hardness of Approximation, PCP Theorem
CMPUT 675: Approximation Algorithms, Fall 2011. Lecture 19-20 (Nov 15 & 17, 2011): Hardness of Approximation, PCP Theorem. Lecturer: Mohammad R. Salavatipour. Scribe: based on older notes.

19.1 Hardness of Approximation. So far we have mostly been designing approximation algorithms and proving upper bounds. From now until the end of the course we will be proving lower bounds (i.e., hardness of approximation). We are familiar with the theory of NP-completeness: when we prove that a problem is NP-hard, it implies that, assuming P ≠ NP, there is no polynomial-time algorithm that solves the problem exactly. For example, for SAT, deciding between yes and no instances is hard (again assuming P ≠ NP). We would like to show that even deciding between instances that are (almost) satisfiable and instances that are far from satisfiable is hard; in other words, we want to create a gap between yes-instances and no-instances. Such gaps imply hardness of approximation for the optimization versions of NP-hard problems. In fact, the PCP theorem (which we will see next lecture) is equivalent to the statement that MAX-SAT is NP-hard to approximate within a factor of (1 + ε) for some fixed ε > 0. Most hardness-of-approximation results rely on the PCP theorem. For example, to prove hardness for vertex cover, the PCP theorem yields the following reduction: given a formula ϕ for SAT, we can build a graph G(V, E) in polynomial time such that:

• if ϕ is a yes-instance, then G has a vertex cover of size ≤ (2/3)|V|;
• if ϕ is a no-instance, then every vertex cover of G has size > α · (2/3)|V| for some fixed α > 1.
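Why such a gap kills approximation: any polynomial-time ρ-approximation for vertex cover with ρ < α would let us decide SAT outright. A sketch of that argument as code, where reduce_to_graph and approx_vertex_cover are hypothetical placeholders for the gap reduction above and the assumed approximation algorithm:

    def decide_sat_via_gap(phi, reduce_to_graph, approx_vertex_cover, alpha):
        """Decide SAT in polynomial time, GIVEN a rho-approximation for
        vertex cover with rho < alpha and the gap reduction above.
        Contrapositive: no such approximation exists unless P = NP.
        """
        G, n = reduce_to_graph(phi)     # yes: OPT <= (2/3)n;  no: OPT > alpha*(2/3)n
        cover = approx_vertex_cover(G)  # |cover| <= rho * OPT with rho < alpha
        # yes-instance: |cover| <= rho*(2/3)*n <  alpha*(2/3)*n
        # no-instance:  |cover| >= OPT        >  alpha*(2/3)*n
        return len(cover) <= alpha * (2 / 3) * n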
- Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems
Electronic Colloquium on Computational Complexity, Report No. 75 (2019). Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems. Lijie Chen (MIT), Dylan M. McKay (MIT), Cody D. Murray, R. Ryan Williams (MIT).

Abstract. A frontier open problem in circuit complexity is to prove P^NP ⊄ SIZE[n^k] for all k; this is a necessary intermediate step towards NP ⊄ P/poly. Previously, for several classes containing P^NP, including NP^NP, ZPP^NP, and S_2P, such lower bounds have been proved via Karp-Lipton-style theorems: to prove C ⊄ SIZE[n^k] for all k, we show that C ⊂ P/poly implies a "collapse" D = C for some larger class D, where we already know D ⊄ SIZE[n^k] for all k. It seems obvious that one could take a different approach to prove circuit lower bounds for P^NP that does not require proving any Karp-Lipton-style theorems along the way. We show this intuition is wrong: (weak) Karp-Lipton-style theorems for P^NP are equivalent to fixed-polynomial size circuit lower bounds for P^NP. That is, P^NP ⊄ SIZE[n^k] for all k if and only if (NP ⊂ P/poly implies PH ⊂ i.o.-P^NP/n). Next, we present new consequences of the assumption NP ⊂ P/poly, towards proving similar results for NP circuit lower bounds. We show that under the assumption, fixed-polynomial circuit lower bounds for NP, nondeterministic polynomial-time derandomizations, and various fixed-polynomial time simulations of NP are all equivalent. Applying this equivalence, we show that circuit lower bounds for NP imply better Karp-Lipton collapses.
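For orientation (standard background, not from the abstract): the prototype of the "collapse" statements D = C referenced above is the classic Karp-Lipton theorem from 1980:

    \[
      \mathsf{NP} \subseteq \mathsf{P}/\mathrm{poly}
      \;\Longrightarrow\;
      \mathsf{PH} = \Sigma_2^{\mathsf{p}}
    \]

Combined with Kannan's lower bound Σ_2^p ⊄ SIZE[n^k], this yields the fixed-polynomial circuit lower bound for Σ_2^p in exactly the pattern the abstract describes.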
Spring 2020 (Provide Proofs for All Answers) (120 Points Total)
Complexity Qual, Spring 2020 (provide proofs for all answers; 120 points total).

Problem 1: Formal Languages (30 points total). For each of the following languages, state and prove whether (i) the language is regular, (ii) the language is context-free but not regular, or (iii) the language is non-context-free.
Problem 1a (10 points): La = {0^n 1^n | n ≥ 0}, the set of all binary strings consisting of 0's and then 1's, where the numbers of 0's and 1's are equal.
Problem 1b (10 points): Lb = {w ∈ {0,1}* | w has an even number of 0's and at least two 1's}, the set of all binary strings with an even number (including zero) of 0's and at least two 1's.
Problem 1c (10 points): Lc = {0^n # 0^{2n} # 0^{3n} | n ≥ 0}, the set of all strings of the form 0 repeated n times, followed by the # symbol, followed by 0 repeated 2n times, followed by the # symbol, followed by 0 repeated 3n times.

Problem 2: NP-Hardness (30 points total). There is a set of n people who are the vertices of an undirected graph G; there is an undirected edge between two people if they are enemies.
Problem 2a (15 points): The people must be assigned to the seats around a single large circular table, with exactly as many seats as people (both n). We would like to make sure that nobody ever sits next to one of their enemies.
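Problem 2a's decision problem is clearly in NP, since a proposed seating is a certificate checkable in polynomial time; a minimal verifier sketch (the enemy pair below is made up for the example):

    def valid_seating(seating, enemies):
        """Check a circular-seating certificate: no adjacent enemies.

        seating: the n people in clockwise order around the table.
        enemies: set of frozenset pairs, the edges of G.
        A check like this shows membership in NP; the hardness half
        would come from a reduction, e.g. from Hamiltonian cycle on the
        complement graph (a valid seating is exactly a Hamiltonian
        cycle through non-enemy edges).
        """
        n = len(seating)
        return all(
            frozenset((seating[i], seating[(i + 1) % n])) not in enemies
            for i in range(n)
        )

    enemies = {frozenset(("B", "C"))}
    print(valid_seating(["A", "B", "D", "C"], enemies))  # True
    print(valid_seating(["A", "B", "C", "D"], enemies))  # False: B next to C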
- Pointer Quantum PCPs and Multi-Prover Games
Pointer Quantum PCPs and Multi-Prover Games. Alex B. Grilo (IRIF, CNRS, Université Paris Diderot), Iordanis Kerenidis (IRIF, CNRS, Université Paris Diderot; Centre for Quantum Technologies, National University of Singapore), and Attila Pereszlényi (IRIF, CNRS, Université Paris Diderot). 1st March, 2016.

Abstract. The quantum PCP (QPCP) conjecture states that all problems in QMA, the quantum analogue of NP, admit quantum verifiers that act on only a constant number of qubits of a polynomial-size quantum proof and have a constant gap between completeness and soundness. Despite an impressive body of work trying to prove or disprove the quantum PCP conjecture, it still remains widely open. The above proof-verification statement has also been shown equivalent to the QMA-completeness of the Local Hamiltonian problem with constant relative gap. Nevertheless, unlike in the classical case, no equivalent formulation in the language of multi-prover games is known. In this work, we propose a new type of quantum proof system, the Pointer QPCP, where a verifier first accesses a classical proof that it can use as a pointer to which qubits of the quantum part of the proof to access. We define the Pointer QPCP conjecture, which states that all problems in QMA admit quantum verifiers that first access a logarithmic number of bits from the classical part of a polynomial-size proof, then act on a constant number of qubits from the quantum part of the proof, and have a constant gap between completeness and soundness. We define a new QMA-complete problem, the Set Local Hamiltonian problem, and a new restricted class of quantum multi-prover games, called CRESP games.
- Probabilistic Proof Systems: A Primer
Probabilistic Proof Systems: A Primer. Oded Goldreich, Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot, Israel. June 30, 2008.

Contents: Preface; Conventions and Organization; 1 Interactive Proof Systems (1.1 Motivation and Perspective: 1.1.1 A static object versus an interactive process, 1.1.2 Prover and Verifier, 1.1.3 Completeness and Soundness; 1.2 Definition; 1.3 The Power of Interactive Proofs: 1.3.1 A simple example, 1.3.2 The full power of interactive proofs; 1.4 Variants and finer structure, an overview: 1.4.1 Arthur-Merlin games a.k.a. public-coin proof systems, 1.4.2 Interactive proof systems with two-sided error, 1.4.3 A hierarchy of interactive proof systems, 1.4.4 Something completely different; 1.5 On computationally bounded provers, an overview: 1.5.1 How powerful should the prover be?, 1.5.2 Computational soundness); 2 Zero-Knowledge Proof Systems (2.1 Definitional issues: 2.1.1 A wider perspective, the simulation paradigm, 2.1.2 The basic definitions; 2.2 The Power of Zero-Knowledge: 2.2.1 A simple example, 2.2.2 The full power of zero-knowledge proofs).
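The "simple example" of Section 1.3.1 is, in standard treatments of interactive proofs, the graph non-isomorphism protocol: the verifier secretly permutes one of the two input graphs, and an all-powerful prover must say which one it received. A toy simulation of one round (brute-force prover, so only for tiny graphs):

    import random
    from itertools import permutations

    def relabel(edges, perm):
        """Apply the vertex relabeling perm to an edge set."""
        return frozenset(frozenset((perm[u], perm[v])) for (u, v) in edges)

    def honest_prover(n, G0, G1, H):
        """Unbounded prover: brute-force which input graph H came from."""
        if any(relabel(G0, p) == H for p in permutations(range(n))):
            return 0
        return 1

    def gni_round(n, G0, G1):
        """One round: accept iff the prover identifies the verifier's coin.

        If G0 and G1 are non-isomorphic, the honest prover always wins
        (completeness); if they are isomorphic, ANY prover guesses the coin
        with probability 1/2, so repetition drives the soundness error down.
        """
        b = random.randrange(2)
        perm = list(range(n))
        random.shuffle(perm)
        H = relabel((G0, G1)[b], perm)
        return honest_prover(n, G0, G1, H) == b

    G0 = {(0, 1), (1, 2)}            # a path on 3 vertices
    G1 = {(0, 1), (1, 2), (0, 2)}    # a triangle: not isomorphic to G0
    print(all(gni_round(3, G0, G1) for _ in range(20)))  # True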
- A Short History of Computational Complexity
The Computational Complexity Column, by Lance Fortnow, NEC Laboratories America, 4 Independence Way, Princeton, NJ 08540, USA ([email protected]). http://www.neci.nj.nec.com/homepages/fortnow/beatcs

Every third year the Conference on Computational Complexity is held in Europe, and this summer the University of Aarhus (Denmark) will host the meeting, July 7-10; more details at the conference web page, http://www.computationalcomplexity.org. This month we present a historical view of computational complexity written by Steve Homer and myself. It is a preliminary version of a chapter to be included in an upcoming North-Holland Handbook of the History of Mathematical Logic, edited by Dirk van Dalen, John Dawson and Aki Kanamori.

A Short History of Computational Complexity. Lance Fortnow (NEC Research Institute, 4 Independence Way, Princeton, NJ 08540) and Steve Homer (Computer Science Department, Boston University, 111 Cummington Street, Boston, MA 02215).

1 Introduction. It all started with a machine. In 1936, Turing developed his theoretical computational model, basing it on how he perceived mathematicians think. As digital computers were developed in the 40's and 50's, the Turing machine proved itself the right theoretical model for computation. Quickly, though, we discovered that the basic Turing machine model fails to account for the amount of time or memory needed by a computer, a critical issue today but even more so in those early days of computing. The key idea of measuring time and space as a function of the length of the input came in the early 1960's from Hartmanis and Stearns.
- GS019, Lecture 1: Introduction to Complexity Theory
GS019, Lecture 1 on Complexity Theory. Jarod Alper (jalper). Introduction to Complexity Theory.

Big O notation review. Linear function: r(n) = O(n). Polynomial function: r(n) = n^O(1). Exponential function: r(n) = 2^(n^O(1)). Logarithmic function: r(n) = O(log n). Poly-log function: r(n) = log^O(1) n.

Definition 1 (TIME): Let t : N → N. Define the time complexity class TIME(t(n)) = {L | there exists a DTM M which decides L in time O(t(n))}.
Definition 2 (NTIME): Let t : N → N. Define the time complexity class NTIME(t(n)) = {L | there exists an NDTM M which decides L in time O(t(n))}.

Example 1: Consider the language A = {0^k 1^k | k ≥ 0}. Is A ∈ TIME(n^2)? Is A ∈ TIME(n log n)? Is A ∈ TIME(n)? Could we potentially place A in a smaller complexity class if we consider other computational models?

Theorem 1: If t(n) ≥ n, then every t(n)-time multitape Turing machine has an equivalent O(t^2(n))-time single-tape Turing machine. Proof: see Theorem 7.8 in Sipser (p. 232).
Theorem 2: If t(n) ≥ n, then every t(n)-time RAM machine has an equivalent O(t^3(n))-time multi-tape Turing machine. Proof: optional exercise.
Conclusion: linear time is model-specific; polynomial time is model-independent.

Definition 3 (The Class P): P = ⋃_k TIME(n^k).
Definition 4 (The Class NP): NP = ⋃_k NTIME(n^k).
Equivalent definition of NP: NP = {L | L has a polynomial-time verifier}.
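Example 1's point about models is easy to see concretely: on a random-access machine, A = {0^k 1^k | k ≥ 0} is decidable in one linear-time pass, while a single-tape Turing machine needs Ω(n log n) for any nonregular language (a classical fact), so "linear time" depends on the model even though polynomial time does not. A one-pass recognizer sketch:

    def in_A(w: str) -> bool:
        """Decide A = { 0^k 1^k : k >= 0 } in a single pass.

        Runs in O(n) time on a RAM; the same language costs a single-tape
        TM Theta(n log n), which is what Example 1's question about other
        computational models is driving at.
        """
        zeros = 0
        i = 0
        while i < len(w) and w[i] == "0":   # count the leading 0's
            zeros += 1
            i += 1
        ones = 0
        while i < len(w) and w[i] == "1":   # count the trailing 1's
            ones += 1
            i += 1
        return i == len(w) and zeros == ones  # nothing else, equal counts

    assert in_A("") and in_A("0011")
    assert not in_A("0101") and not in_A("001")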
- Is It Easier to Prove Theorems That Are Guaranteed to Be True?
Is it Easier to Prove Theorems that are Guaranteed to be True? Rafael Pass (Cornell Tech, [email protected]) and Muthuramakrishnan Venkitasubramaniam (University of Rochester, [email protected]). April 15, 2020.

Abstract. Consider the following two fundamental open problems in complexity theory: Does a hard-on-average language in NP imply the existence of one-way functions? Does a hard-on-average language in NP imply a hard-on-average problem in TFNP (i.e., the class of total NP search problems)? Our main result is that the answer to (at least) one of these questions is yes. Both one-way functions and problems in TFNP can be interpreted as promise-true distributional NP search problems, namely distributional search problems where the sampler only samples true statements. As a direct corollary of the above result, we thus get that the existence of a hard-on-average distributional NP search problem implies a hard-on-average promise-true distributional NP search problem. In other words: it is no easier to find witnesses (a.k.a. proofs) for efficiently-sampled statements (theorems) that are guaranteed to be true. This result follows from a more general study of interactive puzzles, a generalization of average-case hardness in NP, and in particular a novel round-collapse theorem for computationally-sound protocols, analogous to Babai-Moran's celebrated round-collapse theorem for information-theoretically sound protocols. As another consequence of this treatment, we show that the existence of O(1)-round public-coin non-trivial arguments (i.e., argument systems that are not proofs) implies the existence of a hard-on-average problem in NP/poly.
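To make the objects in the first question concrete: a one-way function gives rise to exactly such a promise-true distributional search problem, since sampling y = f(x) for a random x guarantees that "y has a preimage" is a true statement. A toy sketch of the interface (SHA-256 here is only a stand-in candidate, not anything the paper commits to):

    import hashlib
    import os

    def f(x: bytes) -> bytes:
        """Candidate one-way function: easy to evaluate, conjecturally
        hard to invert on random inputs."""
        return hashlib.sha256(x).digest()

    def verify(y: bytes, candidate: bytes) -> bool:
        """NP verifier for the search problem: check a claimed preimage."""
        return f(candidate) == y

    # Promise-true sampler: every sampled instance y really has a witness.
    x = os.urandom(16)
    y = f(x)
    assert verify(y, x)   # the planted preimage always verifies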
- On Exponential-Time Completeness of the Circularity Problem for Attribute Grammars
On Exponential-Time Completeness of the Circularity Problem for Attribute Grammars. Pei-Chi Wu, Department of Computer Science and Information Engineering, National Penghu Institute of Technology, Penghu, Taiwan, R.O.C. ([email protected])

Abstract. Attribute grammars (AGs) are a formal technique for defining the semantics of programming languages. Existing complexity proofs for the circularity problem of AGs are based on automata theory, such as writing pushdown acceptors and alternating Turing machines: they reduce the acceptance problems of those automata, which are exponential-time (EXPTIME) complete, to the AG circularity problem. These proofs thus show that the circularity problem is EXPTIME-hard, i.e., at least as hard as the most difficult problems in EXPTIME; however, none has given a proof of the EXPTIME-completeness of the problem. This paper first presents an alternating Turing machine for the circularity problem. The alternating Turing machine requires polynomial space; thus the circularity problem is in EXPTIME and is therefore EXPTIME-complete.

Categories and Subject Descriptors: D.3.1 [Programming Languages]: Formal Definitions and Theory (semantics); D.3.4 [Programming Languages]: Processors (compilers, translator writing systems and compiler generators); F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems (computations on discrete structures); F.4.2 [Grammars and Other Rewriting Systems]: decision problems. Key words: attribute grammars, alternating Turing machines, circularity problem, EXPTIME-complete.

1. Introduction. Attribute grammars (AGs) [8] are a formal technique for defining the semantics of programming languages. Many AG systems (e.g., Eli [4] and Synthesizer Generator [10]) have been developed to assist the development of "language processors" such as compilers, semantic checkers, language-based editors, etc.
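For intuition about the problem itself (a sketch of ours, not the paper's construction): once a single derivation tree is fixed, circularity is just cycle detection in that tree's attribute dependency graph; what makes the general problem EXPTIME-complete is that the grammar generates infinitely many trees whose dependency behaviour must be summarized. A minimal per-tree check, with hypothetical attribute names:

    def has_cycle(deps):
        """Depth-first search for a cyclic attribute dependency.

        deps: dict mapping an attribute occurrence to the attribute
        occurrences it depends on, i.e. the dependency graph of ONE
        fixed derivation tree. The circularity problem asks whether
        ANY derivation tree of the grammar yields a cycle.
        """
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {}

        def dfs(a):
            color[a] = GRAY
            for b in deps.get(a, ()):
                if color.get(b, WHITE) == GRAY:
                    return True               # back edge: cycle found
                if color.get(b, WHITE) == WHITE and dfs(b):
                    return True
            color[a] = BLACK
            return False

        return any(color.get(a, WHITE) == WHITE and dfs(a) for a in deps)

    # Hypothetical attributes: X.inh and X.syn feeding each other.
    print(has_cycle({"X.inh": ["X.syn"], "X.syn": ["X.inh"]}))  # True
    print(has_cycle({"X.inh": [], "X.syn": ["X.inh"]}))         # False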