Lecture 3: Bounded Error Quantum Polynomial Time (BQP)
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- Complexity Theory Lecture 9: Co-NP, Co-NP-Complete
Complexity Theory, Lecture 9. Anuj Dawar, University of Cambridge Computer Laboratory, Easter Term 2010. http://www.cl.cam.ac.uk/teaching/0910/Complexity/
co-NP. As co-NP is the collection of complements of languages in NP, and P is closed under complementation, co-NP can also be characterised as the collection of languages of the form L = {x | ∀y, |y| < p(|x|) → R(x, y)}. NP is the collection of languages with succinct certificates of membership; co-NP is the collection of languages with succinct certificates of disqualification.
NP ∩ co-NP. Any of the following situations is consistent with our present state of knowledge: P = NP = co-NP; P = NP ∩ co-NP with NP ≠ co-NP; P ≠ NP ∩ co-NP with NP = co-NP; P ≠ NP ∩ co-NP with NP ≠ co-NP.
co-NP-complete. VAL, the collection of Boolean expressions that are valid, is co-NP-complete. Any language that is the complement of an NP-complete language is co-NP-complete: any reduction of a language L1 to L2 is also a reduction of the complement of L1 to the complement of L2, and there is an easy reduction from the complement of SAT to VAL, namely the map that takes an expression to its negation. Consequently, VAL ∈ P ⇒ P = NP = co-NP, and VAL ∈ NP ⇒ NP = co-NP.
Prime Numbers / Primality. Consider the decision problem PRIME. Another way of putting this is that Composite is in NP.
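To make the negation map above concrete, here is a minimal brute-force sketch in Python (the helpers `is_satisfiable`, `is_valid`, and `negate` are ours, not from the slides): a formula φ is in the complement of SAT exactly when its negation is in VAL. The exhaustive search is exponential in the number of variables and purely illustrative.

```python
from itertools import product

# Brute-force helpers over formulas given as Python predicates; exponential
# in the number of variables and purely illustrative.
def is_satisfiable(phi, num_vars):
    return any(phi(*bits) for bits in product([False, True], repeat=num_vars))

def is_valid(phi, num_vars):
    return all(phi(*bits) for bits in product([False, True], repeat=num_vars))

def negate(phi):
    # The reduction described above: map an expression to its negation.
    return lambda *bits: not phi(*bits)

phi = lambda x, y: x and (not x) and y   # an unsatisfiable formula
assert not is_satisfiable(phi, 2)        # phi is in the complement of SAT ...
assert is_valid(negate(phi), 2)          # ... exactly when its negation is in VAL
```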
- The Shrinking Property for NP and coNP
Theoretical Computer Science 412 (2011) 853–864.
The shrinking property for NP and coNP. Christian Glaßer (Julius-Maximilians-Universität Würzburg, Germany), Christian Reitwießner (Julius-Maximilians-Universität Würzburg, Germany), Victor Selivanov (A.P. Ershov Institute of Informatics Systems, Siberian Division of the Russian Academy of Sciences, Russia).
Abstract: We study the shrinking and separation properties (two notions well-known in descriptive set theory) for NP and coNP and show that under reasonable complexity-theoretic assumptions, both properties do not hold for NP and the shrinking property does not hold for coNP. In particular we obtain the following results.
1. NP and coNP do not have the shrinking property unless PH is finite. In general, Σ_n^P and Π_n^P do not have the shrinking property unless PH is finite. This solves an open question posed by Selivanov (1994) [33].
2. The separation property does not hold for NP unless UP ⊆ coNP.
3. The shrinking property does not hold for NP unless there exist NP-hard disjoint NP-pairs (the existence of such pairs would contradict a conjecture of Even et al. (1984) [6]).
4. The shrinking property does not hold for NP unless there exist complete disjoint NP-pairs.
Moreover, we prove that the assumption NP ≠ coNP is too weak to refute the shrinking property for NP in a relativizable way.
Keywords: computational complexity; polynomial hierarchy; NP-pairs; P-separability; multivalued functions.
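The abstract does not restate the two properties; for orientation, a sketch of the standard definitions as used in descriptive set theory (our notation; the paper's formal versions may differ in detail):

```latex
% Standard statements of the two properties for a class C of languages
% (our notation; the paper's formal definitions may differ in details).
\begin{itemize}
  \item Shrinking property: for all $A, B \in \mathcal{C}$ there exist
        $A' \subseteq A$ and $B' \subseteq B$ with $A', B' \in \mathcal{C}$,
        $A' \cap B' = \emptyset$, and $A' \cup B' = A \cup B$.
  \item Separation property: for all disjoint $A, B \in \mathcal{C}$ there is an
        $S \in \mathcal{C} \cap \mathrm{co}\mathcal{C}$ with $A \subseteq S$
        and $S \cap B = \emptyset$.
\end{itemize}
```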
- Quantum Supremacy
Practical QS: perform some computational task on a well-controlled quantum device, which cannot be simulated in a reasonable time by the best-known classical algorithms and hardware. Theoretical QS: perform a computational task efficiently on a quantum device, and prove that the task cannot be efficiently classically simulated. Since such a proof seems to be beyond the capabilities of our current civilization, we lower the standards for theoretical QS: one seeks to provide formal evidence that classical simulation is unlikely. For example, 3-SAT is NP-complete, so it cannot be efficiently classically solved unless P = NP. Theoretical QS (with the lowered standard): perform a computational task efficiently on a quantum device, and prove that the task cannot be efficiently classically simulated unless the polynomial hierarchy collapses to the third level.
A common feature of QS arguments is that they consider sampling problems, rather than decision problems. These allow us to characterize the complexity of sampling measurements of quantum states. Which is more difficult? Task A: deciding whether a circuit outputs 1 with probability at least 2/3, or at most 1/3. Task B: sampling from the output of an n-qubit circuit in the computational basis. Sampling from distributions is generically more difficult than approximating observables, since we can use samples to estimate observables, but not the other way around. One can imagine quantum systems whose local observables are easy to classically compute, but for which sampling the full state is computationally complex. By moving from decision problems to sampling problems, we make the task of classical simulation much more difficult.
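The direction "samples give observable estimates" is just Monte Carlo estimation; a minimal Python sketch, using a biased classical source as a stand-in for a device, since only the direction of the reduction is being illustrated:

```python
import random

# Stand-in sampler: a biased classical source of n-bit strings. Only the
# direction "samples => observable estimates" is being illustrated here.
def sample_bitstring(n):
    return [1 if random.random() < 0.3 else 0 for _ in range(n)]

def estimate_observable(n, num_samples, observable):
    # Monte Carlo: average the observable over independent samples.
    total = sum(observable(sample_bitstring(n)) for _ in range(num_samples))
    return total / num_samples

# Example observable: the value of the first output bit (its mean is about 0.3 here).
print(estimate_observable(n=5, num_samples=10_000, observable=lambda bits: bits[0]))
```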
- Simple Doubly-Efficient Interactive Proof Systems for Locally-Characterizable Sets
Electronic Colloquium on Computational Complexity, Revision 3 of Report No. 18 (2017).
Simple doubly-efficient interactive proof systems for locally-characterizable sets. Oded Goldreich (Department of Computer Science, Weizmann Institute of Science, Rehovot, Israel) and Guy N. Rothblum. September 8, 2017.
Abstract: A proof system is called doubly-efficient if the prescribed prover strategy can be implemented in polynomial time and the verifier's strategy can be implemented in almost-linear time. We present direct constructions of doubly-efficient interactive proof systems for problems in P that are believed to have relatively high complexity. Specifically, such constructions are presented for t-CLIQUE and t-SUM. In addition, we present a generic construction of such proof systems for a natural class that contains both problems and is in NC (and also in SC). The proof systems presented by us are significantly simpler than the proof systems presented by Goldwasser, Kalai and Rothblum (JACM, 2015), let alone those presented by Reingold, Rothblum, and Rothblum (STOC, 2016), and can be implemented using a smaller number of rounds.
Contents: 1 Introduction (1.1 The current work; 1.2 Relation to prior work; 1.3 Organization and conventions); 2 Preliminaries: The sum-check protocol; 3 The case of t-CLIQUE; 4 The general result (4.1 A natural class: locally-characterizable sets; 4.2 Proof of Theorem 1; 4.3 Generalization: round versus computation trade-off; 4.4 Extension to a wider class); 5 The case of t-SUM; References; Appendix: An MA proof system for locally-characterizable sets.
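For orientation, the sum-check protocol listed in the preliminaries handles claims of the following standard form (our statement, not quoted from the paper):

```latex
% Standard form of a sum-check claim (stated here for orientation; the paper's
% exact setup may differ). For an m-variate polynomial g of low degree over a
% finite field, the prover convinces the verifier that
\sum_{x_1 \in \{0,1\}} \cdots \sum_{x_m \in \{0,1\}} g(x_1, \ldots, x_m) = v ,
% using m rounds, after which the verifier only needs to evaluate g at a
% single random point.
```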
- Lecture 10: Space Complexity III
Lecture 10: Space Complexity III. Arijit Bishnu, 27.03.2010.
Outline: 1. Space Complexity Classes: NL and L; 2. Reductions; 3. NL-completeness; 4. The Relation between NL and coNL; 5. A Relation Among the Complexity Classes.
Space Complexity Classes. Definition (recapitulation): PSPACE = ∪_{c>0} SPACE(n^c); the class PSPACE is an analog of the class P. Definition (recapitulation): NPSPACE = ∪_{c>0} NSPACE(n^c); the class NPSPACE is an analog of the class NP. Definition: L = SPACE(log n). Definition: NL = NSPACE(log n).
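Collecting the recapitulated definitions in display form, together with the canonical NL-complete problem (the PATH example is standard and is our addition for orientation, not quoted from the slides):

```latex
% The definitions recapitulated in the slides, in display form, plus the
% canonical NL-complete problem (our addition, for orientation).
\mathrm{L} = \mathrm{SPACE}(\log n), \qquad
\mathrm{NL} = \mathrm{NSPACE}(\log n), \qquad
\mathrm{PSPACE} = \bigcup_{c>0} \mathrm{SPACE}(n^{c}), \qquad
\mathrm{NPSPACE} = \bigcup_{c>0} \mathrm{NSPACE}(n^{c}).
% Standard fact: PATH = { <G, s, t> : G is a directed graph with a path from
% s to t } is NL-complete under logspace reductions.
```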
- The Weakness of CTC Qubits and the Power of Approximate Counting
The weakness of CTC qubits and the power of approximate counting. Ryan O'Donnell (Department of Computer Science, Carnegie Mellon University; work performed while the author was at the Boğaziçi University Computer Engineering Department, supported by Marie Curie International Incoming Fellowship project number 626373) and A. C. Cem Say (Boğaziçi University Computer Engineering Department). April 7, 2015.
Abstract: We present results in structural complexity theory concerned with the following interrelated topics: computation with postselection/restarting, closed timelike curves (CTCs), and approximate counting. The first result is a new characterization of the lesser known complexity class BPP_path in terms of more familiar concepts. Precisely, BPP_path is the class of problems that can be efficiently solved with a nonadaptive oracle for the Approximate Counting problem. Similarly, PP equals the class of problems that can be solved efficiently with nonadaptive queries for the related Approximate Difference problem. Another result is concerned with the computational power conferred by CTCs; or equivalently, the computational complexity of finding stationary distributions for quantum channels. Using the above-mentioned characterization of PP, we show that any poly(n)-time quantum computation using a CTC of O(log n) qubits may as well just use a CTC of 1 classical bit. This result essentially amounts to showing that one can find a stationary distribution for a poly(n)-dimensional quantum channel in PP.
1 Introduction. It is well known that studying "non-realistic" augmentations of computational models can shed a great deal of light on the power of more standard models. The study of nondeterminism and the study of relativization (i.e., oracle computation) are famous examples of this phenomenon.
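In symbols, the two characterizations stated in the abstract read roughly as follows (our rendering of the notation, not the paper's):

```latex
% Rough symbolic form of the two characterizations in the abstract (our
% rendering; the subscript || denotes nonadaptive, i.e. parallel, oracle queries).
\mathrm{BPP}_{\mathrm{path}} \;=\; \mathrm{P}_{\|}^{\mathrm{ApproxCount}},
\qquad
\mathrm{PP} \;=\; \mathrm{P}_{\|}^{\mathrm{ApproxDiff}}.
```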
- A Short History of Computational Complexity
The Computational Complexity Column, by Lance Fortnow, NEC Laboratories America, 4 Independence Way, Princeton, NJ 08540, USA. [email protected] http://www.neci.nj.nec.com/homepages/fortnow/beatcs
Every third year the Conference on Computational Complexity is held in Europe, and this summer the University of Aarhus (Denmark) will host the meeting July 7-10. More details at the conference web page http://www.computationalcomplexity.org
This month we present a historical view of computational complexity written by Steve Homer and myself. This is a preliminary version of a chapter to be included in an upcoming North-Holland Handbook of the History of Mathematical Logic edited by Dirk van Dalen, John Dawson and Aki Kanamori.
A Short History of Computational Complexity. Lance Fortnow (NEC Research Institute, 4 Independence Way, Princeton, NJ 08540) and Steve Homer (Computer Science Department, Boston University, 111 Cummington Street, Boston, MA 02215).
1 Introduction. It all started with a machine. In 1936, Turing developed his theoretical computational model. He based his model on how he perceived mathematicians think. As digital computers were developed in the 40's and 50's, the Turing machine proved itself as the right theoretical model for computation. Quickly though we discovered that the basic Turing machine model fails to account for the amount of time or memory needed by a computer, a critical issue today but even more so in those early days of computing. The key idea to measure time and space as a function of the length of the input came in the early 1960's by Hartmanis and Stearns.
- Subgrouping Reduces Complexity and Speeds up Learning in Recurrent Networks
Subgrouping Reduces Complexity and Speeds Up Learning in Recurrent Networks. David Zipser, Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093.
1 INTRODUCTION. Recurrent nets are more powerful than feedforward nets because they allow simulation of dynamical systems. Everything from sine wave generators through computers to the brain are potential candidates, but to use recurrent nets to emulate dynamical systems we need learning algorithms to program them. Here I describe a new twist on an old algorithm for recurrent nets and compare it to its predecessors.
2 BPTT. In the beginning there was BACKPROPAGATION THROUGH TIME (BPTT), which was described by Rumelhart, Williams, and Hinton (1986). The idea is to add a copy of the whole recurrent net to the top of a growing feedforward network on each update cycle. Backpropagating through this stack corrects for past mistakes by adding up all the weight changes from past times. A difficulty with this method is that the feedforward net gets very big. The obvious solution is to truncate it at a fixed number of copies by killing an old copy every time a new copy is added. The truncated-BPTT algorithm is illustrated in Figure 1. It works well, more about this later.
3 RTRL. It turns out that it is not necessary to keep an ever growing stack of copies of the recurrent net as BPTT does. A fixed number of parameters can record all of past time. This is done in the REAL TIME RECURRENT LEARNING (RTRL) algorithm of Williams and Zipser (1989).
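The "truncate the stack at a fixed number of copies" idea translates directly into code. Below is a minimal NumPy sketch of truncated BPTT for a tiny tanh RNN with a squared loss at the final step; all sizes, the loss, and the learning rate are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

# Toy truncated-BPTT sketch for a one-layer tanh RNN with a squared loss at
# the final time step (illustrative hyperparameters, not from the paper).
rng = np.random.default_rng(0)
n_in, n_hid, n_out, T, k = 3, 5, 2, 20, 4        # k = number of retained copies

U = rng.normal(scale=0.1, size=(n_hid, n_in))    # input -> hidden
W = rng.normal(scale=0.1, size=(n_hid, n_hid))   # hidden -> hidden (recurrent)
V = rng.normal(scale=0.1, size=(n_out, n_hid))   # hidden -> output

xs = [rng.normal(size=n_in) for _ in range(T)]
target = rng.normal(size=n_out)

# Forward pass, keeping only the last k+1 hidden states: each update "adds a
# copy of the net to the stack" and kills the oldest one.
hs = [np.zeros(n_hid)]
for x in xs:
    hs.append(np.tanh(W @ hs[-1] + U @ x))
    if len(hs) > k + 1:
        hs.pop(0)

# Backward pass through the truncated stack only.
err = V @ hs[-1] - target                        # grad of 0.5*||V h_T - y||^2 w.r.t. output
dV = np.outer(err, hs[-1])
dW, dU = np.zeros_like(W), np.zeros_like(U)
dh = V.T @ err
for step in range(1, k + 1):
    h, h_prev = hs[-step], hs[-step - 1]
    delta = dh * (1.0 - h ** 2)                  # backprop through tanh
    dW += np.outer(delta, h_prev)
    dU += np.outer(delta, xs[-step])
    dh = W.T @ delta                             # hand the gradient to the older copy

for param, grad in ((U, dU), (W, dW), (V, dV)):
    param -= 0.1 * grad                          # one (truncated) gradient step
```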
- 1 The Class coNP
80240233: Computational Complexity, Lecture 3. ITCS, Tsinghua University, Fall 2007. 16 October 2007. Instructor: Andrej Bogdanov. Notes by: Jialin Zhang and Pinyan Lu.
In this lecture, we introduce the complexity class coNP, the Polynomial Hierarchy and the notion of oracle.
1 The class coNP. The complexity class NP contains decision problems asking this kind of question: on input x, ∃y : |y| = p(|x|) s.t. V(x, y), where p is a polynomial and V is a relation which can be computed by a polynomial-time Turing Machine (TM). Some examples. SAT: given a Boolean formula φ, does φ have a satisfying assignment? MATCHING: given a graph G, does it have a perfect matching? Now consider the opposite problems of SAT and MATCHING. UNSAT: given a Boolean formula φ, does φ have no satisfying assignment? UNMATCHING: given a graph G, does it have no perfect matching? Problems of this kind are called coNP problems. Formally we have the following definition.
Definition 1. For every L ⊆ {0,1}*, we say that L ∈ coNP if and only if the complement of L is in NP, i.e. L ∈ coNP iff there exist a polynomial-time TM A and a polynomial p such that x ∈ L ⇔ ∀y with |y| = p(|x|), A(x, y) accepts.
Then the natural question is how P, NP and coNP relate. First we have the following trivial relationships.
Theorem 2. P ⊆ NP, P ⊆ coNP.
For instance, MATCHING ∈ coNP. And we already know that SAT ∈ NP. Is SAT ∈ coNP? If it is true, then we will have NP = coNP. This is the following theorem.
Theorem 3. If SAT ∈ coNP, then NP = coNP.
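The excerpt ends with the statement of Theorem 3; the standard argument behind it is short enough to sketch (our paraphrase, not the notes' proof):

```latex
% Our paraphrase of the standard argument behind Theorem 3, not the notes' proof.
\textit{Proof sketch.} Suppose $\mathrm{SAT} \in \mathrm{coNP}$ and let
$L \in \mathrm{NP}$. Since SAT is NP-complete, there is a polynomial-time
reduction $f$ with $x \in L \Leftrightarrow f(x) \in \mathrm{SAT}$. Because
coNP is closed under polynomial-time many-one reductions, $L \in \mathrm{coNP}$,
so $\mathrm{NP} \subseteq \mathrm{coNP}$. Taking complements of both sides gives
$\mathrm{coNP} \subseteq \mathrm{NP}$, hence $\mathrm{NP} = \mathrm{coNP}$. \qed
```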
- Computational Complexity
Computational Complexity. Salil Vadhan, School of Engineering & Applied Sciences, Harvard University.
Citation: Vadhan, Salil P. 2011. Computational complexity. In Encyclopedia of Cryptography and Security, second edition, ed. Henk C.A. van Tilborg and Sushil Jajodia. New York: Springer. Published version: http://refworks.springer.com/mrw/index.php?id=2703. Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:33907951
Synonyms: complexity theory. Related concepts and keywords: exponential time; O-notation; one-way function; polynomial time; security (computational, unconditional); sub-exponential time.
Definition. Computational complexity theory is the study of the minimal resources needed to solve computational problems. In particular, it aims to distinguish between those problems that possess efficient algorithms (the "easy" problems) and those that are inherently intractable (the "hard" problems). Thus computational complexity provides a foundation for most of modern cryptography, where the aim is to design cryptosystems that are "easy to use" but "hard to break". (See security (computational, unconditional).)
Theory. Running Time. The most basic resource studied in computational complexity is running time: the number of basic "steps" taken by an algorithm. (Other resources, such as space, i.e., memory usage, are also studied, but they will not be discussed here.) To make this precise, one needs to fix a model of computation (such as the Turing machine), but here it suffices to informally think of it as the number of "bit operations" when the input is given as a string of 0's and 1's.
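The entry goes on to measure running time as a function of input length; for orientation, our summary of the standard convention it is alluding to:

```latex
% Our summary of the standard convention, for orientation only.
An algorithm $A$ runs in time $T(n)$ if, on every input $x$ of length $n$
(as a string of bits), it halts within at most $T(n)$ basic steps; it runs in
\emph{polynomial time} if $T(n) \le c \cdot n^{c} + c$ for some constant
$c > 0$ and all $n$, which is the usual formalization of ``efficient''.
```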
- Complexity Theory
Complexity Theory. IE 661: Scheduling Theory, Fall 2003. Satyaki Ghosh Dastidar. University at Buffalo, Department of Industrial Engineering.
Outline: Goals; Computation of Problems (Concepts and Definitions); Complexity (Classes and Problems); Polynomial Time Reductions (Examples and Proofs); Summary.
Goals of Complexity Theory: to provide a method of quantifying problem difficulty in an absolute sense; to provide a method of comparing the relative difficulty of two different problems; to be able to rigorously define the meaning of efficient algorithm (e.g. time complexity analysis of an algorithm).
Computation of Problems: Concepts and Definitions. Problems and Instances: a problem or model is an infinite family of instances whose objective function and constraints have a specific structure. An instance is obtained by specifying values for the various problem parameters. Measurement of Difficulty. Instance: running time (measure the total number of elementary operations). Problem: best case (no guarantee about the difficulty of a given instance); average case (specifies a probability distribution on the instances); worst case (addresses these problems and is usually easier to analyze).
Time Complexity. Θ-notation (asymptotic tight bound): Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0 }. O-notation (asymptotic upper bound): O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c g(n) for all n ≥ n0 }.
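As a worked instance of the Θ definition just quoted (our example, with explicit constants):

```latex
% A worked instance of the Theta definition above (our example, with explicit
% constants).
3n^{2} + 5n = \Theta(n^{2}): \quad \text{take } c_{1} = 3,\ c_{2} = 4,\ n_{0} = 5,
\ \text{since } 0 \le 3n^{2} \le 3n^{2} + 5n \le 4n^{2} \text{ for all } n \ge 5
\ (\text{because } 5n \le n^{2} \text{ once } n \ge 5).
```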
- Lecture 8: Complete Problems for Other Complexity Classes
IAS/PCMI Summer Session 2000, Clay Mathematics Undergraduate Program, Basic Course on Computational Complexity. Lecture 8: Complete Problems for Other Complexity Classes. David Mix Barrington and Alexis Maciel, July 26, 2000.
In our first week of lectures, we introduced the following complexity classes and established this sequence of inclusions: AC^0 ⊆ NC^1 ⊆ L ⊆ NL ⊆ AC^1 ⊆ · · · ⊆ NC ⊆ P ⊆ NP ⊆ PSPACE ⊆ EXP. (Once again, all the circuit classes below L must be logspace uniform, while those between NL and P can either be all logspace uniform or all P uniform.) We also pointed out that among all these inclusions, only one is known to be strict: the first one. In the preceding two lectures, we have examined the concept of NP-completeness and argued its relevance to the question of whether P equals NP. The NP-completeness of a problem can also be viewed as evidence that the problem is not in P, but this depends on how strongly one believes that P ≠ NP. In this lecture, our goal will be to identify complete problems for some of the other complexity classes.
1. NL-Completeness. Before defining the notion of NL-completeness, we first need to decide on the kind of reducibility that would be appropriate. Polynomial-time reducibility would not be very useful because NL ⊆ P, which implies that the reduction itself would be able to solve any problem in NL. As a consequence, every language in NL (except ∅ and Σ*) would be NL-complete. A better choice is a kind of reducibility that corresponds to a subclass of NL. We also want this subclass X to be closed under function composition so that if A ≤_X B and B ∈ X, then A ∈ X.
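The subclass the lecture is about to settle on is logspace reducibility; for orientation, the standard formulation (our statement, not quoted from the notes):

```latex
% Standard definition of the reducibility the lecture is about to adopt (our
% statement, for orientation).
A \le_{\mathrm{L}} B \iff \exists f \text{ computable in } O(\log n)
\text{ work space such that } x \in A \Leftrightarrow f(x) \in B
\text{ for every } x.
% Logspace-computable functions compose (recompute bits of the intermediate
% output on demand instead of storing it), so if A \le_L B and B \in L
% (or B \in NL), then A \in L (respectively A \in NL).
```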