Interactive Proofs for Quantum Computations
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Database Theory
DATABASE THEORY, Lecture 4: Complexity of FO Query Answering. Markus Krötzsch, TU Dresden, 21 April 2016.

Overview: 1. Introduction / relational data model; 2. First-order queries; 3. Complexity of query answering; 4. Complexity of FO query answering; 5. Conjunctive queries; 6. Tree-like conjunctive queries; 7. Query optimisation; 8. Conjunctive query optimisation / first-order expressiveness; 9. First-order expressiveness / introduction to Datalog; 10. Expressive power and complexity of Datalog; 11. Optimisation and evaluation of Datalog; 12. Evaluation of Datalog (2); 13. Graph databases and path queries; 14. Outlook: database theory in practice. See the course homepage for more information and materials.

How to Measure Query Answering Complexity. Query answering as a decision problem: consider Boolean queries. Various notions of complexity: combined complexity (complexity w.r.t. the size of the query and the database instance); data complexity (worst-case complexity for any fixed query); query complexity (worst-case complexity for any fixed database instance). Common complexity classes: L ⊆ NL ⊆ P ⊆ NP ⊆ PSpace ⊆ ExpTime.

An Algorithm for Evaluating FO Queries.

function Eval(φ, I)
  switch (φ) {
    case p(c1, …, cn): return ⟨c1, …, cn⟩ ∈ p^I
    case ¬ψ: return ¬Eval(ψ, I)
    case ψ1 ∧ ψ2: return Eval(ψ1, I) ∧ Eval(ψ2, I)
    case ∃x.ψ:
      for c ∈ Δ^I {
        if Eval(ψ[x ↦ c], I) then return true
      }
      return false
  }

FO Algorithm Worst-Case Runtime. Let m be the size of φ, and let n = |I| (total table sizes). How many recursive calls of Eval are there? One per subexpression: at most m. Maximum depth of recursion? Bounded by the total number of calls: at most m. Maximum number of iterations of the for loop? |Δ^I| ≤ n per recursion level, so at most n^m iterations overall. Checking ⟨c1, …, cn⟩ ∈ p^I can be done in linear time w.r.t. …
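Below is a minimal Python sketch of the recursive evaluation procedure in the preview above, under an assumed ad-hoc encoding of formulas as nested tuples and of the instance as a dict of relations plus an explicit active domain (these representation choices are illustrative, not taken from the slides):

# Sketch of the recursive FO evaluation from the slides (assumed encoding:
# formulas as nested tuples, instance as {relation: set of tuples}).
def eval_fo(phi, instance, domain, env=None):
    env = env or {}
    kind = phi[0]
    if kind == "atom":                                  # ("atom", p, (t1, ..., tn))
        _, p, terms = phi
        tup = tuple(env.get(t, t) for t in terms)       # substitute bound variables
        return tup in instance[p]
    if kind == "not":                                   # ("not", psi)
        return not eval_fo(phi[1], instance, domain, env)
    if kind == "and":                                   # ("and", psi1, psi2)
        return (eval_fo(phi[1], instance, domain, env)
                and eval_fo(phi[2], instance, domain, env))
    if kind == "exists":                                # ("exists", x, psi)
        _, x, psi = phi
        return any(eval_fo(psi, instance, domain, {**env, x: c}) for c in domain)
    raise ValueError(f"unknown connective {kind!r}")

# Example: exists x. exists y. parent(x, y) over a two-element domain.
instance = {"parent": {("alice", "bob")}}
domain = {"alice", "bob"}
query = ("exists", "x", ("exists", "y", ("atom", "parent", ("x", "y"))))
print(eval_fo(query, instance, domain))                 # True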
Interactive Proof Systems and Alternating Time-Space Complexity
Theoretical Computer Science 113 (1993) 55-73, Elsevier. Interactive proof systems and alternating time-space complexity. Lance Fortnow and Carsten Lund, Department of Computer Science, University of Chicago, 1100 E. 58th Street, Chicago, IL 60637, USA.

Abstract. Fortnow, L. and C. Lund, Interactive proof systems and alternating time-space complexity, Theoretical Computer Science 113 (1993) 55-73. We show a rough equivalence between alternating time-space complexity and a public-coin interactive proof system with the verifier having a polynomially related time-space complexity. Special cases include the following: All of NC has interactive proofs with a log-space polynomial-time public-coin verifier, vastly improving the best previous lower bound of LOGCFL for this model (Fortnow and Sipser, 1988). All languages in P have interactive proofs with a polynomial-time public-coin verifier using o(log² n) space. All exponential-time languages have interactive proof systems with public-coin polynomial-space exponential-time verifiers. To achieve better bounds, we show how to reduce a k-tape alternating Turing machine to a 1-tape alternating Turing machine with only a constant-factor increase in time and space.

1. Introduction. In 1981, Chandra et al. [4] introduced alternating Turing machines, an extension of nondeterministic computation where the Turing machine can make both existential and universal moves. In 1985, Goldwasser et al. [10] and Babai [1] introduced interactive proof systems, an extension of nondeterministic computation consisting of two players, an infinitely powerful prover and a probabilistic polynomial-time verifier. The prover tries to convince the verifier of the validity of some statement.
Simulating Quantum Field Theory with a Quantum Computer
Simulating quantum field theory with a quantum computer. John Preskill, Lattice 2018, 28 July 2018.

This talk has two parts: (1) near-term prospects for quantum computing, and (2) opportunities in quantum simulation of quantum field theory. Exascale digital computers will advance our knowledge of QCD, but some challenges will remain, especially concerning real-time evolution and properties of nuclear matter and quark-gluon plasma at nonzero temperature and chemical potential. Digital computers may never be able to address these (and other) problems; quantum computers will solve them eventually, though I'm not sure when. The physics payoff may still be far away, but today's research can hasten the arrival of a new era in which quantum simulation fuels progress in fundamental physics.

Frontiers of Physics (slide table): short distance (Higgs boson, neutrino masses, supersymmetry, quantum gravity, string theory); long distance (large-scale structure, cosmic microwave background, dark matter, dark energy, gravitational waves); complexity ("More is different", many-body entanglement, phases of quantum matter, quantum computing, quantum spacetime). [Slide illustrations: particle collision, molecular chemistry, entangled electrons, superconductor, black hole, early universe.]

A quantum computer can simulate efficiently any physical process that occurs in Nature. (Maybe. We don't actually know for sure.)

Two fundamental ideas: (1) quantum complexity, why we think quantum computing is powerful; and (2) quantum error correction, why we think quantum computing is scalable. A complete description of a typical quantum state of just 300 qubits requires more bits than the number of atoms in the visible universe.

Why we think quantum computing is powerful: we know examples of problems that can be solved efficiently by a quantum computer, where we believe the problems are hard for classical computers.
If NP Languages Are Hard on the Worst-Case, Then It Is Easy to Find Their Hard Instances
IF NP LANGUAGES ARE HARD ON THE WORST-CASE, THEN IT IS EASY TO FIND THEIR HARD INSTANCES. Dan Gutfreund, Ronen Shaltiel, and Amnon Ta-Shma.

Abstract. We prove that if NP ⊄ BPP, i.e., if SAT is worst-case hard, then for every probabilistic polynomial-time algorithm trying to decide SAT, there exists some polynomially samplable distribution that is hard for it. That is, the algorithm often errs on inputs from this distribution. This is the first worst-case to average-case reduction for NP of any kind. We stress, however, that this does not mean that there exists one fixed samplable distribution that is hard for all probabilistic polynomial-time algorithms, which is a prerequisite assumption needed for one-way functions and cryptography (even if not a sufficient assumption). Nevertheless, we do show that there is a fixed distribution on instances of NP-complete languages that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial-time algorithms (unless NP is easy in the worst case). Our results are based on the following lemma, which may be of independent interest: given the description of an efficient (probabilistic) algorithm that fails to solve SAT in the worst case, we can efficiently generate at most three Boolean formulae (of increasing lengths) such that the algorithm errs on at least one of them.

Keywords. Average-case complexity, worst-case to average-case reductions, foundations of cryptography, pseudo classes.

Subject classification. 68Q10 (modes of computation: nondeterministic, parallel, interactive, probabilistic, etc.); 68Q15 (complexity classes: hierarchies, relations among complexity classes, etc.); 68Q17 (computational difficulty of problems: lower bounds, completeness, difficulty of approximation, etc.); 94A60 (cryptography).
On the Randomness Complexity of Interactive Proofs and Statistical Zero-Knowledge Proofs*
On the Randomness Complexity of Interactive Proofs and Statistical Zero-Knowledge Proofs. Benny Applebaum, Eyal Golombek.

Abstract. We study the randomness complexity of interactive proofs and zero-knowledge proofs. In particular, we ask whether it is possible to reduce the randomness complexity, R, of the verifier to be comparable with the number of bits, C_V, that the verifier sends during the interaction. We show that such randomness sparsification is possible in several settings. Specifically, unconditional sparsification can be obtained in the non-uniform setting (where the verifier is modelled as a circuit), and in the uniform setting where the parties have access to a (reusable) common random string (CRS). We further show that constant-round uniform protocols can be sparsified without a CRS under a plausible worst-case complexity-theoretic assumption that was used previously in the context of derandomization. All the above sparsification results preserve statistical zero-knowledge provided that this property holds against a cheating verifier. We further show that randomness sparsification can be applied to honest-verifier statistical zero-knowledge (HVSZK) proofs at the expense of increasing the communication from the prover by R - F bits, or, in the case of honest-verifier perfect zero-knowledge (HVPZK), by slowing down the simulation by a factor of 2^(R-F). Here F is a new measure of accessible bit complexity of an HVZK proof system that ranges from 0 to R, where a maximal grade of R is achieved when zero-knowledge holds against a "semi-malicious" verifier that maliciously selects its random tape and then plays honestly.
Week 1: an Overview of Circuit Complexity 1 Welcome 2
Topics in Circuit Complexity (CS354, Fall '11). Week 1: An Overview of Circuit Complexity. Lecture Notes for 9/27 and 9/29. Ryan Williams.

1 Welcome. The area of circuit complexity has a long history, starting in the 1940's. It is full of open problems and frontiers that seem insurmountable, yet the literature on circuit complexity is fairly large. There is much that we do know, although it is scattered across several textbooks and academic papers. I think now is a good time to look again at circuit complexity with fresh eyes, and try to see what can be done.

2 Preliminaries. An n-bit Boolean function has domain {0,1}^n and co-domain {0,1}. At a high level, the basic question asked in circuit complexity is: given a collection of "simple functions" and a target Boolean function f, how efficiently can f be computed (on all inputs) using the simple functions? Of course, efficiency can be measured in many ways. The most natural measure is that of the "size" of computation: how many copies of these simple functions are necessary to compute f? Let B be a set of Boolean functions, which we call a basis set. The fan-in of a function g ∈ B is the number of inputs that g takes. (Typical choices are fan-in 2, or unbounded fan-in, meaning that g can take any number of inputs.) We define a circuit C with n inputs and size s over a basis B as follows. C consists of a directed acyclic graph (DAG) of s + n + 2 nodes, with n sources and one sink (the s-th node in some fixed topological order on the nodes). …
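As a concrete companion to the definitions in this preview, here is a minimal Python sketch of a fan-in-2 circuit over the basis {AND, OR, NOT}, stored as a DAG in topological order; the representation (gates as tuples indexing earlier nodes) is an illustrative assumption, not the notes' own formalism:

# A circuit as a list of gates in topological order; gate = (op, input indices),
# where indices refer to earlier nodes and the first n nodes are the inputs.
def eval_circuit(n, gates, x):
    values = list(x)                          # values of the n input nodes
    for op, ins in gates:
        if op == "NOT":
            values.append(1 - values[ins[0]])
        elif op == "AND":
            values.append(values[ins[0]] & values[ins[1]])
        elif op == "OR":
            values.append(values[ins[0]] | values[ins[1]])
        else:
            raise ValueError(f"op {op!r} not in the basis")
    return values[-1]                         # the sink node is the output

# XOR of two bits built from this basis: (x0 OR x1) AND NOT(x0 AND x1).
gates = [("OR", (0, 1)), ("AND", (0, 1)), ("NOT", (3,)), ("AND", (2, 4))]
print([eval_circuit(2, gates, (a, b)) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]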
The Complexity Zoo
The Complexity Zoo. Scott Aaronson, www.ScottAaronson.com. LaTeX translation by Chris Bourke, [email protected]. 417 classes and counting.

Contents: 1 About This Document; 2 Introductory Essay (2.1 Recommended Further Reading; 2.2 Other Theory Compendia; 2.3 Errors?); 3 Pronunciation Guide; 4 Complexity Classes; 5 Special Zoo Exhibit: Classes of Quantum States and Probability Distributions; 6 Acknowledgements; 7 Bibliography.

1 About This Document. What is this? Well, it's a PDF version of the website www.ComplexityZoo.com typeset in LaTeX using the complexity package. Well, what's that? The original Complexity Zoo is a website created by Scott Aaronson which contains a (more or less) comprehensive list of complexity classes studied in the area of theoretical computer science known as Computational Complexity. I took on the (mostly painless, thank god for regular expressions) task of translating the Zoo's HTML code to LaTeX for two reasons. First, as a regular Zoo patron, I thought, "what better way to honor such an endeavor than to spruce up the cages a bit and typeset them all in beautiful LaTeX." Second, I thought it would be a perfect project to develop complexity, a LaTeX package I've created that defines commands to typeset (almost) all of the complexity classes you'll find here (along with some handy options that allow you to conveniently change the fonts with a single option parameter). To get the package, visit my own home page at http://www.cse.unl.edu/~cbourke/.
Lecture 9: Interactive Proof Systems/Protocols
CS 282A/MATH 209A: Foundations of Cryptography. Prof. Rafail Ostrovsky. Lecture 9, lecture date: March 7-9, 2005. Scribe: S. Bhattacharyya, R. Deak, P. Mirzadeh.

1 Interactive Proof Systems/Protocols

1.1 Introduction. The traditional mathematical notion of a proof is a simple passive protocol in which a prover P outputs a complete proof to a verifier V, who decides on its validity. The interaction in this traditional sense is minimal and one-way, prover → verifier. The observation has been made that allowing the verifier to interact with the prover can have advantages, for example proving the assertion faster or proving more expressive languages. This extension allows for the idea of interactive proof systems (protocols). The general framework of the interactive proof system (protocol) involves a prover P with an exponential amount of time (computationally unbounded) and a verifier V with a polynomial amount of time. Both P and V exchange multiple messages (challenges and responses), usually dependent upon outcomes of fair coin tosses which they may or may not share. It is easy to see that since V is a poly-time machine (PPT), only a polynomial number of messages may be exchanged between the two. P's objective is to convince (prove to) the verifier the truth of an assertion, e.g., claimed knowledge of a proof that x ∈ L. V either accepts or rejects the interaction with P.

1.2 Definition of Interactive Proof Systems. An interactive proof system for a language L is a protocol PV for communication between a computationally unbounded (exponential-time) machine P and a probabilistic poly-time (PPT) machine V such that the protocol satisfies the properties of completeness and soundness. …
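The preview breaks off before the two conditions are stated; for readability, a standard formulation of completeness and soundness with the conventional 2/3 and 1/3 thresholds (this completion is not quoted from the lecture notes themselves) is:

% Standard completeness/soundness conditions for an interactive proof (P, V) for L
% (conventional thresholds; not quoted from these notes).
\begin{itemize}
  \item Completeness: $x \in L \;\Rightarrow\; \Pr[\langle P, V \rangle(x) \text{ accepts}] \ge \tfrac{2}{3}$.
  \item Soundness: $x \notin L \;\Rightarrow\; \forall P^*,\ \Pr[\langle P^*, V \rangle(x) \text{ accepts}] \le \tfrac{1}{3}$.
\end{itemize}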
CS286.2 Lectures 5-6: Introduction to Hamiltonian Complexity, QMA-Completeness of the Local Hamiltonian Problem
CS286.2 Lectures 5-6: Introduction to Hamiltonian Complexity, QMA-completeness of the Local Hamiltonian problem. Scribe: Jenish C. Mehta.

The Complexity Class BQP. The complexity class BQP is the quantum analog of the class BPP. It consists of all languages that can be decided in quantum polynomial time. More formally,

Definition 1. A language L ∈ BQP if there exists a classical polynomial-time algorithm A that maps inputs x ∈ {0,1}* to quantum circuits C_x on n = poly(|x|) qubits, where the circuit is considered a sequence of unitary operators each on 2 qubits, i.e. C_x = U_T U_{T-1} ... U_1 where each U_i ∈ L(C² ⊗ C²), such that:
i. Completeness: x ∈ L ⇒ Pr(C_x accepts |0^n⟩) ≥ 2/3
ii. Soundness: x ∉ L ⇒ Pr(C_x accepts |0^n⟩) ≤ 1/3

We say that the circuit "C_x accepts |ψ⟩" if the first output qubit measured in C_x|ψ⟩ is 0. More specifically, letting P_1^{|0⟩} = |0⟩⟨0|_1 be the projection of the first qubit on state |0⟩,
Pr(C_x accepts |ψ⟩) = ‖(P_1^{|0⟩} ⊗ I_{n-1}) C_x |ψ⟩‖₂².

The Complexity Class QMA. The complexity class QMA (or BQNP, as Kitaev originally named it) is the quantum analog of the class NP. More formally,

Definition 2. A language L ∈ QMA if there exists a classical polynomial-time algorithm A that maps inputs x ∈ {0,1}* to quantum circuits C_x on n + q = poly(|x|) qubits, such that:
i. Completeness: x ∈ L ⇒ ∃ |ψ⟩ ∈ C^{2^q}, ‖|ψ⟩‖₂ = 1, such that Pr(C_x accepts |0^n⟩ ⊗ |ψ⟩) ≥ 2/3
ii. …
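The preview cuts off in the middle of Definition 2; for the part that is shown, the following numpy sketch evaluates the acceptance probability Pr(C accepts |ψ⟩) = ‖(P_1^{|0⟩} ⊗ I) C |ψ⟩‖² on an illustrative 2-qubit circuit (the circuit and state are made up for the example):

import numpy as np

# Acceptance probability from Definition 1: project the first output qubit onto |0>.
def acceptance_probability(C, psi):
    n_qubits = int(np.log2(C.shape[0]))
    P1 = np.array([[1.0, 0.0], [0.0, 0.0]])              # |0><0| on the first qubit
    proj = np.kron(P1, np.eye(2 ** (n_qubits - 1)))      # P1 ⊗ I on the rest
    out = proj @ (C @ psi)
    return float(np.vdot(out, out).real)

# Illustrative circuit: Hadamard on the first qubit, identity on the second.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
C = np.kron(H, np.eye(2))
psi0 = np.zeros(4); psi0[0] = 1.0                        # the basis state |00>
print(acceptance_probability(C, psi0))                   # 0.5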
Taking the Quantum Leap with Machine Learning
Taking the Quantum Leap with Machine Learning. Zack Barnes, University of Washington. Bellevue College Mathematics and Physics Colloquium Series, [email protected], January 15, 2019.

Overview: 1 Introduction (What is Quantum Computing? What is Machine Learning? Quantum Power in Theory); 2 Quantum Algorithms (HHL, Quantum Recommendation); 3 Current Research (Quantum Supremacy(?)); 4 Conclusion.

What is Quantum Computing? "Quantum computing focuses on studying the problem of storing, processing and transferring information encoded in quantum mechanical systems." [Ciliberto, Carlo et al., 2018] The unit of quantum information is the qubit, or quantum binary integer.

What is Machine Learning? "Machine learning is the scientific study of algorithms and statistical models that computer systems use to progressively improve their performance on a specific task." (Wikipedia) Supervised: uses labeled examples to predict future events. Unsupervised: not classified or labeled.
Interactive Proofs 1: Pspace ⊆ IP
CS294: Probabilistically Checkable and Interactive Proofs, January 24, 2017. Interactive Proofs 1. Instructor: Alessandro Chiesa & Igor Shinkar. Scribe: Mariel Supina.

1 Pspace ⊆ IP. The first proof that Pspace ⊆ IP is due to Shamir, and a simplified proof was given by Shen. These notes discuss the simplified version in [She92], though most of the ideas are the same as those in [Sha92]. Notes by Katz also served as a reference [Kat11].

Theorem 1 ([Sha92]). Pspace ⊆ IP.

To show the inclusion of Pspace in IP, we need to begin with a Pspace-complete language.

1.1 True Quantified Boolean Formulas (tqbf). Definition 2. A quantified boolean formula (QBF) is an expression of the form

∀x_1 ∃x_2 ∀x_3 … ∃x_n φ(x_1, …, x_n),   (1)

where φ is a boolean formula on n variables. Note that since each variable in a QBF is quantified, a QBF is either true or false.

Definition 3. tqbf is the language of all boolean formulas φ such that if φ is a formula on n variables, then the corresponding QBF (1) is true.

Fact 4. tqbf is Pspace-complete (see Section 2 for a proof). Hence to show that Pspace ⊆ IP, it suffices to show that tqbf ∈ IP.

Claim 5. tqbf ∈ IP.

In order to prove Claim 5, we will need to present a complete and sound interactive protocol that decides whether a given QBF is true. In the sum-check protocol we used an arithmetization of a 3-CNF boolean formula. Likewise, here we will need a way to arithmetize a QBF.

1.2 Arithmetization of a QBF. We begin with a boolean formula φ, and we let n be the number of variables and m the number of clauses of φ. …
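The preview ends just as the arithmetization is introduced; for orientation, the standard construction (a sketch, not quoted from these notes) replaces boolean connectives by polynomials over a field:

% Standard arithmetization sketch (not quoted from the notes): map a boolean
% formula \phi to a polynomial \tilde{\phi} agreeing with it on {0,1} inputs.
\begin{align*}
  x_i &\mapsto x_i, &
  \neg\psi &\mapsto 1 - \tilde{\psi}, \\
  \psi_1 \wedge \psi_2 &\mapsto \tilde{\psi}_1 \cdot \tilde{\psi}_2, &
  \psi_1 \vee \psi_2 &\mapsto 1 - (1 - \tilde{\psi}_1)(1 - \tilde{\psi}_2).
\end{align*}
% Quantifiers then become products and "one-minus-products":
%   \forall x\,\psi \mapsto \tilde{\psi}|_{x=0} \cdot \tilde{\psi}|_{x=1},
%   \exists x\,\psi \mapsto 1 - (1 - \tilde{\psi}|_{x=0})(1 - \tilde{\psi}|_{x=1}).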
Quantum Supremacy
Quantum Supremacy. Practical QS: perform some computational task on a well-controlled quantum device, which cannot be simulated in a reasonable time by the best-known classical algorithms and hardware. Theoretical QS: perform a computational task efficiently on a quantum device, and prove that the task cannot be efficiently classically simulated. Since proving this seems to be beyond the capabilities of our current civilization, we lower the standards for theoretical QS: one seeks to provide formal evidence that classical simulation is unlikely. For example, 3-SAT is NP-complete, so it cannot be efficiently classically solved unless P = NP. Theoretical QS (weakened): perform a computational task efficiently on a quantum device, and prove that the task cannot be efficiently classically simulated unless "the polynomial hierarchy collapses to the 3rd level."

A common feature of QS arguments is that they consider sampling problems rather than decision problems. They allow us to characterize the complexity of sampling measurements of quantum states. Which is more difficult: Task A, deciding if a circuit outputs 1 with probability at least 2/3 or at most 1/3; or Task B, sampling from the output of an n-qubit circuit in the computational basis? Sampling from distributions is generically more difficult than approximating observables, since we can use samples to estimate observables, but not the other way around. One can imagine quantum systems whose local observables are easy to classically compute, but for which sampling the full state is computationally complex. By moving from decision problems to sampling problems, we make the task of classical simulation much more difficult.
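A small numpy sketch of the one-way relation mentioned above: given samples from a circuit's output distribution, an observable such as Pr[first bit = 1] can be estimated by simple averaging (the distribution and sample count here are made up for illustration).

import numpy as np

# Samples from an assumed 2-bit output distribution; estimate Pr[first bit = 1].
rng = np.random.default_rng(0)
outcomes = np.array([0b00, 0b01, 0b10, 0b11])
p = np.array([0.5, 0.1, 0.3, 0.1])                    # illustrative distribution
samples = rng.choice(outcomes, size=10_000, p=p)      # "measurement" outcomes
estimate = np.mean((samples >> 1) & 1)                # empirical Pr[first bit = 1]
print(estimate, p[2] + p[3])                          # roughly 0.4 vs exact 0.4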