A Complete Problem for Statistical Zero Knowledge
16 pages · PDF, 1020 KB
Recommended publications
On the Complexity of Numerical Analysis
Eric Allender (Rutgers, the State University of NJ, Department of Computer Science, Piscataway, NJ 08854-8019, USA), Peter Bürgisser (Paderborn University, Department of Mathematics, DE-33095 Paderborn, Germany), Johan Kjeldgaard-Pedersen (PA Consulting Group, Decision Sciences Practice, Tuborg Blvd. 5, DK 2900 Hellerup, Denmark), Peter Bro Miltersen (University of Aarhus, Department of Computer Science, IT-parken, DK 8200 Aarhus N, Denmark)

Abstract. We study two quite different approaches to understanding the complexity of fundamental problems in numerical analysis. We show that both hinge on the question of understanding the complexity of the following problem, which we call PosSLP: Given a division-free straight-line program producing an integer N, decide whether N > 0. We show that PosSLP lies in the counting hierarchy, and combining our results with work of Tiwari, we show that the Euclidean Traveling Salesman Problem lies in the counting hierarchy – the previous best upper bound for this important problem …

In Section 1.3 we discuss our main technical contributions: proving upper and lower bounds on the complexity of PosSLP. In Section 1.4 we present applications of our main result with respect to the Euclidean Traveling Salesman Problem and the Sum-of-Square-Roots problem.

1.1 Polynomial Time Over the Reals. The Blum-Shub-Smale model of computation over the reals provides a very well-studied complexity-theoretic setting in which to study the computational problems of numerical analysis.
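To make the PosSLP problem concrete, here is a small hypothetical sketch in Python (not from the paper): a division-free straight-line program is a sequence of steps, each a constant or a sum, difference, or product of earlier steps, and PosSLP asks whether the integer computed by the last step is positive. A short program can already produce an integer with an enormous number of bits, which is why direct evaluation is not an efficient decision procedure.

```python
def run_slp(instructions):
    """Evaluate a division-free straight-line program.  Each instruction is
    either ('const', c) or (op, i, j) with op in {'+', '-', '*'}, where i and j
    index the results of earlier instructions."""
    vals = []
    for ins in instructions:
        if ins[0] == 'const':
            vals.append(ins[1])
        else:
            op, i, j = ins
            a, b = vals[i], vals[j]
            vals.append(a + b if op == '+' else a - b if op == '-' else a * b)
    return vals[-1]

# Sixteen repeated squarings of 3 produce an integer with over 100,000 bits,
# so deciding N > 0 by full evaluation quickly becomes impractical as programs grow.
prog = [('const', 3)] + [('*', k, k) for k in range(16)] + [('-', 16, 0)]
N = run_slp(prog)
print(N > 0)   # the PosSLP question for this program
```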
The Limits of Post-Selection Generalization
Kobbi Nissim (Georgetown University), Adam Smith (Boston University), Thomas Steinke (IBM Research – Almaden), Uri Stemmer (Ben-Gurion University), Jonathan Ullman (Northeastern University)

Abstract. While statistics and machine learning offer numerous methods for ensuring generalization, these methods often fail in the presence of post selection – the common practice in which the choice of analysis depends on previous interactions with the same dataset. A recent line of work has introduced powerful, general-purpose algorithms that ensure a property called post hoc generalization (Cummings et al., COLT'16), which says that no person, when given the output of the algorithm, should be able to find any statistic for which the data differs significantly from the population it came from. In this work we show several limitations on the power of algorithms satisfying post hoc generalization. First, we show a tight lower bound on the error of any algorithm that satisfies post hoc generalization and answers adaptively chosen statistical queries, showing a strong barrier to progress in post-selection data analysis. Second, we show that post hoc generalization is not closed under composition, despite many examples of such algorithms exhibiting strong composition properties.

1 Introduction. Consider a dataset X consisting of n independent samples from some unknown population P. How can we ensure that the conclusions drawn from X generalize to the population P? Despite decades of research in statistics and machine learning on methods for ensuring generalization, there is an increased recognition that many scientific findings do not generalize, with some even declaring this to be a "statistical crisis in science" [14].
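The failure mode described above can be seen in a toy simulation (an illustrative sketch, not an experiment from the paper): every candidate statistic is pure noise with population mean 0.5, yet the statistic selected because it looks largest on the sample is biased upward relative to the population.

```python
import numpy as np

# d candidate statistics measured on one dataset of n samples; every column
# is an independent fair coin, so the population mean of each statistic is 0.5.
rng = np.random.default_rng(0)
n, d = 100, 1000
X = rng.integers(0, 2, size=(n, d)).astype(float)

# A query fixed *before* looking at the data generalizes fine.
print("pre-chosen query error:", abs(X[:, 0].mean() - 0.5))

# A query chosen *because* it looked extreme on this dataset does not:
# its empirical mean systematically overestimates the population mean.
best = int(np.argmax(X.mean(axis=0)))
print("post-selected query error:", X[:, best].mean() - 0.5)
```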
The Computational Complexity of Nash Equilibria in Concisely Represented Games∗
Grant R. Schoenebeck and Salil P. Vadhan (August 26, 2009)

Abstract. Games may be represented in many different ways, and different representations of games affect the complexity of problems associated with games, such as finding a Nash equilibrium. The traditional method of representing a game is to explicitly list all the payoffs, but this incurs an exponential blowup as the number of agents grows. We study two models of concisely represented games: circuit games, where the payoffs are computed by a given boolean circuit, and graph games, where each agent's payoff is a function of only the strategies played by its neighbors in a given graph. For these two models, we study the complexity of four questions: determining if a given strategy is a Nash equilibrium, finding a Nash equilibrium, determining if there exists a pure Nash equilibrium, and determining if there exists a Nash equilibrium in which the payoffs to a player meet some given guarantees. In many cases, we obtain tight results, showing that the problems are complete for various complexity classes.

1 Introduction. In recent years, there has been a surge of interest at the interface between computer science and game theory. On one hand, game theory and its notions of equilibria provide a rich framework for modeling the behavior of selfish agents in the kinds of distributed or networked environments that often arise in computer science and offer mechanisms to achieve efficient and desirable global outcomes in spite of the selfish behavior.
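For the traditional explicit (payoff-matrix) representation mentioned above, the first of the four questions has a direct check; the sketch below is a hypothetical illustration, not code from the paper, and verifies a candidate mixed strategy profile of a two-player game by testing that no pure-strategy deviation improves either player's expected payoff.

```python
import numpy as np

def is_nash(payoff_A, payoff_B, x, y, tol=1e-9):
    """Check whether mixed strategies x (row player) and y (column player)
    form a Nash equilibrium of the bimatrix game (payoff_A, payoff_B):
    no pure-strategy deviation may beat the current expected payoff."""
    row_val = x @ payoff_A @ y
    col_val = x @ payoff_B @ y
    return ((payoff_A @ y).max() <= row_val + tol and
            (x @ payoff_B).max() <= col_val + tol)

# Matching pennies: the unique equilibrium is uniform mixing by both players.
A = np.array([[1, -1], [-1, 1]])
B = -A
print(is_nash(A, B, np.array([0.5, 0.5]), np.array([0.5, 0.5])))  # True
print(is_nash(A, B, np.array([1.0, 0.0]), np.array([0.5, 0.5])))  # False
```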
Interactions of Computational Complexity Theory and Mathematics
Avi Wigderson (October 22, 2017)

Abstract. [This paper is a (self-contained) chapter in a new book on computational complexity theory, called Mathematics and Computation, whose draft is available at https://www.math.ias.edu/avi/book]. We survey some concrete interaction areas between computational complexity theory and different fields of mathematics. We hope to demonstrate here that hardly any area of modern mathematics is untouched by the computational connection (which in some cases is completely natural and in others may seem quite surprising). In my view, the breadth, depth, beauty and novelty of these connections is inspiring, and speaks to a great potential of future interactions (which indeed, are quickly expanding). We aim for variety. We give short, simple descriptions (without proofs or much technical detail) of ideas, motivations, results and connections; this will hopefully entice the reader to dig deeper. Each vignette focuses only on a single topic within a large mathematical field. We cover the following:
• Number Theory: Primality testing
• Combinatorial Geometry: Point-line incidences
• Operator Theory: The Kadison-Singer problem
• Metric Geometry: Distortion of embeddings
• Group Theory: Generation and random generation
• Statistical Physics: Monte-Carlo Markov chains
• Analysis and Probability: Noise stability
• Lattice Theory: Short vectors
• Invariant Theory: Actions on matrix tuples

1 Introduction. The Theory of Computation (ToC) lays out the mathematical foundations of computer science. I am often asked if ToC is a branch of Mathematics, or of Computer Science. The answer is easy: it is clearly both (and in fact, much more). Ever since Turing's 1936 definition of the Turing machine, we have had a formal mathematical model of computation that enables the rigorous mathematical study of computational tasks, algorithms to solve them, and the resources these require.
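The first vignette topic, primality testing, has a canonical algorithmic illustration; the sketch below is a generic Miller-Rabin randomized primality test in Python, offered as background and not taken from the survey.

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin test: a randomized polynomial-time primality test.
    A composite n is declared 'prime' with probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness of compositeness
    return True

print(is_probable_prime(2**127 - 1))  # a Mersenne prime -> True
```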
Pseudodistributions That Beat All Pseudorandom Generators
Electronic Colloquium on Computational Complexity, Report No. 19 (2021)
Edward Pyne and Salil Vadhan (Harvard University), February 17, 2021

Abstract. A recent paper of Braverman, Cohen, and Garg (STOC 2018) introduced the concept of a pseudorandom pseudodistribution generator (PRPG), which amounts to a pseudorandom generator (PRG) whose outputs are accompanied with real coefficients that scale the acceptance probabilities of any potential distinguisher. They gave an explicit construction of PRPGs for ordered branching programs whose seed length has a better dependence on the error parameter ε than the classic PRG construction of Nisan (STOC 1990 and Combinatorica 1992). In this work, we give an explicit construction of PRPGs that achieve parameters that are impossible to achieve by a PRG. In particular, we construct a PRPG for ordered permutation branching programs of unbounded width with a single accept state that has seed length Õ(log^{3/2} n) for error parameter ε = 1/poly(n), where n is the input length. In contrast, recent work of Hoza et al. (ITCS 2021) shows that any PRG for this model requires seed length Ω(log^2 n) to achieve error ε = 1/poly(n). As a corollary, we obtain explicit PRPGs with seed length Õ(log^{3/2} n) and error ε = 1/poly(n) for ordered permutation branching programs of width w = poly(n) with an arbitrary number of accept states. Previously, seed length o(log^2 n) was only known when both the width and the reciprocal of the error are subpolynomial, i.e. …
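Paraphrasing the abstract in my own notation (which may differ from the paper's), the distinction between a PRG and a PRPG can be written as follows: a PRG must fool every test B in the class using a plain average over seeds, whereas a PRPG may attach a real coefficient ρ(u) to each output, and only the reweighted average must be close.

```latex
% PRG G : {0,1}^s -> {0,1}^n with error eps against a class of tests B:
\[
\Bigl|\; \mathop{\mathbb{E}}_{u \sim \{0,1\}^{s}}\bigl[B(G(u))\bigr]
  \;-\; \mathop{\mathbb{E}}_{x \sim \{0,1\}^{n}}\bigl[B(x)\bigr] \;\Bigr| \;\le\; \varepsilon .
\]
% PRPG: the generator also outputs real coefficients rho(u), and only the
% reweighted average has to be close, a strictly weaker requirement:
\[
\Bigl|\; \mathop{\mathbb{E}}_{u \sim \{0,1\}^{s}}\bigl[\rho(u)\, B(G(u))\bigr]
  \;-\; \mathop{\mathbb{E}}_{x \sim \{0,1\}^{n}}\bigl[B(x)\bigr] \;\Bigr| \;\le\; \varepsilon .
\]
```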
Hardness of Non-Interactive Differential Privacy from One-Way Functions
Lucas Kowalczyk*, Tal Malkin†, Jonathan Ullman‡, Daniel Wichs§ (May 30, 2018)

Abstract. A central challenge in differential privacy is to design computationally efficient non-interactive algorithms that can answer large numbers of statistical queries on a sensitive dataset. That is, we would like to design a differentially private algorithm that takes a dataset D ∈ X^n consisting of some small number of elements n from some large data universe X, and efficiently outputs a summary that allows a user to efficiently obtain an answer to any query in some large family Q. Ignoring computational constraints, this problem can be solved even when X and Q are exponentially large and n is just a small polynomial; however, all algorithms with remotely similar guarantees run in exponential time. There have been several results showing that, under the strong assumption of indistinguishability obfuscation (iO), no efficient differentially private algorithm exists when X and Q can be exponentially large. However, there are no strong separations between information-theoretic and computationally efficient differentially private algorithms under any standard complexity assumption. In this work we show that, if one-way functions exist, there is no general-purpose differentially private algorithm that works when X and Q are exponentially large, and n is an arbitrary polynomial. In fact, we show that this result holds even if X is just subexponentially large (assuming only polynomially-hard one-way functions). This result solves an open problem posed by Vadhan in his recent survey [Vad16].

*Columbia University Department of Computer Science. …
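As background on what answering statistical queries privately involves, here is a minimal sketch of the textbook Laplace-mechanism baseline in Python; the queries and data are hypothetical, this is not the construction studied in the paper, and its error grows linearly with the number of queries, which is precisely why more sophisticated non-interactive summaries are sought.

```python
import numpy as np

def private_query_answers(dataset, queries, epsilon):
    """Answer k statistical queries q : X -> [0,1] on an n-row dataset with the
    Laplace mechanism.  Each empirical average has sensitivity 1/n, and splitting
    the privacy budget evenly over the k queries (basic composition) means adding
    Laplace noise of scale k/(n*epsilon) to each answer."""
    n = len(dataset)
    rng = np.random.default_rng()
    scale = len(queries) / (n * epsilon)
    return [sum(q(row) for row in dataset) / n + rng.laplace(scale=scale)
            for q in queries]

# Hypothetical toy example: data universe X = {0,...,9}, two counting queries.
data = [1, 3, 3, 7, 9, 0, 2, 5]
queries = [lambda x: float(x >= 5), lambda x: float(x % 2 == 0)]
print(private_query_answers(data, queries, epsilon=1.0))
```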
Zero Knowledge and Soundness Are Symmetric∗
Shien Jin Ong and Salil Vadhan (Division of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts, USA; e-mail: {shienjin,salil}@eecs.harvard.edu), November 14, 2006

Abstract. We give a complexity-theoretic characterization of the class of problems in NP having zero-knowledge argument systems that is symmetric in its treatment of the zero knowledge and the soundness conditions. From this, we deduce that the class of problems in NP ∩ coNP having zero-knowledge arguments is closed under complement. Furthermore, we show that a problem in NP has a statistical zero-knowledge argument system if and only if its complement has a computational zero-knowledge proof system. What is novel about these results is that they are unconditional, i.e. do not rely on unproven complexity assumptions such as the existence of one-way functions. Our characterization of zero-knowledge arguments also enables us to prove a variety of other unconditional results about the class of problems in NP having zero-knowledge arguments, such as equivalences between honest-verifier and malicious-verifier zero knowledge, private coins and public coins, inefficient provers and efficient provers, and non-black-box simulation and black-box simulation. Previously, such results were only known unconditionally for zero-knowledge proof systems, or under the assumption that one-way functions exist for zero-knowledge argument systems.

Keywords: zero-knowledge argument systems, statistical zero knowledge, complexity classes, closure under complement, distributional one-way functions.

∗Both the authors were supported by NSF grant CNS-0430336 and ONR grant N00014-04-1-0478.
Quantum Computational Complexity
John Watrous (Institute for Quantum Computing and School of Computer Science, University of Waterloo, Waterloo, Ontario, Canada). arXiv:0804.3401v1 [quant-ph], 21 Apr 2008.

Article outline: I. Definition of the subject and its importance; II. Introduction; III. The quantum circuit model; IV. Polynomial-time quantum computations; V. Quantum proofs; VI. Quantum interactive proof systems; VII. Other selected notions in quantum complexity; VIII. Future directions; IX. References.

Glossary.
Quantum circuit. A quantum circuit is an acyclic network of quantum gates connected by wires: the gates represent quantum operations and the wires represent the qubits on which these operations are performed. The quantum circuit model is the most commonly studied model of quantum computation.
Quantum complexity class. A quantum complexity class is a collection of computational problems that are solvable by a chosen quantum computational model that obeys certain resource constraints. For example, BQP is the quantum complexity class of all decision problems that can be solved in polynomial time by a quantum computer.
Quantum proof. A quantum proof is a quantum state that plays the role of a witness or certificate to a quantum computer that runs a verification procedure. The quantum complexity class QMA is defined by this notion: it includes all decision problems whose yes-instances are efficiently verifiable by means of quantum proofs.
Quantum interactive proof system. A quantum interactive proof system is an interaction between a verifier and one or more provers, involving the processing and exchange of quantum information, whereby the provers attempt to convince the verifier of the answer to some computational problem.
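To make the glossary's circuit model slightly more concrete, here is a tiny state-vector simulation in Python (a generic illustration with standard gate matrices, not material from the article): a Hadamard gate followed by a CNOT prepares a Bell state on two qubits.

```python
import numpy as np

# Gates are unitary matrices acting on the state vector; wires are qubits.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                   # control = qubit 0, target = qubit 1

state = np.zeros(4)
state[0] = 1.0                                    # start in |00>
state = np.kron(H, np.eye(2)) @ state             # Hadamard on qubit 0
state = CNOT @ state                              # entangle -> Bell state
probs = np.abs(state) ** 2
print(probs)  # [0.5, 0, 0, 0.5]: measurement gives 00 or 11, each with prob 1/2
```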
Pseudorandomness and Combinatorial Constructions
Luca Trevisan, September 20, 2018 (arXiv:cs/0601100v1 [cs.CC], 23 Jan 2006)

Abstract. In combinatorics, the probabilistic method is a very powerful tool to prove the existence of combinatorial objects with interesting and useful properties. Explicit constructions of objects with such properties are often very difficult, or unknown. In computer science, probabilistic algorithms are sometimes simpler and more efficient than the best known deterministic algorithms for the same problem. Despite this evidence for the power of random choices, the computational theory of pseudorandomness shows that, under certain complexity-theoretic assumptions, every probabilistic algorithm has an efficient deterministic simulation and a large class of applications of the probabilistic method can be converted into explicit constructions. In this survey paper we describe connections between the conditional "derandomization" results of the computational theory of pseudorandomness and unconditional explicit constructions of certain combinatorial objects such as error-correcting codes and "randomness extractors."

1 Introduction. 1.1 The Probabilistic Method in Combinatorics. In extremal combinatorics, the probabilistic method is the following approach to proving existence of objects with certain properties: prove that a random object has the property with positive probability. This simple idea has been amazingly successful, and it gives the best known bounds for most problems in extremal combinatorics. The idea was introduced (and, later, greatly developed) by Paul Erdős [18], who originally applied it to the following question: define R(k, k) to be the minimum value n such that every graph on n vertices has either an independent set of size at least k or a clique of size at least k. It was known that R(k, k) is finite and that it is at most 4^k, and the question was to prove a lower bound.
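The lower-bound question the excerpt ends on is answered by Erdős's classic calculation, which the probabilistic method makes one line; the following sketch restates it (a standard argument, not quoted from the survey).

```latex
% Color the edges of K_n with two colors uniformly at random. The expected
% number of monochromatic k-cliques is
\[
\binom{n}{k}\, 2^{\,1-\binom{k}{2}} .
\]
% If this expectation is less than 1, some coloring has no monochromatic
% k-clique, so R(k,k) > n. Taking n = \lfloor 2^{k/2} \rfloor makes the
% expectation less than 1 for every k >= 3, giving R(k,k) > 2^{k/2}.
```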
Algorithmic Complexity in Computational Biology: Basics, Challenges and Limitations
Davide Cirillo#,1,*, Miguel Ponce-de-Leon#,1, Alfonso Valencia1,2
1 Barcelona Supercomputing Center (BSC), C/ Jordi Girona 29, 08034, Barcelona, Spain. 2 ICREA, Pg. Lluís Companys 23, 08010, Barcelona, Spain. # Contributed equally. * Corresponding author: [email protected] (Tel: +34 934137971)

Key points:
● Computational biologists and bioinformaticians are challenged with a range of complex algorithmic problems.
● The significance and implications of the complexity of the algorithms commonly used in computational biology are not always well understood by users and developers.
● A better understanding of complexity in computational biology algorithms can help in the implementation of efficient solutions, as well as in the design of the concomitant high-performance computing and heuristic requirements.

Keywords: Computational biology, Theory of computation, Complexity, High-performance computing, Heuristics

Description of the authors: Davide Cirillo is a postdoctoral researcher in the Computational Biology Group within the Life Sciences Department at the Barcelona Supercomputing Center (BSC). He received an MSc in Pharmaceutical Biotechnology from the University of Rome 'La Sapienza', Italy, in 2011, and a PhD in Biomedicine from Universitat Pompeu Fabra (UPF) and the Center for Genomic Regulation (CRG) in Barcelona, Spain, in 2016. His current research interests include Machine Learning, Artificial Intelligence, and Precision Medicine. Miguel Ponce de León is a postdoctoral researcher in the same group. He received an MSc in Bioinformatics from the Universidad de la República (UdelaR), Uruguay, in 2011, and a PhD in Biochemistry and Biomedicine from the Universidad Complutense de Madrid (UCM), Spain, in 2017.
Computational Complexity
Salil Vadhan (School of Engineering & Applied Sciences, Harvard University)
Citation: Vadhan, Salil P. 2011. Computational complexity. In Encyclopedia of Cryptography and Security, second edition, ed. Henk C.A. van Tilborg and Sushil Jajodia. New York: Springer. Published version: http://refworks.springer.com/mrw/index.php?id=2703. Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:33907951

Synonyms: Complexity theory. Related concepts and keywords: Exponential time; O-notation; One-way function; Polynomial time; Security (Computational, Unconditional); Sub-exponential time.

Definition. Computational complexity theory is the study of the minimal resources needed to solve computational problems. In particular, it aims to distinguish between those problems that possess efficient algorithms (the "easy" problems) and those that are inherently intractable (the "hard" problems). Thus computational complexity provides a foundation for most of modern cryptography, where the aim is to design cryptosystems that are "easy to use" but "hard to break". (See security (computational, unconditional).)

Theory. Running Time. The most basic resource studied in computational complexity is running time: the number of basic "steps" taken by an algorithm. (Other resources, such as space, i.e., memory usage, are also studied, but they will not be discussed here.) To make this precise, one needs to fix a model of computation (such as the Turing machine), but here it suffices to informally think of it as the number of "bit operations" when the input is given as a string of 0's and 1's.
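For the keywords "O-notation" and "Polynomial time" listed above, the standard definitions (stated in my own notation, not quoted from the encyclopedia entry) are:

```latex
% Asymptotic upper bound:
\[
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0 .
\]
% An algorithm runs in polynomial time if its running time T(n) on inputs of
% length n satisfies T(n) = O(n^c) for some constant c.
```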
Pseudorandomness for Regular Branching Programs Via Fourier Analysis
Electronic Colloquium on Computational Complexity, Report No. 86 (2013)
Omer Reingold∗, Thomas Steinke†, Salil Vadhan‡ (June 2013)

Abstract. We present an explicit pseudorandom generator for oblivious, read-once, permutation branching programs of constant width that can read their input bits in any order. The seed length is O(log^2 n), where n is the length of the branching program. The previous best seed length known for this model was n^{1/2+o(1)}, which follows as a special case of a generator due to Impagliazzo, Meka, and Zuckerman (FOCS 2012) (which gives a seed length of s^{1/2+o(1)} for arbitrary branching programs of size s). Our techniques also give seed length n^{1/2+o(1)} for general oblivious, read-once branching programs of width 2^{n^{o(1)}}, which is incomparable to the results of Impagliazzo et al. Our pseudorandom generator is similar to the one used by Gopalan et al. (FOCS 2012) for read-once CNFs, but the analysis is quite different; ours is based on Fourier analysis of branching programs. In particular, we show that an oblivious, read-once, regular branching program of width w has Fourier mass at most (2w)^{2k} at level k, independent of the length of the program.

∗Microsoft Research Silicon Valley, 1065 La Avenida, Mountain View, CA.
†School of Engineering and Applied Sciences, Harvard University, 33 Oxford Street, Cambridge MA. Work done in part while at Stanford University. Supported by NSF grant CCF-1116616 and the Lord Rutherford Memorial Research Fellowship.
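One common way to set up the Fourier analysis the abstract refers to (the notation here is mine and may differ in details from the paper's) treats the branching program as a matrix-valued function and measures the total norm of its degree-k Fourier coefficients:

```latex
% View the branching program as a matrix-valued function B : {0,1}^n -> R^{w x w}
% and expand it over the characters chi_S(x) = prod_{i in S} (-1)^{x_i}:
\[
B(x) \;=\; \sum_{S \subseteq [n]} \widehat{B}[S]\,\chi_S(x),
\qquad
\widehat{B}[S] \;=\; \mathop{\mathbb{E}}_{x \sim \{0,1\}^n}\bigl[\,B(x)\,\chi_S(x)\,\bigr].
\]
% The "Fourier mass at level k" is then the total norm of the order-k
% coefficients, \sum_{|S| = k} \|\widehat{B}[S]\|, and the quoted result bounds
% it by (2w)^{2k} for regular programs of width w, independent of n.
```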