Natural Proofs

Journal of Computer and System Sciences 55, 24-35 (1997), article no. SS971494

Natural Proofs

Alexander A. Razborov*
School of Mathematics, Institute for Advanced Study, Princeton, New Jersey 08540; and Steklov Mathematical Institute, Vavilova 42, 117966, GSP-1, Moscow, Russia

and

Steven Rudich†
Computer Science Department, Carnegie Mellon University, Pittsburgh, Pennsylvania 15212

Received December 1, 1994; revised December 2, 1996

* Supported by Grant 93-6-6 of the Alfred P. Sloan Foundation, by Grant 93-011-16015 of the Russian Foundation for Fundamental Research, and by an AMS-FSU grant.
† Partially supported by NSF Grant CCR-9119319.

We introduce the notion of natural proof. We argue that the known proofs of lower bounds on the complexity of explicit Boolean functions in nonmonotone models fall within our definition of natural. We show, based on a hardness assumption, that natural proofs cannot prove superpolynomial lower bounds for general circuits. Without the hardness assumption, we are able to show that they cannot prove exponential lower bounds (for general circuits) for the discrete logarithm problem. We show that the weaker class of AC^0-natural proofs, which is sufficient to prove the parity lower bounds of Furst, Saxe, and Sipser, Yao, and Håstad, is inherently incapable of proving the bounds of Razborov and Smolensky. We give some formal evidence that natural proofs are indeed natural by showing that every formal complexity measure which can prove superpolynomial lower bounds for a single function can do so for almost all functions, which is one of the two requirements of a natural proof in our sense. © 1997 Academic Press

1. INTRODUCTION

It is natural to ask what makes lower bound questions such as P =? PSPACE, P =? NP, and P =? NC so difficult to solve. A nontechnical reason for thinking they are difficult might be that some very bright people have tried and failed, but this is hardly satisfactory. A technical reason along the same lines would be provided by a reduction to these questions from another problem known to be really hard, such as the Riemann hypothesis. Perhaps the ultimate demonstration that P =? NP is a hard problem would be to show it to be independent of set theory (ZFC).

Another way to answer this question is to demonstrate that known methods are inherently too weak to solve problems such as P =? NP. This approach was taken in Baker, Gill, and Solovay [7], who used oracle separation results for many major complexity classes to argue that relativizing proof techniques could not solve these problems. Since relativizing proof techniques involving diagonalization and simulation were the only available tools at the time of their work, progress along known lines was ruled out.

Because of this, people began to study these problems from the vantage of Boolean circuit complexity, rather than machines. The new goal is to prove a stronger, nonuniform version of P ≠ NP, namely that SAT (or some other problem in NP) does not have polynomial-size circuits. Many new proof techniques have been discovered and successfully applied to prove lower bounds in circuit complexity, as exemplified by [11, 1, 40, 14, 27, 28, 3, 2, 37, 4, 29, 36, 8, 5, 23, 24, 15, 13, 17, 26, 6] among others, although the lower bounds have not come up near the level of P or even NC. These techniques are highly combinatorial, and in principle they are not subject to relativization. They exist in a much larger variety than their recursion-theoretic predecessors. Even so, in this paper we give evidence of a general limitation on their ability to resolve P =? NP and other hard problems.

Section 2 introduces and formalizes the notion of a natural proof. We argue that all lower bound proofs known to date against nonmonotone Boolean circuits are natural, or can be represented as natural. In Section 3 we present diverse examples of circuit lower bound proofs and show why they are natural in our sense. While Section 5 gives some general theoretical reasons why proofs against circuits tend to be natural, Section 4 gives evidence that "naturalizable" proof techniques cannot prove strong lower bounds on circuit size. In particular, we show modulo a widely believed cryptographic assumption that no natural proof can prove superpolynomial lower bounds for general circuits, and we show unconditionally that no natural proof can prove exponential lower bounds on the circuit size of the discrete logarithm problem.

Natural proofs form a hierarchy according to the complexity of the combinatorial property involved in the proof. We show without using any cryptographic assumption that AC^0-natural proofs, which are sufficient to prove the parity lower bounds of [11, 40, 14], are inherently incapable of proving the bounds for AC^0[q]-circuits of [29, 36, 8].

One application of natural proofs was given in [33]. It was shown there that in certain fragments of bounded arithmetic any proof of superpolynomial lower bounds for general circuits would naturalize, i.e., could be recast as a natural proof. Combined with the material contained in Section 4 of this paper, this leads to the independence of such lower bounds from these theories (assuming our cryptographic hardness assumption). See also [19, 34] for interpretations of this approach in terms of the propositional calculus, [10, 25] for further results in this direction, and [35] for an informal survey.

1.1. Notation and definitions. We denote by Fn the set of all Boolean functions in n variables. Most of the time, it will be convenient to think of fn ∈ Fn as a binary string of length 2^n, called the truth-table of fn. fn is a randomly chosen function from Fn, and in general, we reserve the bold face in our formulae for random objects.

The notation AC^k, NC^k is used in the standard sense to denote nonuniform classes. AC^0[m], TC^0, and P/poly are the classes of functions computable by polynomial-size bounded-depth circuits allowing MOD-m gates, bounded-depth circuits allowing threshold gates, and unbounded-depth circuits over a complete basis, respectively.

2. NATURAL PROOFS

2.1. Natural Combinatorial Properties

We start by defining what we mean by a "natural combinatorial property"; natural proofs will be those that use a natural combinatorial property.

Formally, by a combinatorial property of Boolean functions we will mean a set of Boolean functions {Cn ⊆ Fn | n ∈ ω}. Thus, a Boolean function fn will possess property Cn if and only if fn ∈ Cn. (Alternatively, we will sometimes find it convenient to use function notation: Cn(fn) = 1 if fn ∈ Cn, and Cn(fn) = 0 if fn ∉ Cn.) The combinatorial property Cn is natural if it contains a subset Cn* with the following two conditions:

Constructivity. The predicate fn ∈? Cn* is decidable in time polynomial in the truth-table size, i.e., in time 2^O(n).

Largeness. |Cn*| ≥ 2^(-O(n)) · |Fn|.

The property Cn is useful against P/poly if it satisfies:

Usefulness. The circuit size of any sequence of functions f1, f2, ..., fn, ..., where fn ∈ Cn, is superpolynomial; i.e., for any constant k, for sufficiently large n, the circuit size of fn is greater than n^k.

A proof that some function does not have polynomial-sized circuits is natural against P/poly if the proof contains, more or less explicitly, the definition of a natural combinatorial property Cn which is useful against P/poly.

Note that the definition of a natural proof, unlike that of a natural combinatorial property, is not precise. This is because, while the notion of a property being explicitly defined in a journal paper is perfectly clear to the working mathematician, it is a bit slippery to formalize. This lack of precision will not affect the precision of our general statements about natural proofs (see Section 4), because they will appear only in the form "there exists (no) natural proof ..." and should be understood as equivalent to "there exists (no) natural combinatorial property Cn ...."

The definitions of natural property and natural proof can be explained much less formally. First, a proof that some explicit function {gn} does not have polynomial-sized circuits must plainly identify some combinatorial property Cn of gn that is used in the proof. That is, the proof will show that all functions fn that have this property, including gn itself, are hard to compute. In other words, Cn is useful. If {gn} ∈ NP, then the proof concludes P ≠ NP. Our main contention, backed by evidence in the next section, is that current proof techniques would strongly tend to make this Cn large and constructive as defined above. (Or at least these two conditions would hold for some subproperty Cn* of Cn.)

In order to understand the definition of large more intuitively, let N = 2^n. Largeness requires that |Cn*|/|Fn| ≥ 1/N^k for some fixed k > 0; i.e., fn has a nonnegligible chance of having property Cn.

Constructivity is a more subtle notion to understand and justify. We take as our basic benchmark of "constructive" that fn ∈ Cn be decidable in time 2^O(n), i.e., polynomial as a function of 2^n. Now, this is exponential in the number n of variables in fn, and this makes our concept somewhat mysterious, especially since we are going to employ it for studying computations which are polynomial in n! The best justification we have is empirical: the vast majority of properties of Boolean functions or n-vertex graphs (etc.) that one encounters in combinatorics are at worst exponential-time decidable, and, as a matter of fact, known lower bounds proofs operate only with such properties.
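The constructivity benchmark (deciding fn ∈ Cn in time 2^O(n), i.e., polynomial in the truth-table size) can be made concrete with a toy example. The sketch below is illustrative and not from the paper: the property "fn depends on all n of its variables" is constructive (decided in O(n · 2^n) steps from the truth table) and large (a random function has it with overwhelming probability), though it is of course not useful against P/poly.

```python
def depends_on_all_vars(tt, n):
    """Decide a toy combinatorial property from a truth table: does f
    depend on every one of its n variables?  Runs in O(n * 2**n) steps,
    i.e. polynomial in the truth-table size N = 2**n ("constructive")."""
    assert len(tt) == 1 << n
    for i in range(n):
        # If flipping input bit i never changes the output, f ignores variable i.
        if all(tt[x] == tt[x ^ (1 << i)] for x in range(1 << n)):
            return False
    return True

# Parity depends on all its variables; a constant function on none of them.
n = 3
parity_tt = [bin(x).count("1") % 2 for x in range(1 << n)]
print(depends_on_all_vars(parity_tt, n))       # True
print(depends_on_all_vars([0] * (1 << n), n))  # False
```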
Recommended publications
  • Adaptive Garbled RAM from Laconic Oblivious Transfer
    Adaptive Garbled RAM from Laconic Oblivious Transfer Sanjam Garg*1, Rafail Ostrovsky**2, and Akshayaram Srinivasan1 1 University of California, Berkeley {sanjam,akshayaram}@berkeley.edu 2 UCLA [email protected] Abstract. We give a construction of an adaptive garbled RAM scheme. In the adaptive setting, a client first garbles a "large" persistent database which is stored on a server. Next, the client can provide garbling of multiple adaptively and adversarially chosen RAM programs that execute and modify the stored database arbitrarily. The garbled database and the garbled program should reveal nothing more than the running time and the output of the computation. Furthermore, the sizes of the garbled database and the garbled program grow only linearly in the size of the database and the running time of the executed program respectively (up to polylogarithmic factors). The security of our construction is based on the assumption that laconic oblivious transfer (Cho et al., CRYPTO 2017) exists. Previously, such adaptive garbled RAM constructions were only known using indistinguishability obfuscation or in the random oracle model. As an additional application, we note that this work yields the first constant round secure computation protocol for persistent RAM programs in the malicious setting from standard assumptions. Prior works did not support persistence in the malicious setting. 1 Introduction Over the years, garbling methods [Yao86,LP09,AIK04,BHR12b,App17] have been extremely influential and have engendered an enormous number of applications * Research supported in part from DARPA/ARL SAFEWARE Award W911NF15C0210, AFOSR Award FA9550-15-1-0274, AFOSR YIP Award, DARPA and SPAWAR under contract N66001-15-C-4065, a Hellman Award and research grants by the Okawa Foundation, Visa Inc., and Center for Long-Term Cybersecurity (CLTC, UC Berkeley).
  • Securely Obfuscating Re-Encryption
    Securely Obfuscating Re-Encryption Susan Hohenberger* Guy N. Rothblum† abhi shelat‡ Vinod Vaikuntanathan§ December 1, 2008 Abstract We present a positive obfuscation result for a traditional cryptographic functionality. This positive result stands in contrast to well-known impossibility results [3] for general obfuscation and recent impossibility and improbability [13] results for obfuscation of many cryptographic functionalities. Whereas other positive obfuscation results in the standard model apply to very simple point functions, our obfuscation result applies to the significantly more complex and widely-used re-encryption functionality. This functionality takes a ciphertext for message m encrypted under Alice's public key and transforms it into a ciphertext for the same message m under Bob's public key. To overcome impossibility results and to make our results meaningful for cryptographic functionalities, our scheme satisfies a definition of obfuscation which incorporates more security-aware provisions. * Johns Hopkins University, [email protected]. Research partially performed at IBM Zurich Research Laboratory, Switzerland. † MIT CSAIL, [email protected]. Research supported by NSF grant CNS-0430450 and NSF grant CFF-0635297. ‡ University of Virginia, [email protected]. Research performed at IBM Zurich Research Laboratory, Switzerland. § MIT CSAIL, [email protected] 1 Introduction A recent line of research in theoretical cryptography aims to understand whether it is possible to obfuscate programs so that a program's code becomes unintelligible while its functionality remains unchanged. A general method for obfuscating programs would lead to the solution of many open problems in cryptography. Unfortunately, Barak, Goldreich, Impagliazzo, Rudich, Sahai, Vadhan and Yang [3] show that for many notions of obfuscation, a general program obfuscator does not exist; i.e., they exhibit a class of circuits which cannot be obfuscated.
  • Prediction from Partial Information and Hindsight, with Application to Circuit Lower Bounds
    comput. complex. © Springer Nature Switzerland AG 2019 DOI 10.1007/s00037-019-00177-4 computational complexity PREDICTION FROM PARTIAL INFORMATION AND HINDSIGHT, WITH APPLICATION TO CIRCUIT LOWER BOUNDS Or Meir and Avi Wigderson Abstract. Consider a random sequence of n bits that has entropy at least n − k, where k ≪ n. A commonly used observation is that an average coordinate of this random sequence is close to being uniformly distributed, that is, the coordinate "looks random." In this work, we prove a stronger result that says, roughly, that the average coordinate looks random to an adversary that is allowed to query ≈ n/k other coordinates of the sequence, even if the adversary is non-deterministic. This implies corresponding results for decision trees and certificates for Boolean functions. As an application of this result, we prove a new result on depth-3 circuits, which recovers as a direct corollary the known lower bounds for the parity and majority functions, as well as a lower bound on sensitive functions due to Boppana (Inf Process Lett 63(5):257–261, 1997). An interesting feature of this proof is that it works in the framework of Karchmer and Wigderson (SIAM J Discrete Math 3(2):255–265, 1990), and, in particular, it is a "top-down" proof (Håstad et al. in Comput Complex 5(2):99–112, 1995). Finally, it yields a new kind of a random restriction lemma for non-product distributions, which may be of independent interest. Keywords. Certificate complexity, Circuit complexity, Circuit complexity lower bounds, Decision tree complexity, Information theoretic, Query complexity, Sensitivity Subject classification.
  • Statistical Queries and Statistical Algorithms: Foundations and Applications∗
    Statistical Queries and Statistical Algorithms: Foundations and Applications* Lev Reyzin Department of Mathematics, Statistics, and Computer Science University of Illinois at Chicago [email protected] Abstract We give a survey of the foundations of statistical queries and their many applications to other areas. We introduce the model, give the main definitions, and we explore the fundamental theory of statistical queries and how it connects to various notions of learnability. We also give a detailed summary of some of the applications of statistical queries to other areas, including to optimization, to evolvability, and to differential privacy. 1 Introduction Over 20 years ago, Kearns [1998] introduced statistical queries as a framework for designing machine learning algorithms that are tolerant to noise. The statistical query model restricts a learning algorithm to ask certain types of queries to an oracle that responds with approximately correct answers. This framework has proven useful, not only for designing noise-tolerant algorithms, but also for its connections to other noise models, for its ability to capture many of our current techniques, and for its explanatory power about the hardness of many important problems. Researchers have also found many connections between statistical queries and a variety of modern topics, including evolvability, differential privacy, and adaptive data analysis. Statistical queries are now both an important tool and remain a foundational topic with many important questions. The aim of this survey is to illustrate these connections and bring researchers to the forefront of our understanding of this important area. We begin by formally introducing the model and giving the main definitions (Section 2); we then move to exploring the fundamental theory of learning with statistical queries and how it connects to other notions of learnability (Section 3).
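    The oracle-based access model described in this abstract can be sketched in a few lines. This is a hypothetical toy, not code from the survey: a statistical query is a predicate q over labeled examples, and the oracle may return any value within a tolerance tau of the true expectation.

```python
import random

def sq_oracle(data, tau):
    """A toy statistical-query oracle (illustrative sketch).  Given labeled
    examples data = [(x, y), ...], it answers a query q with the empirical
    expectation E[q(x, y)] perturbed by noise of magnitude at most tau."""
    def answer(q):
        true_val = sum(q(x, y) for x, y in data) / len(data)
        return true_val + random.uniform(-tau, tau)  # any value within tau is legal
    return answer

# Example: estimate the agreement of feature 0 with the label.
data = [((0, 1), 1), ((1, 0), 0), ((1, 1), 1), ((0, 0), 0)]
oracle = sq_oracle(data, tau=0.05)
est = oracle(lambda x, y: int(x[0] == y))
assert abs(est - 0.5) <= 0.05  # the true expectation is 0.5 (2 of 4 examples)
```

    A learning algorithm in this model never sees individual examples, only such noisy expectations, which is what makes the framework robust to classification noise.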
  • Download This PDF File
    The Gödel Prize 2012: Call for Nominations Deadline: December 31, 2011 The Gödel Prize for outstanding papers in the area of theoretical computer science is sponsored jointly by the European Association for Theoretical Computer Science (EATCS) and the Association for Computing Machinery, Special Interest Group on Algorithms and Computation Theory (ACM-SIGACT). The award is presented annually, with the presentation taking place alternately at the International Colloquium on Automata, Languages, and Programming (ICALP) and the ACM Symposium on Theory of Computing (STOC). The 20th prize will be awarded at the 39th International Colloquium on Automata, Languages, and Programming to be held at the University of Warwick, UK, in July 2012. The Prize is named in honor of Kurt Gödel in recognition of his major contributions to mathematical logic and of his interest, discovered in a letter he wrote to John von Neumann shortly before von Neumann's death, in what has become the famous P versus NP question. The Prize includes an award of USD 5000. AWARD COMMITTEE: The winner of the Prize is selected by a committee of six members. The EATCS President and the SIGACT Chair each appoint three members to the committee, to serve staggered three-year terms. The committee is chaired alternately by representatives of EATCS and SIGACT. The 2012 Award Committee consists of Sanjeev Arora (Princeton University), Josep Díaz (Universitat Politècnica de Catalunya), Giuseppe Italiano (Università di Roma Tor Vergata), Mogens Nielsen (University of Aarhus), Daniel Spielman (Yale University), and Eli Upfal (Brown University). ELIGIBILITY: The rule for the 2011 Prize is given below and supersedes any different interpretation of the parametric rule to be found on the websites of both SIGACT and EATCS.
  • View This Volume's Front and Back Matter
    Computational Complexity Theory https://doi.org/10.1090/pcms/010 IAS/Park City Mathematics Series, Volume 10: Computational Complexity Theory. Steven Rudich and Avi Wigderson, Editors. American Mathematical Society / Institute for Advanced Study. The IAS/Park City Mathematics Institute runs mathematics education programs that bring together high school mathematics teachers, researchers in mathematics and mathematics education, undergraduate mathematics faculty, graduate students, and undergraduates to participate in distinct but overlapping programs of research and education. This volume contains the lecture notes from the Graduate Summer School program on Computational Complexity Theory held in Princeton in the summer of 2000. 2000 Mathematics Subject Classification. Primary 68Qxx; Secondary 03D15. Library of Congress Cataloging-in-Publication Data: Computational complexity theory / Steven Rudich, Avi Wigderson, editors. p. cm. -- (IAS/Park City mathematics series, ISSN 1079-5634; v. 10) "Volume contains the lecture notes from the Graduate Summer School program on Computational Complexity Theory held in Princeton in the summer of 2000"--T.p. verso. Includes bibliographical references. ISBN 0-8218-2872-X (hardcover : acid-free paper) 1. Computational complexity. I. Rudich, Steven.
  • ERIC W. ALLENDER Department of Computer Science
    ERIC W. ALLENDER Department of Computer Science, Rutgers University 110 Frelinghuysen Road Piscataway, New Jersey 08854-8019 (848) 445-7296 (office) [email protected] http://www.cs.rutgers.edu/~allender CURRENT POSITION: 2008 to present Distinguished Professor, Department of Computer Science, Rutgers University, New Brunswick, New Jersey. Member of the Graduate Faculty, Member of the DIMACS Center for Discrete Mathematics and Theoretical Computer Science (since 1989), Member of the Graduate Faculty of the Mathematics Department (since 1993). 1997 to 2008 Professor, Department of Computer Science, Rutgers University, New Brunswick, New Jersey. 1991 to 1997 Associate Professor, Department of Computer Science, Rutgers University, New Brunswick, New Jersey. 1985 to 1991 Assistant Professor, Department of Computer Science, Rutgers University, New Brunswick, New Jersey. VISITING POSITIONS: Aug.–Dec. 2018 Long-Term Visitor, Simons Institute for the Theory of Computing, University of California, Berkeley. Jan.–Mar. 2015 Visiting Scholar, University of California, San Diego. Sep.–Oct. 2011 Visiting Scholar, Institute for Theoretical Computer Science, Tsinghua University, Beijing, China. Jan.–Apr. 2010 Visiting Scholar, Department of Decision Sciences, University of South Africa, Pretoria. Oct.–Dec. 2009 Visiting Scholar, Department of Mathematics, University of Cape Town, South Africa. Mar.–June 1997 Gastprofessor, Wilhelm-Schickard-Institut für Informatik, Universität Tübingen, Germany. Dec. 96–Feb. 97 Visiting Scholar, Department of Theoretical Computer Science, Institute of Mathematical Sciences, Chennai (Madras), India. 1992–1993 Visiting Research Scientist, Department of Computer Science, Princeton University. May–July 1989 Gastdozent, Institut für Informatik, Universität Würzburg, West Germany. RESEARCH INTERESTS: My research interests lie in the area of computational complexity, with particular emphasis on parallel computation, circuit complexity, Kolmogorov complexity, and the structure of complexity classes.
  • Manuel Blum's CV
    Manuel Blum [email protected] MANUEL BLUM Bruce Nelson Professor of Computer Science Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 Telephone: Office: (412) 268-3742, Fax: (412) 268-5576 Home: (412) 687-8730, Mobile: (412) 596-4063 Email: [email protected] Personal Born: 26 April 1938 in Caracas, Venezuela. Citizenship: Venezuela and USA. Naturalized Citizen of USA, January 2000. Wife: Lenore Blum, Distinguished Career Professor of Computer Science, CMU. Son: Avrim Blum, Professor of Computer Science, CMU. Interests: Computational Complexity; Automata Theory; Algorithms; Inductive Inference; Cryptography; Program Result-Checking; Human Interactive Proofs. Employment History Research Assistant and Research Associate for Dr. Warren S. McCulloch, Research Laboratory of Electronics, MIT, 1960-1965. Assistant Professor, Department of Mathematics, MIT, 1966-68. Visiting Assistant Professor, Associate Professor, Professor, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, 1968-2001. Associate Chair for Computer Science, U.C. Berkeley, 1977-1980. Arthur J. Chick Professor of Computer Science, U.C. Berkeley, 1995-2001. Group in Logic and Methodology of Science, U.C. Berkeley, 1974-present. Visiting Professor of Computer Science, City University of Hong Kong, 1997-1999. Bruce Nelson Professor of Computer Science, Carnegie Mellon University, 2001-present. Education B.S., Electrical Engineering, MIT, 1959; M.S., Electrical Engineering, MIT, 1961. Ph.D., Mathematics, MIT, 1964, Professor Marvin Minsky, supervisor. Honors MIT Class VIB (Honor Sequence in EE), 1958-61. Sloan Foundation Fellowship, 1972-73. U.C. Berkeley Distinguished Teaching Award, 1977. Fellow of the IEEE "for fundamental contributions to the abstract theory of computational complexity," 1982.
  • P Versus NP Problem 1 P Versus NP Problem
    P versus NP problem List of unsolved problems in computer science: If the solution to a problem can be quickly verified by a computer, can the computer also solve that problem quickly? Millennium Prize Problems • P versus NP problem • Hodge conjecture • Poincaré conjecture (solved) • Riemann hypothesis • Yang–Mills existence and mass gap • Navier–Stokes existence and smoothness • Birch and Swinnerton-Dyer conjecture The P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer. It was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" and is considered by many to be the most important open problem in the field.[3] It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution. Figure caption: Diagram of complexity classes provided that P ≠ NP. The existence of problems within NP but outside both P and NP-complete, under that assumption, was established by Ladner's theorem.[2] The informal term quickly used above means the existence of an algorithm for the task that runs in polynomial time. The general class of questions for which some algorithm can provide an answer in polynomial time is called "class P" or just "P". For some questions, there is no known way to find an answer quickly, but if one is provided with information showing what the answer is, it may be possible to verify the answer quickly.
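    The verify-quickly versus solve-quickly asymmetry described above is easy to see in code. The following is a minimal illustrative sketch (a toy 3-variable CNF formula, not tied to any particular formulation): verification is a single linear scan over the clauses, while the naive solver enumerates all 2^n assignments.

```python
from itertools import product

# CNF formula as a list of clauses; a literal (i, True) means x_i, (i, False) means NOT x_i.
# This toy formula is (x0 or x1) and (not x0 or x2) and (not x1 or not x2).
cnf = [[(0, True), (1, True)], [(0, False), (2, True)], [(1, False), (2, False)]]

def verify(cnf, assignment):
    """Polynomial-time check: does `assignment` satisfy every clause?"""
    return all(any(assignment[i] == pos for i, pos in clause) for clause in cnf)

def solve(cnf, n):
    """Brute-force search over all 2**n assignments: exponential time."""
    for bits in product([False, True], repeat=n):
        if verify(cnf, bits):
            return bits
    return None

sol = solve(cnf, 3)
assert sol is not None and verify(cnf, sol)
```

    No known algorithm removes the exponential gap between these two routines for SAT in general; whether one exists is exactly the P versus NP question.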
  • Towards an Algebraic Natural Proofs Barrier Via Polynomial Identity Testing
    Electronic Colloquium on Computational Complexity, Report No. 9 (2017) Towards an algebraic natural proofs barrier via polynomial identity testing Joshua A. Grochow,* Mrinal Kumar,† Michael Saks,‡ and Shubhangi Saraf§ January 9, 2017 Abstract We observe that a certain kind of algebraic proof, which covers essentially all known algebraic circuit lower bounds to date, cannot be used to prove lower bounds against VP if and only if what we call succinct hitting sets exist for VP. This is analogous to the Razborov–Rudich natural proofs barrier in Boolean circuit complexity, in that we rule out a large class of lower bound techniques under a derandomization assumption. We also discuss connections between this algebraic natural proofs barrier, geometric complexity theory, and (algebraic) proof complexity. 1 Introduction The natural proofs barrier [51] showed that a large class of circuit-based proof techniques could not separate P from NP, assuming the existence of pseudo-random generators of a certain strength. In light of the recent advances in techniques for lower bounds on algebraic circuits [5–7, 11, 15, 20, 21, 23–42, 47, 52, 55], it is natural to wonder whether our current algebraic techniques could plausibly separate VP from VNP, or whether there is some barrier in this setting as well. People often hand-wave about an "algebraic natural proofs barrier," by analogy to Razborov–Rudich, but it has not been clear what this means precisely, and to date no such barrier is known in a purely algebraic setting (see below for a discussion of the related work by Aaronson and Drucker [2, 3] in a partially algebraic, partially Boolean setting).
  • Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower
    Succinct Hitting Sets and Barriers to Proving Algebraic Circuits Lower Bounds Michael A. Forbes* Amir Shpilka† Ben Lee Volk† Abstract We formalize a framework of algebraically natural lower bounds for algebraic circuits. Just as with the natural proofs notion of Razborov and Rudich [RR97] for boolean circuit lower bounds, our notion of algebraically natural lower bounds captures nearly all lower bound techniques known. However, unlike the boolean setting, there has been no concrete evidence demonstrating that this is a barrier to obtaining super-polynomial lower bounds for general algebraic circuits, as there is little understanding whether algebraic circuits are expressive enough to support "cryptography" secure against algebraic circuits. Following a similar result of Williams [Wil16] in the boolean setting, we show that the existence of an algebraic natural proofs barrier is equivalent to the existence of succinct derandomization of the polynomial identity testing problem. That is, whether the coefficient vectors of polylog(N)-degree polylog(N)-size circuits form a hitting set for the class of poly(N)-degree poly(N)-size circuits. Further, we give an explicit universal construction showing that if such a succinct hitting set exists, then our universal construction suffices. Further, we assess the existing literature constructing hitting sets for restricted classes of algebraic circuits and observe that none of them are succinct as given. Yet, we show how to modify some of these constructions to obtain succinct hitting sets. This constitutes the first evidence supporting the existence of an algebraic natural proofs barrier. Our framework is similar to the Geometric Complexity Theory (GCT) program of Mulmuley and Sohoni [MS01], except that here we emphasize constructiveness of the proofs while the GCT program emphasizes symmetry.
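    In its simplest, non-succinct form, the hitting-set idea behind polynomial identity testing can be illustrated for univariate polynomials (a toy sketch with hypothetical helper names, not from the paper): any d + 1 distinct evaluation points hit every nonzero polynomial of degree at most d, because such a polynomial has at most d roots, so identity testing reduces to evaluating on the set.

```python
def hitting_set_univariate(d):
    """A trivial hitting set for integer polynomials of degree <= d:
    any d + 1 distinct points work, since a nonzero polynomial of
    degree <= d has at most d roots."""
    return list(range(d + 1))

def is_identically_zero(poly, d):
    """Black-box identity test: poly is a function computing some
    integer polynomial of degree <= d; it is the zero polynomial
    iff it vanishes on the whole hitting set."""
    return all(poly(a) == 0 for a in hitting_set_univariate(d))

# (x-1)(x+1) - x^2 + 1 simplifies to 0, so the test accepts it:
assert is_identically_zero(lambda x: (x - 1) * (x + 1) - x * x + 1, 2)
# x(x-1) vanishes at 0 and 1 but not at 2, so it is caught:
assert not is_identically_zero(lambda x: x * (x - 1), 2)
```

    The succinctness question in the abstract asks for something far stronger: a hitting set whose points are themselves coefficient vectors of very small circuits.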
  • Program of the Thirty-Second Annual ACM Symposium on Theory of Computing
    Program of the Thirty-Second Annual ACM Symposium on Theory of Computing May 21-23, 2000, Portland, Oregon, USA Important Note There will be a workshop on "Challenges for Theoretical Computer Science" on Saturday, May 20, the day before the STOC conference starts. Details will be posted on the SIGACT web page as they become available. Sunday, May 21, 2000 Session 1A Chair: Frances Yao 8:30 am Extractors and pseudo-random generators with optimal seed length Russell Impagliazzo, Ronen Shaltiel, and Avi Wigderson 8:55 am Pseudo-random functions and factoring Moni Naor, Omer Reingold, and Alon Rosen 9:20 am Satisfiability of equations in free groups is in PSPACE Claudio Gutiérrez 9:45 am Setting 2 variables at a time yields a new lower bound for random 3-SAT Dimitris Achlioptas Session 1B Chair: Ding-Zhu Du 8:30 am A new algorithmic approach to the general Lovász Local Lemma with applications to scheduling and satisfiability problems Artur Czumaj and Christian Scheideler 8:55 am A deterministic polynomial-time algorithm for approximating mixed discriminant and mixed volume Leonid Gurvits and Alex Samorodnitsky 9:20 am Randomized metarounding Robert Carr and Santosh Vempala 9:45 am Isomorphism testing for embeddable graphs through definability Martin Grohe Coffee Break 10:10 am - 10:30 am Session 2A Chair: Frances Yao 10:30 am Circuit minimization problem Valentine Kabanets and Jin-Yi Cai 10:55 am On the efficiency of local decoding procedures for error-correcting codes Jonathan Katz and Luca Trevisan 11:20 am Statistical mechanics, three-dimensionality