Circuit Complexity Before the Dawn of the New Millennium

Eric Allender
Department of Computer Science, Rutgers University
P.O. Box 1179, Piscataway, NJ 08855-1179, USA
[email protected]
http://www.cs.rutgers.edu/~allender

Abstract. The 1980's saw rapid and exciting development of techniques for proving lower bounds in circuit complexity. This pace has slowed recently, and there has even been work indicating that quite different proof techniques must be employed to advance beyond the current frontier of circuit lower bounds. Although this has engendered pessimism in some quarters, there have in fact been many positive developments in the past few years showing that significant progress is possible on many fronts. This paper is a necessarily incomplete survey of the state of circuit complexity as we await the dawn of the new millennium. (Supported in part by NSF grant CCR-9509603.)

1 Superpolynomial Size Lower Bounds

Complexity theory long ago achieved its goal of presenting interesting and important computational problems that, although computable, nonetheless require such huge circuits to compute that they are computationally intractable. In fact, in Stockmeyer's thesis, the unusual step was taken of translating an asymptotic result into concrete terms:

Theorem 1. [Sto74] Any circuit that takes as input a formula in the language of WS1S with up to 616 symbols and produces as output a correct answer saying whether the formula is valid or not requires at least 10^123 gates.

To quote from [Sto87]: "Even if gates were the size of a proton and were connected by infinitely thin wires, the network would densely fill the known universe."

In the intervening years complexity theory has made some progress proving that other problems A require circuits of superpolynomial size (in symbols: A ∉ P/poly), but no such A has been shown to exist in nondeterministic exponential time (NTIME(2^(n^O(1)))), or even in the potentially larger class DTIME(2^(n^O(1)))^NP.

Where can we find sets that are not in P/poly? A straightforward diagonalization shows that for any superpolynomial time bound T, there is a problem in DSPACE(T(n)) \ P/poly. Recall that deterministic space complexity is roughly the same as alternating time complexity [CKS81]. It turns out that the full power of alternation is not needed to obtain sets outside of P/poly: two alternations suffice, as can be shown using techniques of [Kan82] (see also [BH92]). Combined with Toda's theorem [Tod91], we obtain the following.

Theorem 2. [Kan82, BH92, Tod91] Let T be a time-constructible superpolynomial function. Then
- NTIME(T(n))^NP ⊄ P/poly.
- DTIME(T(n))^PP ⊄ P/poly.

A further improvement was reported by Köbler and Watanabe, who showed that even ZPTIME(T(n))^NP is not contained in P/poly [KW]. Here, ZPTIME(T(n)) is zero-error probabilistic time T(n).

Is this the best that we can do? To the best of my knowledge, it is not known whether the classes PrTIME(2^(log^O(1) n)) (unbounded-error probabilistic quasipolynomial time) and DTIME(2^(n^O(1)))^(C=P) are contained in P/poly, even relative to an oracle. There are oracles relative to which DTIME(2^(n^O(1)))^NP has polynomial-size circuits [Hel86, Wil85], thus showing that relativizable techniques cannot be used to prove superpolynomial circuit size bounds for NTIME(2^(n^O(1))). Note, however, that nonrelativizing techniques have been used on closely related problems [BFNW93]. More to the point, as reported in [KW], Buhrman and Fortnow (and also Thierauf) have shown that the exponential-time version of the complexity class MA contains problems outside of P/poly, although this is false relative to some oracles. In particular, this shows that PrTIME(2^(n^O(1))) is not contained in P/poly. One can hope that further insights will lead to more progress on this front.
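The diagonalization mentioned above rests on a brute-force idea: within a generous space bound, one can exhaustively search for a truth table that no small circuit computes. The following toy sketch (my own illustration, not from the survey; the function name is hypothetical) runs this search for tiny parameters over the basis of AND, OR, and NOT gates plus constants:

```python
def hard_function(n, max_gates):
    """Brute-force core of the diagonalization placing a problem in
    DSPACE(T(n)) but outside P/poly: find the lexicographically least
    n-bit Boolean function (encoded as a truth table) that no circuit
    with at most `max_gates` AND/OR/NOT gates computes."""
    rows = 2 ** n            # number of rows in a truth table
    mask = (1 << rows) - 1   # truth tables are integers with `rows` bits
    # start from the constants and the input variables x_0 .. x_{n-1}
    funcs = {0, mask}
    for i in range(n):
        t = 0
        for row in range(rows):
            if (row >> i) & 1:
                t |= 1 << row
        funcs.add(t)
    # closure step: add everything computable with one more gate, repeated
    for _ in range(max_gates):
        new = set(funcs)
        for a in funcs:
            new.add(mask & ~a)        # NOT gate
            for b in funcs:
                new.add(a & b)        # AND gate
                new.add(a | b)        # OR gate
        funcs = new
    # diagonalize: return the first truth table missed by every small circuit
    for t in range(2 ** rows):
        if t not in funcs:
            return t
    return None               # every function on n bits has a small circuit

# With two inputs and a single gate, NOR (truth table 0001) already escapes:
print(hard_function(2, 1))    # → 1
```

For realistic input lengths this search is astronomically expensive, which is precisely why the resulting hard set lives in DSPACE(T(n)) rather than anywhere feasible.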
In the meantime, it has turned out to be very worthwhile to consider some important subclasses of P/poly.

2 Smaller Circuit Classes

We will focus our attention on five important circuit complexity classes. (Thus this survey will ignore the large body of beautiful work on the circuit complexity of larger subclasses of P and NC.)

1. AC^0 is the class of problems solvable by polynomial-size, constant-depth circuits of AND, OR, and NOT gates of unbounded fan-in. AC^0 corresponds to O(1)-time computation on a parallel computer, and it also consists exactly of the languages that can be specified in first-order logic [Imm89, BIS90]. AC^0 circuits are powerful enough to add and subtract n-bit numbers.

2. NC^1 is the class of problems solvable by circuits of AND, OR, and NOT gates of fan-in two and depth O(log n). NC^1 circuits capture exactly the circuit complexity required to evaluate a Boolean formula [Bus93] and to recognize a regular set [Bar89]. There are deep connections between circuit complexity and algebra, and NC^1 corresponds to computation over any non-solvable algebra [Bar89].

3. ACC^0 is the class of problems solvable by polynomial-size, constant-depth circuits of unbounded fan-in AND, OR, NOT, and MOD-m gates. A MOD-m gate takes inputs x_1, ..., x_n and determines if the number of 1's among these inputs is a multiple of m. To be more precise, AC^0(m) is the class of problems solvable by polynomial-size, constant-depth circuits of unbounded fan-in AND, OR, NOT, and MOD-m gates, and ACC^0 = ∪_m AC^0(m). In the algebraic theory mentioned above, ACC^0 corresponds to computation over any solvable algebra [BT88]. Thus in the algebraic theory, ACC^0 is the most natural subclass of NC^1.

4. TC^0 is the class of problems solvable by polynomial-size, constant-depth threshold circuits. TC^0 captures exactly the complexity of integer multiplication, division, and sorting [CSV84].
Also, TC^0 is a good complexity-theoretic model for "neural net" computation [PS88, PS89].

5. NC^0 is the class of problems solvable by circuits of AND, OR, and NOT gates of fan-in two and depth O(1). Note that each output bit can depend on only O(1) input bits in such a circuit. Thus any function in NC^0 is computed by depth-two AC^0 circuits, merely using DNF or CNF expansion.

NC^0 is obviously extremely limited; such circuits cannot even compute the logical OR of n input bits. One of the surprises of circuit complexity is that, in spite of its severe limitations, NC^0 is in some sense quite "close" to AC^0 in computational power.

Quite a few powerful techniques are known for proving lower bounds for AC^0 circuits; it is known that AC^0 is properly contained in ACC^0. It is not hard to see that ACC^0 ⊆ TC^0 ⊆ NC^1. As we shall see below, weak lower bounds have been proven for ACC^0 and TC^0, whereas almost nothing is known for NC^1.

3 AC^0

A dramatic series of papers in the 1980's [Ajt83, FSS84, Cai89, Yao85, Has87] gave us a proof that AC^0 circuits require exponential size even to determine if the number of 1's in the input is odd or even. See also the excellent tutorial [BS90]. The main tool in proving this and other lower bounds for AC^0 is Håstad's Switching Lemma, one version of which states that most of the "sub-functions" of any AC^0 function f are in NC^0. A sub-function of f is obtained by setting most of the n input bits to 0 or 1, leaving a function of the remaining unset bits. Such a sub-function is called a restriction of f. An interesting new proof of the Switching Lemma was presented by [Raz95] (see also [FL95, AAR]), and further extensions were presented by [Bea], the latter motivated in particular by the usefulness of the Switching Lemma as a tool in proving bounds on the length of propositional proofs.
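To get a feel for why restrictions are such an effective tool, here is a small simulation (my own illustration, not from the survey): hit a DNF of narrow terms with a random restriction that leaves each variable free with probability p, and watch how often the whole formula collapses to a constant, as the Switching Lemma predicts for small p.

```python
import random

def apply_restriction(terms, n, p, rng):
    """Apply a random restriction to a DNF given as a list of terms,
    each term a dict {variable: required bit} over variables 0..n-1.
    Each variable stays free with probability p and is otherwise fixed
    to 0 or 1 uniformly.  Returns ('const', 0 or 1) if the restricted
    function is constant, else ('dnf', simplified_terms)."""
    rho = {v: ('*' if rng.random() < p else rng.randint(0, 1))
           for v in range(n)}
    simplified = []
    for term in terms:
        live, killed = {}, False
        for var, bit in term.items():
            if rho[var] == '*':
                live[var] = bit      # literal still depends on a free variable
            elif rho[var] != bit:
                killed = True        # a fixed literal is falsified: term dies
                break
        if killed:
            continue
        if not live:
            return ('const', 1)      # some term became identically true
        simplified.append(live)
    if not simplified:
        return ('const', 0)          # every term was falsified
    return ('dnf', simplified)

# A DNF of twenty width-3 terms over 30 variables, restricted with p = 0.1:
rng = random.Random(0)
n = 30
terms = [{v: rng.randint(0, 1) for v in rng.sample(range(n), 3)}
         for _ in range(20)]
trials = 1000
collapsed = sum(apply_restriction(terms, n, 0.1, rng)[0] == 'const'
                for _ in range(trials))
print(collapsed / trials)   # most trials collapse to a constant function
```

The quantitative statement of the lemma is much sharper than this experiment, but the qualitative picture is the same: after a strong restriction, almost nothing of the formula's structure survives.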
Although the Switching Lemma is the most powerful tool we have for proving lower bounds for AC^0, it is not the only one. Lower bound arguments were presented in [Rad94, HJP93] for depth-three circuits, and a notion of deterministic restriction was presented in [CR96] that is useful for proving nonlinear size bounds.

It is important to note that, although the Switching Lemma tells us that any function f in AC^0 is "close to" functions computed by depth-two circuits (since most restrictions of f are computed in depth two), it also provides the tools to show that for all k, there are depth k+1 circuits of linear size that require exponential size to simulate with depth k circuits [Has87]. This is in sharp contrast to the class of circuits considered in the next section, where efficient depth reduction is possible.
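The depth-two expansion invoked above (any function depending on only O(1) input bits has a small DNF) is easy to make concrete. This sketch is my own illustration, not from the survey; each satisfying assignment corresponds to one AND term of the depth-two circuit:

```python
from itertools import product

def dnf_expansion(f, arity):
    """Expand a Boolean function of `arity` inputs into its DNF: an OR,
    over the satisfying assignments, of the AND of the matching literals.
    This is the depth-two circuit promised for any NC^0 output bit."""
    return [assignment
            for assignment in product([0, 1], repeat=arity)
            if f(*assignment)]

# Majority of three bits is an OR of four AND terms:
maj = lambda a, b, c: a + b + c >= 2
print(dnf_expansion(maj, 3))
# → [(0, 1, 1), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
```

The expansion has size exponential in the arity, which is harmless when the arity is O(1) -- exactly the NC^0 situation.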