Proof Complexity and SAT Solving, by Sam Buss and Jakob Nordström



Handbook of Satisfiability, Armin Biere, Marijn Heule, Hans van Maaren and Toby Walsh (eds.), IOS Press, 2021. © 2021 Sam Buss and Jakob Nordström. All rights reserved.

Chapter 7. Proof Complexity and SAT Solving

Sam Buss and Jakob Nordström

7.1. Introduction

The satisfiability problem (SAT) — i.e., to determine whether a given formula in propositional logic has a satisfying assignment or not — is of central importance to both the theory of computer science and the practice of automatic theorem proving and proof search. Proof complexity — i.e., the study of the complexity of proofs and the difficulty of searching for proofs — joins the theoretical and practical aspects of satisfiability.

For theoretical computer science, SAT is the canonical NP-complete problem, even for conjunctive normal form (CNF) formulas [Coo71, Lev73]. In fact, SAT is very efficient at expressing problems in NP in that many of the standard NP-complete problems, including the question of whether a (nondeterministic) Turing machine halts within n steps, have very efficient, almost linear time, reductions to the satisfiability of CNF formulas.¹ A popular hypothesis in the computational complexity community is the Strong Exponential Time Hypothesis (SETH), which says that any algorithm for solving CNF SAT must have worst-case running time (roughly) 2^n on instances with n variables [IP01, CIP09]. This hypothesis has been widely studied in recent years, and has served as a basis for proving conditional hardness results for many other problems. In other words, CNF SAT serves as the canonical hard decision problem, and is frequently conjectured to require exponential time to solve.

In contrast, for practical theorem proving, CNF SAT is the core method for encoding and solving problems. On one hand, the expressiveness of CNF formulas means that a large variety of problems can be faithfully and straightforwardly translated into CNF SAT problems. On the other hand, the message that SAT is supposed to be hard to solve does not seem to have reached practitioners of SAT solving; instead, there have been enormous improvements in the performance of SAT algorithms over the last decades. Amazingly, state-of-the-art algorithms for deciding satisfiability — so-called SAT solvers — can routinely handle real-world instances involving hundreds of thousands or even millions of variables. It is a dramatic development that SAT solvers can often run in (close to) linear time!

Thus, theoreticians view SAT as being infeasible, while practitioners view it as being (often) feasible. There is no contradiction here. First, it is possible to construct tiny formulas with just a few hundred variables that are totally beyond reach for even the best of today's solvers. Conversely, the large instances which are solved by SAT solvers are based on problems that seem to be "easy" in some sense, although they are very large. However, we currently lack a good theoretical explanation for what makes these problems "easy".

¹Formally, these reductions run in quasilinear time, i.e., time at most n(log n)^k for some constant k. For these quasilinear time reductions of the Turing machine halting problem to CNF SAT, see [Sch78, PF79, Rob79, Rob91, Coo88].
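As a concrete illustration of the objects discussed above, the following minimal Python sketch (ours, not from the chapter; the formula EXAMPLE_CNF and all names are hypothetical) represents a CNF formula in the integer-literal style of the DIMACS format and decides satisfiability by exhaustive search. The loop over all 2^n assignments is precisely the naive worst-case behaviour that SETH conjectures cannot be substantially improved upon in general.

```python
from itertools import product

# A CNF formula as a list of clauses; each clause is a list of nonzero integers,
# where k stands for the variable x_k and -k for its negation (the convention
# used by the DIMACS input format of SAT solvers). This tiny formula is
# hypothetical, chosen only for illustration.
EXAMPLE_CNF = [[1, 2], [-1, 3], [-2, -3]]  # (x1 v x2) & (~x1 v x3) & (~x2 v ~x3)


def satisfies(assignment, cnf):
    """Return True if the 0/1 assignment (a dict indexed from 1) satisfies every clause."""
    return all(
        any(assignment[abs(lit)] == (1 if lit > 0 else 0) for lit in clause)
        for clause in cnf
    )


def brute_force_sat(cnf, n):
    """Try all 2^n assignments; this exhaustive search is the naive worst case
    that SETH conjectures cannot be beaten by more than a negligible amount."""
    for bits in product([0, 1], repeat=n):
        assignment = {i: b for i, b in enumerate(bits, start=1)}
        if satisfies(assignment, cnf):
            return assignment
    return None  # no satisfying assignment: the formula is unsatisfiable


print(brute_force_sat(EXAMPLE_CNF, 3))  # e.g. {1: 0, 2: 1, 3: 0}
```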
Most SAT solvers are general-purpose and written in a very generic way that does not seem to exploit special properties of the underlying problem. Nonetheless, although SAT solvers will sometimes fail miserably, they succeed much more frequently than might be expected. This raises the questions of how practical SAT solvers can perform so well on many large problems and of what distinguishes problems that can be solved by SAT solvers from problems that cannot.

The best current SAT solvers are based on conflict-driven clause learning (CDCL) [MS99, MMZ+01].² Some solvers also incorporate elements of algebraic reasoning (e.g., Gaussian elimination) and/or geometric reasoning (e.g., linear inequalities), or use algebraic or geometric methods as the foundation rather than CDCL. Another augmentation of CDCL that has attracted much interest is extended resolution (ER).

How can we analyze the power of such algorithms? Our best approach is to study the underlying methods of reasoning and what they are able or unable to do in principle. This leads to the study of proof systems such as resolution, extended resolution, Nullstellensatz, polynomial calculus, cutting planes, et cetera. Proof complexity, as initiated in modern form by [CR79, Rec75], studies these systems mostly from the viewpoint of the complexity of static, completed proofs. With a few exceptions (perhaps most notably automatability³ as defined in [BPR00]), research in proof complexity ignores the constructive, algorithmic aspects of SAT solvers. Nonetheless, proof complexity has turned out to be a very useful tool for studying practical SAT solvers, in particular because it is a way to obtain mathematically rigorous bounds on solver performance. Lower bounds on the complexity of proofs in a proof system show fundamental limitations on what one can hope to achieve using SAT solvers based on the corresponding method of reasoning. Conversely, upper bounds can be viewed as indications of what should be possible to achieve using a method of reasoning, if only proof search using this method could be implemented efficiently enough.

We want to stress, though, that the tight connections to proof complexity come with a price. Since proof complexity studies what reasoning can be achieved by a method in principle, ignoring the algorithmic challenge of actually implementing such reasoning, it becomes very hard to say anything meaningful about satisfiable formulas. In principle, it is very hard to rule out that a solver could simply zoom in on a satisfying assignment right away, and this will of course only take linear time. For unsatisfiable formulas, however, a (complete) solver will have to certify that there is no satisfying assignment, and for many SAT solving methods used in practice this is a computational task that is amenable to mathematical analysis. This article, therefore, will focus almost exclusively on unsatisfiable formulas. This is, admittedly, a severe limitation, but it is dictated by the limitations of our current mathematical knowledge. There has been some work on proof complexity of satisfiable formulas by reducing this to problems about unsatisfiable subformulas (e.g., in [AHI05]), but the literature here is very limited.

²A similar technique for constraint satisfaction problems (CSPs) was independently developed in [BS97].

³Automatability is sometimes also called automatizability.
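To make the resolution-based reasoning behind CDCL slightly more tangible, here is a small sketch (our own illustration, not code from the chapter) of the resolution rule that underlies clause learning: from clauses C OR x and D OR NOT x one may derive the resolvent C OR D, and deriving the empty clause refutes the formula.

```python
def resolve(clause_a, clause_b, pivot):
    """Resolve two clauses (sets of integer literals, DIMACS-style) on `pivot`:
    from C plus {pivot} and D plus {-pivot}, derive the resolvent C plus D.
    Returns None if the rule does not apply or if the resolvent is a tautology."""
    if pivot not in clause_a or -pivot not in clause_b:
        return None
    resolvent = (clause_a - {pivot}) | (clause_b - {-pivot})
    # A clause containing both a literal and its negation is trivially true.
    if any(-lit in resolvent for lit in resolvent):
        return None
    return resolvent


# From (x1 or x2) and (not x1 or x3), resolving on x1 yields (x2 or x3).
print(resolve({1, 2}, {-1, 3}, 1))   # -> {2, 3}

# Resolving the unit clauses (x1) and (not x1) yields the empty clause, which
# certifies unsatisfiability -- the end point of a resolution refutation.
print(resolve({1}, {-1}, 1))         # -> set()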
Having said this, we want to point out that the setting of unsatisfiable formulas is nevertheless very interesting, since in many real-world applications proving unsatisfiability is the objective, and conventional wisdom is that such instances are often the hardest ones.

This chapter is intended as an overview of the connection between SAT solving and proof complexity aimed at readers who wish to become more familiar with either (or both) of these areas. We focus on the proof systems underlying current approaches to SAT solving. Our goal is first to explain how SAT solvers correspond to proof systems and second to review some of the complexity results known for these proof systems. We will discuss resolution (corresponding to basic CDCL proof search), Nullstellensatz and polynomial calculus (corresponding to algebraic approaches such as Gröbner basis computations), cutting planes (corresponding to pseudo-Boolean solving), extended resolution (corresponding to the DRAT proof logging system used for CDCL solvers with pre-/inprocessing), and will also briefly touch on Frege systems and bounded-depth Frege systems.

We want to emphasize that we do not have space to cover more than a small part of the research being done in proof complexity. Some useful material for further reading can be found in the survey articles [BP98a, Seg07] and the recent book [Kra19]. Additionally, we would like to mention the authors' own surveys [Bus99, Bus12, Nor13]. The present chapter is adapted from and partially overlaps with the second author's survey [Nor15], but has been thoroughly rewritten and substantially expanded with new material.

7.1.1. Outline of This Survey Chapter

The rest of this chapter is organized as follows. Section 7.2 presents a quick review of preliminaries. We discuss the resolution proof system and describe the connection to CDCL SAT solvers in Section 7.3, and then give an overview of some of the proof complexity results known for resolution in Section 7.4. In Section 7.5 we consider the algebraic proof systems Nullstellensatz and polynomial calculus, and also briefly touch on algebraic SAT solving. In Section 7.6 we move on to the geometric proof system cutting planes and the connections to conflict-driven pseudo-Boolean solving, after which we give an overview of what is known in proof complexity about different flavours of the cutting planes proof system in Section 7.7. We review extended resolution and DRAT in Section 7.8, and then continue to Frege and extended Frege proof systems and bounded-depth Frege systems in Sections 7.9 and 7.10. Section 7.11 gives some concluding remarks.

7.2. Preliminaries

We use the notation [n] = {1, 2, ..., n} for n a positive integer. We write N = {0, 1, 2, 3, ...} to denote the set of all natural numbers and N+ = N \ {0} for the set of positive integers.

7.2.1. Propositional Logic

A Boolean variable x ranges over the values true and false; unless otherwise stated, we identify 1 with true and 0 with false.
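The 0/1 convention above is also what connects propositional logic to the algebraic proof systems of Section 7.5, since over {0, 1} the Boolean connectives can be expressed by polynomials. The following sketch (a hypothetical illustration of ours, not from the chapter) shows this arithmetization for negation, conjunction, and disjunction.

```python
# Under the 0/1 convention for false/true, Boolean connectives become arithmetic.
# This kind of translation is what lets algebraic proof systems such as
# Nullstellensatz and polynomial calculus treat clauses as polynomial equations.

def neg(x):          # logical not:  1 - x
    return 1 - x

def conj(x, y):      # logical and:  x * y
    return x * y

def disj(x, y):      # logical or:   x + y - x*y
    return x + y - x * y

# The clause (x1 or not x2) is satisfied exactly when disj(x1, neg(x2)) == 1,
# i.e., exactly when the polynomial 1 - disj(x1, neg(x2)) vanishes over {0, 1}.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, disj(x1, neg(x2)))
```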
Recommended publications
  • Circuit Complexity, Proof Complexity and Polynomial Identity Testing
    Electronic Colloquium on Computational Complexity, Report No. 52 (2014). Circuit Complexity, Proof Complexity and Polynomial Identity Testing. Joshua A. Grochow and Toniann Pitassi. April 12, 2014. Abstract: We introduce a new and very natural algebraic proof system, which has tight connections to (algebraic) circuit complexity. In particular, we show that any super-polynomial lower bound on any Boolean tautology in our proof system implies that the permanent does not have polynomial-size algebraic circuits (VNP ≠ VP). As a corollary to the proof, we also show that super-polynomial lower bounds on the number of lines in Polynomial Calculus proofs (as opposed to the usual measure of number of monomials) imply the Permanent versus Determinant Conjecture. Note that, prior to our work, there was no proof system for which lower bounds on an arbitrary tautology implied any computational lower bound. Our proof system helps clarify the relationships between previous algebraic proof systems, and begins to shed light on why proof complexity lower bounds for various proof systems have been so much harder than lower bounds on the corresponding circuit classes. In doing so, we highlight the importance of polynomial identity testing (PIT) for understanding proof complexity. More specifically, we introduce certain propositional axioms satisfied by any Boolean circuit computing PIT. (The existence of efficient proofs for our PIT axioms appears to be somewhere in between the major conjecture that PIT ∈ P and the known result that PIT ∈ P/poly.) We use these PIT axioms to shed light on AC^0[p]-Frege lower bounds, which have been open for nearly 30 years, with no satisfactory explanation as to their apparent difficulty.
  • Automating Cutting Planes Is NP-Hard
    Automating Cutting Planes is NP-Hard. Mika Göös (Stanford University), Sajin Koroth (Simon Fraser University), Ian Mertz (University of Toronto), Toniann Pitassi (University of Toronto & IAS). April 20, 2020 (arXiv:2004.08037 [cs.CC]). Abstract: We show that Cutting Planes (CP) proofs are hard to find: Given an unsatisfiable formula F, (1) it is NP-hard to find a CP refutation of F in time polynomial in the length of the shortest such refutation; and (2) unless Gap-Hitting-Set admits a nontrivial algorithm, one cannot find a tree-like CP refutation of F in time polynomial in the length of the shortest such refutation. The first result extends the recent breakthrough of Atserias and Müller (FOCS 2019) that established an analogous result for Resolution. Our proofs rely on two new lifting theorems: (1) dag-like lifting for gadgets with many output bits, and (2) tree-like lifting that simulates an r-round protocol with gadgets of query complexity O(log r) independent of input length.
  • Reflection Principles, Propositional Proof Systems, and Theories
    Reflection Principles, Propositional Proof Systems, and Theories. In memory of Gaisi Takeuti. Pavel Pudlák. July 30, 2020 (arXiv:2007.14835 [math.LO]). Abstract: The reflection principle is the statement that if a sentence is provable then it is true. Reflection principles have been studied for first-order theories, but they also play an important role in propositional proof complexity. In this paper we will revisit some results about the reflection principles for propositional proof systems using a finer scale of reflection principles. We will use the result that proving lower bounds on Resolution proofs is hard in Resolution. This appeared first in the recent article of Atserias and Müller [2] as a key lemma and was generalized and simplified in some spin-off papers [11, 13, 12]. We will also survey some results about arithmetical theories and proof systems associated with them. We will show a connection between a conjecture about the proof complexity of finite consistency statements and a statement about proof systems associated with a theory. 1 Introduction. This paper is essentially a survey of some well-known results in proof complexity supplemented with some observations. In most cases we will also sketch or give an idea of the proofs. Our aim is to focus on some interesting results rather than giving a complete account of known results. We presuppose knowledge of basic concepts and theorems in proof complexity. An excellent source is Krajíček's last book [19], where you can find the necessary definitions and read more about the results mentioned here.
  • The Complexity of Propositional Proofs
    The Bulletin of Symbolic Logic, Volume 13, Number 4, Dec. 2007. The Complexity of Propositional Proofs. Nathan Segerlind. Abstract: Propositional proof complexity is the study of the sizes of propositional proofs, and more generally, the resources necessary to certify propositional tautologies. Questions about proof sizes have connections with computational complexity, theories of arithmetic, and satisfiability algorithms. This article includes a broad survey of the field, and a technical exposition of some recently developed techniques for proving lower bounds on proof sizes. Contents: Part 1, a tour of propositional proof complexity (is there a way to prove every tautology with a short proof?; satisfiability algorithms and theories of arithmetic; a menagerie of Frege-like proof systems; reverse mathematics of propositional principles; feasible interpolation; further connections with satisfiability algorithms; beyond the Frege systems). Part 2, some lower bounds on refutation sizes (the size-width trade-off for resolution; the small restriction switching lemma; expansion clean-up and random 3-CNFs; resolution pseudowidth and very weak pigeonhole principles). Part 3, open problems, further reading, acknowledgments; appendix: notation; references. Received June 27, 2007. This survey article is based upon the author's Sacks Prize winning PhD dissertation; however, the emphasis here is on context. The vast majority of results discussed are not the author's, and several results from the author's dissertation are omitted. © 2007, Association for Symbolic Logic.
  • A Short History of Computational Complexity
    The Computational Complexity Column, by Lance Fortnow (NEC Laboratories America, 4 Independence Way, Princeton, NJ 08540, USA; [email protected]; http://www.neci.nj.nec.com/homepages/fortnow/beatcs). Every third year the Conference on Computational Complexity is held in Europe, and this summer the University of Aarhus (Denmark) will host the meeting July 7-10. More details at the conference web page http://www.computationalcomplexity.org. This month we present a historical view of computational complexity written by Steve Homer and myself. This is a preliminary version of a chapter to be included in an upcoming North-Holland Handbook of the History of Mathematical Logic edited by Dirk van Dalen, John Dawson and Aki Kanamori. A Short History of Computational Complexity. Lance Fortnow (NEC Research Institute, 4 Independence Way, Princeton, NJ 08540) and Steve Homer (Computer Science Department, Boston University, 111 Cummington Street, Boston, MA 02215). 1 Introduction. It all started with a machine. In 1936, Turing developed his theoretical computational model. He based his model on how he perceived mathematicians think. As digital computers were developed in the 40's and 50's, the Turing machine proved itself as the right theoretical model for computation. Quickly though we discovered that the basic Turing machine model fails to account for the amount of time or memory needed by a computer, a critical issue today but even more so in those early days of computing. The key idea to measure time and space as a function of the length of the input came in the early 1960's by Hartmanis and Stearns.
  • Some Hardness Escalation Results in Computational Complexity Theory Pritish Kamath
    Some Hardness Escalation Results in Computational Complexity Theory, by Pritish Kamath. B.Tech., Indian Institute of Technology Bombay (2012); S.M., Massachusetts Institute of Technology (2015). Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical Engineering & Computer Science at the Massachusetts Institute of Technology, February 2020. © Massachusetts Institute of Technology 2019. All rights reserved. Thesis supervisors: Ronitt Rubinfeld (Professor of Electrical Engineering and Computer Science, MIT) and Madhu Sudan (Gordon McKay Professor of Computer Science, Harvard University); accepted by Leslie A. Kolodziejski (Professor of Electrical Engineering and Computer Science, MIT; Chair, Department Committee on Graduate Students). Abstract: In this thesis, we prove new hardness escalation results in computational complexity theory: a phenomenon where hardness results against seemingly weak models of computation for any problem can be lifted, in a black box manner, to much stronger models of computation by considering a simple gadget composed version of the original problem. For any unsatisfiable CNF formula F that is hard to refute in the Resolution proof system, we show that a gadget-composed version of F is hard to refute in any proof system whose lines are computed by efficient communication protocols.
  • Ian Mertz: Curriculum Vitae (Theory Group, Department of Computer Science, University of Toronto)
    IAN MERTZ. Contact: www.cs.toronto.edu/∼mertz, [email protected]. Research: Theory Group, Department of Computer Science, University of Toronto. EDUCATION: University of Toronto, Toronto, ON (Winter 2018-), Ph.D. in Computer Science; advisor: Toniann Pitassi. University of Toronto, Toronto, ON (Fall 2016-Winter 2018), Master of Science in Computer Science; advisors: Toniann Pitassi, Stephen Cook. Rutgers University, Piscataway, NJ (Fall 2012-Spring 2016), B.S. in Computer Science, B.A. in Mathematics, B.A. in East Asian Language Studies (concentration in Japanese); summa cum laude, honors program, Dean's List. PUBLICATIONS: Lifting is as easy as 1,2,3 (Ian Mertz, Toniann Pitassi), in submission, STOC '21. Automating Cutting Planes is NP-hard (Mika Göös, Sajin Koroth, Ian Mertz, Toniann Pitassi), in Proc. 52nd ACM Symposium on Theory of Computing (STOC '20), pp. 68-77, 2020. Catalytic Approaches to the Tree Evaluation Problem (James Cook, Ian Mertz), in Proc. 52nd ACM Symposium on Theory of Computing (STOC '20), pp. 752-760, 2020. Short Proofs Are Hard to Find (Ian Mertz, Toniann Pitassi, Yuanhao Wei), in Proc. 46th International Colloquium on Automata, Languages and Programming (ICALP '19), Leibniz International Proceedings in Informatics (LIPIcs), pp. 1-16, 2019. Dual VP Classes (Eric Allender, Anna Gál, Ian Mertz), computational complexity, 25, pp. 1-43, 2016; an earlier version appeared in Proc. 40th International Symposium on Mathematical Foundations of Computer Science (MFCS '15), Lecture Notes in Computer Science, 9235, pp. 14-25, 2015.
  • Proof Complexity in Algebraic Systems and Bounded Depth Frege Systems with Modular Counting
    PROOF COMPLEXITY IN ALGEBRAIC SYSTEMS AND BOUNDED DEPTH FREGE SYSTEMS WITH MODULAR COUNTING. S. Buss, R. Impagliazzo, J. Krajíček, P. Pudlák, A. A. Razborov and J. Sgall. Abstract: We prove a lower bound of the form N^Ω(1) on the degree of polynomials in a Nullstellensatz refutation of the Count_q polynomials over Z_m, where q is a prime not dividing m. In addition, we give an explicit construction of a degree N^Ω(1) design for the Count_q principle over Z_m. As a corollary, using Beame et al. (1994) we obtain a lower bound of the form 2^(N^Ω(1)) for the number of formulas in a constant-depth Frege proof of the modular counting principle Count_q^N from instances of the counting principle Count_m^M. We discuss the polynomial calculus proof system and give a method of converting tree-like polynomial calculus derivations into low degree Nullstellensatz derivations. Further we show that a lower bound for proofs in a bounded depth Frege system in the language with the modular counting connective MOD_p follows from a lower bound on the degree of Nullstellensatz proofs with a constant number of levels of extension axioms, where the extension axioms comprise a formalization of the approximation method of Razborov (1987), Smolensky (1987) (in fact, these two proof systems are basically equivalent). Introduction: A propositional proof system is intuitively a system for establishing the validity of propositional tautologies in some fixed complete language. The formal definition of propositional proof system is that it is a polynomial time function f which maps strings over an alphabet Σ onto the set of propositional tautologies (Cook & Reckhow 1979).
  • Query-to-Communication Lifting for P^NP
    Query-to-Communication Lifting for P^NP. Mika Göös (Harvard University, Cambridge, MA, USA; [email protected]), Pritish Kamath (Massachusetts Institute of Technology, Cambridge, MA, USA; [email protected]), Toniann Pitassi (University of Toronto, Toronto, ON, Canada; [email protected]), and Thomas Watson (University of Memphis, Memphis, TN, USA; [email protected]). Abstract: We prove that the P^NP-type query complexity (alternatively, decision list width) of any boolean function f is quadratically related to the P^NP-type communication complexity of a lifted version of f. As an application, we show that a certain "product" lower bound method of Impagliazzo and Williams (CCC 2010) fails to capture P^NP communication complexity up to polynomial factors, which answers a question of Papakonstantinou, Scheder, and Song (CCC 2014). 1998 ACM Subject Classification: F.1.3 Complexity Measures and Classes. Keywords and phrases: Communication Complexity, Query Complexity, Lifting Theorem, P^NP. Digital Object Identifier: 10.4230/LIPIcs.CCC.2017.12. 1 Introduction. Broadly speaking, a query-to-communication lifting theorem (a.k.a. communication-to-query simulation theorem) translates, in a black-box fashion, lower bounds on some type of query complexity (a.k.a. decision tree complexity) [38, 6, 19] of a boolean function f : {0,1}^n → {0,1} into lower bounds on a corresponding type of communication complexity [23, 19, 27] of a two-party version of f. Table 1 lists several known results in this vein. In this work, we provide a lifting theorem for P^NP-type query/communication complexity. P^NP decision trees: Recall that a deterministic (i.e., P-type) decision tree computes an n-bit boolean function f by repeatedly querying, at unit cost, individual bits x_i ∈ {0,1} of the input x until the value f(x) is output at a leaf of the tree.
  • On Monotone Circuits with Local Oracles and Clique Lower Bounds
    On Monotone Circuits with Local Oracles and Clique Lower Bounds. Jan Krajíček (Faculty of Mathematics and Physics, Charles University in Prague) and Igor C. Oliveira. December 17, 2019 (arXiv:1704.06241 [cs.CC]). Abstract: We investigate monotone circuits with local oracles [K., 2016], i.e., circuits containing additional inputs y_i = y_i(x) that can perform unstructured computations on the input string x. Let µ ∈ [0, 1] be the locality of the circuit, a parameter that bounds the combined strength of the oracle functions y_i(x), and let U_{n,k}, V_{n,k} ⊆ {0,1}^m be the set of k-cliques and the set of complete (k-1)-partite graphs, respectively (similarly to [Razborov, 1985]). Our results can be informally stated as follows. (i) For an appropriate extension of depth-2 monotone circuits with local oracles, we show that the size of the smallest circuits separating U_{n,3} (triangles) and V_{n,3} (complete bipartite graphs) undergoes two phase transitions according to µ. (ii) For 5 ≤ k(n) ≤ n^(1/4), arbitrary depth, and µ ≤ 1/50, we prove that the monotone circuit size complexity of separating the sets U_{n,k} and V_{n,k} is n^(Θ(√k)), under a certain restrictive assumption on the local oracle gates. The second result, which concerns monotone circuits with restricted oracles, extends and provides a matching upper bound for the exponential lower bounds on the monotone circuit size complexity of k-clique obtained by Alon and Boppana (1987). 1 Introduction and motivation. We establish initial lower bounds on the power of monotone circuits with local oracles (monotone CLOs), an extension of monotone circuits introduced in [Kra16] motivated by problems in proof complexity.
  • Propositional Proof Complexity: Past, Present and Future
    Propositional Proof Complexity: Past, Present and Future. Paul Beame and Toniann Pitassi. Abstract: Proof complexity, the study of the lengths of proofs in propositional logic, is an area of study that is fundamentally connected both to major open questions of computational complexity theory and to practical properties of automated theorem provers. In the last decade there have been a number of significant advances in proof complexity lower bounds. Moreover, new connections between proof complexity and circuit complexity have been uncovered, and the interplay between these two areas has become quite rich. In addition, attempts to extend existing lower bounds to ever stronger systems of proof have spurred the introduction of new and interesting proof systems, adding both to the practical aspects of proof complexity as well as to a rich theory. This note attempts to survey these developments and to lay out some of the open problems in the area. Introduction: One of the most basic questions of logic is the following. Given a universally true statement (tautology), what is the length of the shortest proof of the statement in some standard axiomatic proof system? The propositional logic version of this question is particularly important in computer science, for both theorem proving and complexity theory. Important related algorithmic questions are: Is there an efficient algorithm that will produce a proof of any tautology? Is there an efficient algorithm to produce the shortest proof of any tautology? Such questions of theorem proving and
  • An Exponential Separation Between Regular and General Resolution
    Theory of Computing, Volume 3 (2007), pp. 81-102, http://theoryofcomputing.org. An Exponential Separation between Regular and General Resolution. Michael Alekhnovich, Jan Johannsen*, Toniann Pitassi†, Alasdair Urquhart‡. Received: August 15, 2006; published: May 1, 2007. Dedicated to the memory of Misha Alekhnovich. Abstract: This paper gives two distinct proofs of an exponential separation between regular resolution and unrestricted resolution. The previous best known separation between these systems was quasi-polynomial. ACM Classification: F.2.2, F.2.3. AMS Classification: 03F20, 68Q17. Key words and phrases: resolution, proof complexity, lower bounds. (*Supported by the DFG Emmy Noether-Programme grant No. Jo 291/2-1. †Supported by NSERC. ‡Supported by NSERC.) © 2007 Michael Alekhnovich, Jan Johannsen, Toniann Pitassi, and Alasdair Urquhart. DOI: 10.4086/toc.2007.v003a005. 1 Introduction. Propositional proof complexity is currently a very active area of research. In the realm of the theory of feasible proofs it plays a role analogous to the role played by Boolean circuit complexity in the theory of computational complexity. The motivation to study the complexity of propositional proof systems comes from two sides. First, it was shown in the seminal paper of Cook and Reckhow [10] that the existence of "strong" systems in which all tautologies have proofs of polynomial size is tightly connected to the NP vs.