New Methods in Parameterized Algorithms and Complexity

Total Pages: 16

File Type: PDF, Size: 1020 KB

New Methods in Parameterized Algorithms and Complexity

Daniel Lokshtanov

Dissertation for the degree of Philosophiae Doctor (PhD)
University of Bergen, Norway
April 2009

Acknowledgements

First of all, I would like to thank my supervisor Pinar Heggernes for her help and guidance, both with my research and with all other aspects of being a graduate student. This thesis would not have existed without her. Already in advance, I would like to thank my PhD committee, Maria Chudnovsky, Rolf Niedermeier and Sverre Storøy.

I also want to thank all my co-authors: Noga Alon, Oren Ben-Zwi, Hans Bodlaender, Michael Dom, Michael Fellows, Henning Fernau, Fedor V. Fomin, Petr A. Golovach, Fabrizio Grandoni, Pinar Heggernes, Danny Hermelin, Jan Kratochvíl, Elena Losievskaja, Federico Mancini, Rodica Mihai, Neeldhara Misra, Matthias Mnich, Ilan Newman, Charis Papadopoulos, Eelko Penninkx, Daniel Raible, Venkatesh Raman, Frances Rosamond, Saket Saurabh, Somnath Sikdar, Christian Sloper, Stefan Szeider, Jan Arne Telle, Dimitrios Thilikos, Carsten Thomassen and Yngve Villanger. A special thanks goes to Saket Saurabh, with whom I have shared many sleepless nights and cups of coffee preparing this work.

Thanks to my family, especially to my mother and father. In more than one way, I am here largely because of you. I would also like to thank my friends for filling my life with things other than discrete mathematics. You mean a lot to me, hugs to all of you and kisses to one in particular. Thanks also to the Algorithms group at the University of Bergen, you are a great lot. An extra thanks goes to my table-tennis opponents, even though this thesis exists despite your efforts to keep me away from it. Finally, thanks to Pasha for being furry and having whiskers.

Bergen, April 2009
Daniel Lokshtanov

Contents

1 Introduction
  1.1 Overview of the Thesis
  1.2 Notation and Definitions

I Overview of the Field

2 Algorithmic Techniques
  2.1 Bounded Search Trees
  2.2 Reduction Rules and Branching Recurrences
  2.3 Iterative Compression
  2.4 Dynamic Programming and Courcelle's Theorem
  2.5 Well Quasi Ordering
  2.6 Win/Wins
    2.6.1 Bidimensionality
  2.7 Color Coding
  2.8 Integer Linear Programming

3 Running Time Lower Bounds
  3.1 NP-hardness vs. ParaNP-hardness
  3.2 Hardness for W[1]
  3.3 Hardness for W[2]

4 Kernelization
  4.1 Reduction Rules
  4.2 Crown Reductions
  4.3 LP-based Kernelization

5 Kernelization Lower Bounds
  5.1 Composition Algorithms
  5.2 Polynomial Parameter Transformations

II New Methods

6 Algorithmic Method: Chromatic Coding
  6.1 Tournament Feedback Arc Set
    6.1.1 Color and Conquer
    6.1.2 Kernelization
    6.1.3 Probability of a Good Coloring
    6.1.4 Solving a Colored Instance
  6.2 Universal Coloring Families

7 Explicit Identification in W[1]-Hardness Reductions
  7.1 Hardness for Treewidth-Parameterizations
    7.1.1 Equitable Coloring
    7.1.2 Capacitated Dominating Set
  7.2 Hardness for Cliquewidth-Parameterizations
    7.2.1 Graph Coloring — Chromatic Number
    7.2.2 Edge Dominating Set
    7.2.3 Hamiltonian Cycle
    7.2.4 Conclusion
8 Kernelization Techniques
  8.1 Meta-Theorems for Kernelization
    8.1.1 Reduction Rules
    8.1.2 Decomposition Theorems
    8.1.3 Kernels
    8.1.4 Implications of Our Results
    8.1.5 Practical Considerations
  8.2 Turing Kernels
    8.2.1 The Rooted k-Leaf Out-Branching Problem
    8.2.2 Bounding Entry and Exit Points
    8.2.3 Bounding the Length of a Path

9 Kernelization Lower Bounds
  9.1 Polynomial Parameter Transformations with Turing Kernels
  9.2 Explicit Identification in Composition Algorithms
    9.2.1 Steiner Tree, Variants of Vertex Cover, and Bounded Rank Set Cover
    9.2.2 Unique Coverage
    9.2.3 Bounded Rank Disjoint Sets
    9.2.4 Domination and Transversals
    9.2.5 Numeric Problem: Small Subset Sum

10 Concluding Remarks and Open Problems

Chapter 1

Introduction

Parameterized Algorithms and Complexity is a natural way to cope with problems that are considered intractable according to the classical P versus NP dichotomy. In practice, large instances of NP-complete problems are solved exactly every day. The reason why this is possible is that problem instances that occur naturally often have a hidden structure. The basic idea of Parameterized Algorithms and Complexity is to extract and harness the power of this structure.

In Parameterized Algorithms and Complexity every problem instance comes with a relevant secondary measurement k, called a parameter. The parameter measures how difficult the instance is: the larger the parameter, the harder the instance. The hope is that, at least for small values of the parameter, the problem instances can be solved efficiently. This naturally leads to the definition of fixed parameter tractability. We say that a problem is fixed parameter tractable (FPT) if problem instances of size n can be solved in f(k) · n^{O(1)} time for some function f independent of n.

Over the last two decades, Parameterized Algorithms and Complexity has established itself as an important subfield of Theoretical Computer Science. Parameterized Algorithms has its roots in the Graph Minors project of Robertson and Seymour [122], but in fact such algorithms have been made since the early seventies [54, 108]. In the late eighties several FPT algorithms were discovered [61, 59], but despite considerable efforts no fixed parameter tractable algorithm was found for problems like Independent Set and Dominating Set. In the early nineties, Downey and Fellows [51, 50] developed a framework for showing infeasibility of FPT algorithms for parameterized problems under certain complexity-theoretic assumptions, and showed that the existence of FPT algorithms for Independent Set and Dominating Set is unlikely.

The possibility of showing lower as well as upper bounds sparked interest in the field, and considerable progress has been made in developing methods to show tractability and intractability of parameterized problems. Notable methods that have been developed include Bounded Search Trees, Kernelization, Color Coding and Iterative Compression. Over the last few years Kernelization has grown into a subfield of its own, due to a connection to polynomial time preprocessing and the techniques that have been developed over the last year for showing kernelization lower bounds [70, 20]. In this thesis we develop new methods for showing upper and lower bounds, both for Parameterized Algorithms and for Kernelization.
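To make the definition concrete, consider Vertex Cover parameterized by the solution size k: the folklore bounded search tree algorithm (cf. Section 2.1) decides it in O(2^k · n) time, which is of the form f(k) · n^{O(1)}. The sketch below is a minimal illustration of that algorithm, not an excerpt from the thesis; it is written in Python purely for concreteness.

```python
# Minimal sketch of a bounded search tree for Vertex Cover (illustration only).
# A graph is given as a collection of edges, each edge a frozenset of two vertices.
# Returns True iff the graph has a vertex cover of size at most k.
# Running time: O(2^k * |E|), i.e., f(k) * n^{O(1)} -- fixed parameter tractable.

def has_vertex_cover(edges, k):
    edges = set(edges)
    if not edges:
        return True                      # no edges left to cover
    if k == 0:
        return False                     # edges remain but the budget is spent
    u, v = tuple(next(iter(edges)))      # pick any uncovered edge {u, v}
    # Every vertex cover must contain u or v; branch on both choices.
    return (has_vertex_cover({e for e in edges if u not in e}, k - 1) or
            has_vertex_cover({e for e in edges if v not in e}, k - 1))

# A path on 4 vertices has a vertex cover of size 2 but not of size 1.
path = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]
assert has_vertex_cover(path, 2) and not has_vertex_cover(path, 1)
```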
1.1 Overview of the Thesis

In Section 1.2 we give the necessary notation and definitions for the thesis. In Part I we give an overview of existing techniques in Parameterized Algorithms and Complexity. This part of the thesis is organized as follows. First we cover techniques for algorithm design, then we review existing methods for showing running time lower bounds up to certain complexity-theoretic assumptions. Then we turn our attention to kernelization, or polynomial time preprocessing. In Chapter 4 we describe the most well-known techniques for proving kernelization results. In Chapter 5 we survey the recent methods developed for giving kernelization lower bounds.

In Part II we describe new methods in Parameterized Algorithms and Complexity that were developed as part of the author's PhD research. The part is organized similarly to Part I, that is, we first consider algorithmic upper and lower bounds, before turning our attention to upper and lower bounds for kernelization. In Chapter 6 we describe a new color-coding based scheme to give subexponential time algorithms for problems on dense structures. In Chapter 7 we illustrate how explicit identification makes it easier to construct hardness reductions, and use reductions of this kind to show a trade-off between expressive power and computational tractability. In Chapter 8 we give two results. Our first result is a meta-theorem for kernelization of graph problems restricted to graphs embeddable into surfaces of constant genus. Our second result is a demonstration that a relaxed notion of kernelization called Turing kernelization is indeed more powerful than kernelization in the traditional sense. Finally, in Chapter 9 we show that explicit identification is useful not only in hardness reductions, but also for showing kernelization lower bounds. Several such lower bounds are derived by applying the proposed method.

Parts of this thesis have been published as conference articles and some parts are excerpts from articles currently under peer review. Below we give an overview of the articles that were used as a basis for this work.

• Section 2.3 is based on the manuscript "Simpler Parameterized Algorithm for OCT", coauthored with S. Saurabh and S. Sikdar.

• Section 2.8 is based on an excerpt from the article "Graph Layout Problems Parameterized by Vertex Cover", coauthored with M. Fellows, N. Misra, F. A. Rosamond and S. Saurabh. Proceedings of ISAAC 2008, pages 294-305.

• Section 4.2 is based on an excerpt from the manuscript "Hypergraph Coloring meets Hypergraph Spanning Tree", coauthored with S. Saurabh.
Recommended publications
  • Fixed-Parameter Tractability Marko Samer and Stefan Szeider
    Handbook of Satisfiability, Armin Biere, Marijn Heule, Hans van Maaren and Toby Walsh, IOS Press, 2008. © 2008 Marko Samer and Stefan Szeider. All rights reserved.

    Chapter 13: Fixed-Parameter Tractability
    Marko Samer and Stefan Szeider

    13.1. Introduction

    The propositional satisfiability problem (SAT) is famous for being the first problem shown to be NP-complete: we cannot expect to find a polynomial-time algorithm for SAT. However, over the last decade, SAT solvers have become amazingly successful in solving formulas with thousands of variables that encode problems arising from various application areas. Theoretical performance guarantees, however, are far from explaining this empirically observed efficiency. Actually, theorists believe that the trivial 2^n time bound for solving SAT instances with n variables cannot be significantly improved, say to 2^{o(n)} (see the end of Section 13.2). This enormous discrepancy between theoretical performance guarantees and the empirically observed performance of SAT solvers can be explained by the presence of a certain "hidden structure" in instances that come from applications. This hidden structure greatly facilitates the propagation and simplification mechanisms of SAT solvers. Thus, for deriving theoretical performance guarantees that are closer to the actual performance of solvers one needs to take this hidden structure of instances into account. The literature contains several suggestions for making the vague term of a hidden structure explicit. For example, the hidden structure can be considered as the "tree-likeness" or "Horn-likeness" of the instance (below we will discuss how these notions can be made precise).
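    As one small, concrete illustration of how such a notion can be made precise (my own sketch, not part of the chapter): "Horn-likeness" can be measured by counting positive literals per clause, since a CNF formula is Horn exactly when every clause contains at most one positive literal.

```python
# Sketch: checking whether a CNF formula is Horn (illustration only, not from the chapter).
# A formula is a list of clauses; a clause is a list of literals; a literal is a nonzero
# integer, positive for a variable and negative for its negation.

def is_horn(formula):
    # Horn: every clause has at most one positive literal.
    return all(sum(1 for lit in clause if lit > 0) <= 1 for clause in formula)

# (x1 or not x2) and (not x1 or not x3) is Horn; (x1 or x2) is not.
assert is_horn([[1, -2], [-1, -3]])
assert not is_horn([[1, 2]])
```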
  • Introduction to Parameterized Complexity
    Introduction to Parameterized Complexity
    Ignasi Sau, CNRS, LIRMM, Montpellier, France
    UFMG Belo Horizonte, February 2018

    Outline of the talk:
    1 Why parameterized complexity?
    2 Basic definitions
    3 Kernelization
    4 Some techniques

    Some history of complexity: NP-completeness
    • Cook-Levin Theorem (1971): the SAT problem is NP-complete.
    • Karp (1972): list of 21 important NP-complete problems.
    • Nowadays, literally thousands of problems are known to be NP-hard: unless P = NP, they cannot be solved in polynomial time.
    • But what does it mean for a problem to be NP-hard? No algorithm solves all instances optimally in polynomial time.
  • Exploiting C-Closure in Kernelization Algorithms for Graph Problems
    Exploiting c-Closure in Kernelization Algorithms for Graph Problems

    Tomohiro Koana, Technische Universität Berlin, Algorithmics and Computational Complexity, Germany, [email protected]
    Christian Komusiewicz, Philipps-Universität Marburg, Fachbereich Mathematik und Informatik, Marburg, Germany, [email protected]
    Frank Sommer, Philipps-Universität Marburg, Fachbereich Mathematik und Informatik, Marburg, Germany, [email protected]

    Abstract. A graph is c-closed if every pair of vertices with at least c common neighbors is adjacent. The c-closure of a graph G is the smallest number such that G is c-closed. Fox et al. [ICALP '18] defined c-closure and investigated it in the context of clique enumeration. We show that c-closure can be applied in kernelization algorithms for several classic graph problems. We show that Dominating Set admits a kernel of size k^{O(c)}, that Induced Matching admits a kernel with O(c^7 k^8) vertices, and that Irredundant Set admits a kernel with O(c^{5/2} k^3) vertices. Our kernelization exploits the fact that c-closed graphs have polynomially-bounded Ramsey numbers, as we show.

    2012 ACM Subject Classification: Theory of computation → Parameterized complexity and exact algorithms; Theory of computation → Graph algorithms analysis

    Keywords and phrases: Fixed-parameter tractability, kernelization, c-closure, Dominating Set, Induced Matching, Irredundant Set, Ramsey numbers

    Funding: Frank Sommer: Supported by the Deutsche Forschungsgemeinschaft (DFG), project MAGZ, KO 3669/4-1.

    1 Introduction

    Parameterized complexity [9, 14] aims at understanding which properties of input data can be used in the design of efficient algorithms for problems that are hard in general. The properties of input data are encapsulated in the notion of a parameter, a numerical value that can be attributed to each input instance I.
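    For intuition, the c-closure of a small graph can be computed directly from this definition by examining the non-adjacent vertex pairs. The sketch below is my own illustration, not code from the paper.

```python
# Sketch: computing the c-closure of a graph from the definition (illustration only).
# The graph is an adjacency dictionary mapping each vertex to the set of its neighbors.
from itertools import combinations

def c_closure(adj):
    # G is c-closed if every pair of vertices with at least c common neighbors is
    # adjacent, so the c-closure is one more than the largest number of common
    # neighbors over a non-adjacent pair (and 1 if no such pair exists).
    best = 0
    for u, v in combinations(adj, 2):
        if v not in adj[u]:
            best = max(best, len(adj[u] & adj[v]))
    return best + 1

# A 4-cycle: both non-adjacent pairs have 2 common neighbors, so its c-closure is 3.
cycle4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
assert c_closure(cycle4) == 3
```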
  • Lower Bounds Based on the Exponential Time Hypothesis
    Lower Bounds Based on the Exponential Time Hypothesis

    Daniel Lokshtanov*, Dániel Marx†, Saket Saurabh‡

    Abstract. In this article we survey algorithmic lower bound results that have been obtained in the field of exact exponential time algorithms and parameterized complexity under certain assumptions on the running time of algorithms solving CNF-Sat, namely the Exponential Time Hypothesis (ETH) and the Strong Exponential Time Hypothesis (SETH).

    1 Introduction

    The theory of NP-hardness gives us strong evidence that certain fundamental combinatorial problems, such as 3SAT or 3-Coloring, are unlikely to be polynomial-time solvable. However, NP-hardness does not give us any information on what kind of super-polynomial running time is possible for NP-hard problems. For example, according to our current knowledge, the complexity assumption P ≠ NP does not rule out the possibility of having an n^{O(log n)} time algorithm for 3SAT or an n^{O(log log k)} time algorithm for k-Clique, but such incredibly efficient super-polynomial algorithms would still be highly surprising. Therefore, in order to obtain qualitative lower bounds that rule out such algorithms, we need a complexity assumption stronger than P ≠ NP.

    Impagliazzo, Paturi, and Zane [38, 37] introduced the Exponential Time Hypothesis (ETH) and the stronger variant, the Strong Exponential Time Hypothesis (SETH). These complexity assumptions state lower bounds on how fast satisfiability problems can be solved. These assumptions can be used as a basis for qualitative lower bounds for other concrete computational problems.

    * Dept. of Comp. Sc. and Engineering, University of California, USA. [email protected]
    † Institut für Informatik, Humboldt-Universität, Berlin, Germany.
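    For reference, the two hypotheses can be stated formally as follows; this is the standard formulation, using the s_k notation that also appears in the report excerpted further below.

```latex
% Standard formal statements of ETH and SETH (for reference; not quoted from the survey).
\[
  s_k \;=\; \inf\bigl\{\, \delta \;:\; \text{there is an } O^{*}\!\bigl(2^{\delta n}\bigr)\text{-time algorithm for } k\text{-SAT} \,\bigr\}
\]
\[
  \textbf{ETH:}\;\; s_3 > 0
  \qquad\qquad
  \textbf{SETH:}\;\; \lim_{k \to \infty} s_k = 1
\]
```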
  • 12. Exponential Time Hypothesis COMP6741: Parameterized and Exact Computation
    12. Exponential Time Hypothesis
    COMP6741: Parameterized and Exact Computation
    Serge Gaspers, Semester 2, 2017

    Contents
    1 SAT and k-SAT
    2 Subexponential time algorithms
    3 ETH and SETH
    4 Algorithmic lower bounds based on ETH
    5 Algorithmic lower bounds based on SETH
    6 Further Reading

    1 SAT and k-SAT

    SAT
    Input: A propositional formula F in conjunctive normal form (CNF)
    Parameter: n = |var(F)|, the number of variables in F
    Question: Is there an assignment to var(F) satisfying all clauses of F?

    k-SAT
    Input: A CNF formula F where each clause has length at most k
    Parameter: n = |var(F)|, the number of variables in F
    Question: Is there an assignment to var(F) satisfying all clauses of F?

    Example: (x1 ∨ x2) ∧ (¬x2 ∨ x3 ∨ ¬x4) ∧ (x1 ∨ x4) ∧ (¬x1 ∨ ¬x3 ∨ ¬x4)

    Algorithms for SAT
    • Brute-force: O*(2^n)
    • ... after > 50 years of SAT solving (SAT association, SAT conference, JSAT journal, annual SAT competitions, ...)
    • Fastest known algorithm for SAT: O*(2^{n·(1 − 1/O(log(m/n)))}), where m is the number of clauses [Calabro, Impagliazzo, Paturi, 2006] [Dantsin, Hirsch, 2009]
    • However: no O*(1.9999^n) time algorithm is known
    • Fastest known algorithms for 3-SAT: O*(1.3303^n) deterministic [Makino, Tamaki, Yamamoto, 2013] and O*(1.3071^n) randomized [Hertli, 2014]
    • Could it be that 3-SAT cannot be solved in 2^{o(n)} time?
    • Could it be that SAT cannot be solved in O*((2 − ε)^n) time for any ε > 0?

    2 Subexponential time algorithms

    NP-hard problems in subexponential time?
    • Are there any NP-hard problems that can be solved in 2^{o(n)} time?
    • Yes.
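    The brute-force O*(2^n) bound listed above corresponds to simply trying every assignment; the sketch below is my own illustration, not part of the lecture notes.

```python
# Sketch: brute-force CNF-SAT in O*(2^n) time (illustration only).
# Clauses are lists of literals; a literal is a nonzero integer (negative = negated).
from itertools import product

def brute_force_sat(clauses, n):
    for bits in product([False, True], repeat=n):        # all 2^n assignments
        assignment = {i + 1: bits[i] for i in range(n)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment                            # satisfying assignment found
    return None                                          # unsatisfiable

# The example formula from the notes above:
# (x1 v x2) ^ (~x2 v x3 v ~x4) ^ (x1 v x4) ^ (~x1 v ~x3 v ~x4)
clauses = [[1, 2], [-2, 3, -4], [1, 4], [-1, -3, -4]]
print(brute_force_sat(clauses, 4))
```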
  • On Miniaturized Problems in Parameterized Complexity Theory
    On Miniaturized Problems in Parameterized Complexity Theory

    Yijia Chen, Institut für Informatik, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany. [email protected]
    Jörg Flum, Abteilung für Mathematische Logik, Universität Freiburg, Eckerstr. 1, 79104 Freiburg, Germany. [email protected]

    Abstract. We introduce a general notion of miniaturization of a problem that comprises the different miniaturizations of concrete problems considered so far. We develop parts of the basic theory of miniaturizations. Using the appropriate logical formalism, we show that the miniaturization of a definable problem in W[t] lies in W[t], too. In particular, the miniaturization of the dominating set problem is in W[2]. Furthermore we investigate the relation between f(k) · n^{o(k)} time and subexponential time algorithms for the dominating set problem and for the clique problem.

    1. Introduction

    Parameterized complexity theory provides a framework for a refined complexity analysis of algorithmic problems that are intractable in general. Central to the theory is the notion of fixed-parameter tractability, which relaxes the classical notion of tractability, polynomial time computability, by admitting algorithms whose runtime is exponential, but only in terms of some parameter that is usually expected to be small. Let FPT denote the class of all fixed-parameter tractable problems. A well-known example of a problem in FPT is the vertex cover problem, the parameter being the size of the vertex cover we ask for. As a complexity theoretic counterpart, a theory of parameterized intractability has been developed. In classical complexity, the notion of NP-completeness is central to a nice and simple theory for intractable problems.
  • Exact Complexity and Satisfiability
    Exact Complexity and Satisfiability (Invited Talk)

    Russell Impagliazzo and Ramamohan Paturi
    Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093-0404, USA
    {russell,paturi}@cs.ucsd.edu

    All NP-complete problems are equivalent as far as polynomial time solvability is concerned. However, their exact complexities (worst-case complexity of algorithms that solve every instance correctly and exactly) differ widely. Starting with Bellman [1], Tarjan and Trojanowski [9], Karp [5], and Monien and Speckenmeyer [7], the design of improved exponential time algorithms for NP-complete problems has been a tremendously fruitful endeavor, and one that has recently been accelerating both in terms of the number of results and the increasing sophistication of algorithmic techniques. There is a vast variety of problems where progress has been made, e.g., k-SAT, k-Colorability, Maximum Independent Set, Hamiltonian Path, Chromatic Number, and Circuit SAT for limited classes of circuits. The "current champion" algorithms for these problems deploy a vast variety of algorithmic techniques, e.g., back-tracking search (with its many refinements and novel analysis techniques), divide-and-conquer, dynamic programming, randomized search, algebraic transforms, inclusion-exclusion, color coding, split and list, and algebraic sieving. In many ways, this is analogous to the diversity of approximation ratios and approximation algorithms known for different NP-complete problems. In view of this diversity, it is tempting to focus on the distinctions between problems rather than the interconnections between them. However, over the past two decades, there has been a wave of research showing that such connections do exist.
  • Parameterized Approximation of Dominating Set Problems
    Parameterized Approximation of Dominating Set Problems

    Rodney G. Downey (1), Michael R. Fellows (2), Catherine McCartin (3), and Frances Rosamond (2)
    1 Victoria University, Wellington, New Zealand, [email protected]
    2 The University of Newcastle, Callaghan, Australia, {michael.fellows,frances.rosamond}@newcastle.edu.au
    3 Massey University, Palmerston North, New Zealand, [email protected]

    Abstract. A problem open for many years is whether there is an FPT algorithm that, given a graph G and parameter k, either: (1) determines that G has no k-Dominating Set, or (2) produces a dominating set of size at most g(k), where g(k) is some fixed function of k. Such an outcome is termed an FPT approximation algorithm. We describe some results that begin to provide some answers. We show that there is no such FPT algorithm for g(k) of the form k + c (where c is a fixed constant, termed an additive FPT approximation), unless FPT = W[2]. We answer the analogous problem completely for the related Independent Dominating Set (IDS) problem, showing that IDS does not admit an FPT approximation algorithm, for any g(k), unless FPT = W[2].

    1 Introduction

    Most of the work in the area of parameterized complexity and algorithmics to date has focused on exact algorithms for decision problems. However, the theory of parameterized complexity, which derives from the contrast between time cost functions of the form f(k)·n^c (fixed-parameter tractability, FPT) and those of the form n^{g(k)}, where n is the overall input size and k is a relevant secondary parameter of the situation, can clearly be deployed in a variety of ways in analyzing computational complexity.
  • Fixed-Parameter Tractability and Parameterized Complexity, Applied to Problems from Computational Social Choice — Mathematical Programming Glossary Supplement —
    Fixed-Parameter Tractability and Parameterized Complexity, Applied to Problems From Computational Social Choice
    — Mathematical Programming Glossary Supplement —

    Claudia Lindner and Jörg Rothe
    Institut für Informatik, Heinrich-Heine-Universität Düsseldorf, 40225 Düsseldorf, Germany
    October 17, 2008

    Abstract. This supplement provides a brief introduction to the field of fixed-parameter tractability and parameterized complexity. Some basic notions are explained and some related results are presented, with a focus on problems arising in the field of computational social choice.

    1 Fixed-Parameter Tractability and Parameterized Complexity

    The study of fixed-parameter tractability and parameterized complexity has emerged as a new field within computational complexity theory in the late 1980s and the early 1990s. Since the early pioneering work of Downey, Fellows, and other researchers this area has established plenty of results, notions, and methods, and it provides a useful framework for dealing in practice with problems considered intractable by classical complexity theory. This supplement gives only a rough sketch of some basic notions and techniques within parameterized complexity theory; for a deeper and more comprehensive treatise, the textbooks by Downey and Fellows [DF99], Flum and Grohe [FG06], and Niedermeier [Nie06] and the survey by Buss and Islam [BI08] are highly recommendable.

    Let P and NP, respectively, denote the classical (worst-case) complexity classes deterministic polynomial time and nondeterministic polynomial time. Given any two decision problems, A and B, we say A polynomial-time many-one reduces to B (denoted A ≤^p_m B) if there is a polynomial-time computable function f such that, for each input x, x ∈ A if and only if f(x) ∈ B.
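    A textbook example of such a function f (my own illustration, not taken from the supplement): a graph G has an independent set of size k if and only if the complement of G has a clique of size k, so Independent Set polynomial-time many-one reduces to Clique.

```python
# Sketch of a polynomial-time many-one reduction (textbook example, illustration only):
# Independent Set reduces to Clique by taking the complement graph.
from itertools import combinations

def reduce_independent_set_to_clique(vertices, edges, k):
    # f(G, k) = (complement of G, k): G has an independent set of size k
    # if and only if the complement graph has a clique of size k.
    edge_set = {frozenset(e) for e in edges}
    complement = [(u, v) for u, v in combinations(vertices, 2)
                  if frozenset((u, v)) not in edge_set]
    return vertices, complement, k

# A triangle has no independent set of size 2; its complement has no edges, hence no 2-clique.
print(reduce_independent_set_to_clique([1, 2, 3], [(1, 2), (2, 3), (1, 3)], 2))
```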
  • Complexity with Rod
    Complexity with Rod

    Lance Fortnow, Georgia Institute of Technology

    Abstract. Rod Downey and I have had a fruitful relationship through direct and indirect collaboration. I explore two research directions: the limitations of distillation and instance compression, and whether or not we can create NP-incomplete problems without punching holes in NP-complete problems.

    1 Introduction

    I first met Rod Downey at the first Dagstuhl seminar on "Structure and Complexity Theory" in February 1992. Even though we heralded from different communities, me as a computer scientist working on computational complexity, and Rod as a mathematician working primarily in computability, our paths have crossed many times on many continents, from Germany to Chicago, from Singapore to Honolulu. While we have had only one joint publication [1], we have profoundly affected each other's research careers.

    In 2000 I made my first pilgrimage to New Zealand, to the amazingly beautiful town of Kaikoura on the South Island. Rod Downey had invited me to give three lectures [2] in the summer New Zealand Mathematics Research Institute graduate seminar on Kolmogorov complexity, the algorithmic study of information and randomness. Those lectures helped get Rod and Denis Hirschfeldt interested in Kolmogorov complexity, and their interest spread to much of the computability community. Which led to a US National Science Foundation Focused Research Group grant among several US researchers in the area, including myself. What comes around goes around. In 2010 Rod Downey and Denis Hirschfeldt published an 855-page book, Algorithmic Randomness and Complexity [3], on this line of research.
  • Implications of the Exponential Time Hypothesis
    Implications of the Exponential Time Hypothesis

    Anant Dhayal
    May 14, 2018

    Abstract. The Exponential Time Hypothesis (ETH) [14] states that for k ≥ 3, k-SAT doesn't have a sub-exponential time algorithm. Let s_k = inf{δ | there exists a 2^{δn} time algorithm for solving k-SAT}; then ETH is equivalent to s_3 > 0. Strong ETH (SETH) [13] is a stronger conjecture which states that lim_{k→∞} s_k = 1. In this report we survey lower bounds based on ETH. We describe a reduction from k-SAT to NP-complete CSPs [5]. In particular this implies that no NP-complete CSP has a sub-exponential time algorithm. We also survey lower bounds implied by SETH.

    1 Introduction

    CNF-SAT is the canonical NP-complete problem [9, 17]. A literal is a boolean variable or its negation, and a clause is a disjunction of literals. The input of CNF-SAT is a boolean formula in conjunctive normal form (CNF), i.e., a conjunction of clauses. The goal is to check if there is an assignment to the variables which satisfies all the clauses. k-SAT is a special case of CNF-SAT where the clause width (the number of literals in a clause) is bounded by the constant k. When k = 2 the problem has a polynomial time algorithm. However for k ≥ 3 it is proved to be NP-complete [9, 17], hence there is no polynomial time algorithm unless P = NP.

    When k ≥ 4 there is a simple clause width reduction from k-SAT to (k−1)-SAT wherein we replace a k-clause (x1 ∨ x2 ∨ ... ∨ xk) by the two clauses (x1 ∨ x2 ∨ ... ∨ x_{k−2} ∨ y) ∧ (x_{k−1} ∨ x_k ∨ ¬y), introducing a new variable y.
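    The clause-width reduction described above is easy to write down explicitly; the sketch below is my own illustration (one pass turns a k-SAT instance into a (k−1)-SAT instance, for k ≥ 4).

```python
# Sketch of the clause-width reduction described above (illustration only).
# A clause (x1 v ... v xk) with k >= 4 is replaced by (x1 v ... v x_{k-2} v y)
# and (x_{k-1} v x_k v ~y) for a fresh variable y; satisfiability is preserved.
def reduce_clause_width(clauses, num_vars):
    new_clauses = []
    for clause in clauses:
        if len(clause) <= 3:
            new_clauses.append(list(clause))
        else:
            num_vars += 1                                     # fresh variable y
            y = num_vars
            new_clauses.append(list(clause[:-2]) + [y])       # (x1 v ... v x_{k-2} v y)
            new_clauses.append([clause[-2], clause[-1], -y])  # (x_{k-1} v x_k v ~y)
    return new_clauses, num_vars

# A single 5-clause over variables 1..5 becomes a 4-clause plus a 3-clause.
print(reduce_clause_width([[1, -2, 3, 4, -5]], 5))
```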
  • Parameterized Complexity Via Combinatorial Circuits (Extended Abstract) MICHAEL FELLOWS, JÖRG FLUM, DANNY HERMELIN, MORITZ MÜLLER, and FRANCES ROSAMOND
    Parameterized Complexity via Combinatorial Circuits (Extended Abstract)

    MICHAEL FELLOWS, JÖRG FLUM, DANNY HERMELIN, MORITZ MÜLLER, AND FRANCES ROSAMOND

    Abstract. The classes of the W-hierarchy are the most important classes of intractable problems in parameterized complexity. These classes were originally defined via the weighted satisfiability problem for Boolean circuits. The analysis of the parameterized majority vertex cover problem and other parameterized problems led us to study circuits that contain connectives such as majority, not-all-equal, and unique, instead of (or in addition to) the Boolean connectives. For example, a gate labelled by the majority connective outputs TRUE if more than half of the inputs are TRUE. For any finite set C of connectives we construct the corresponding W(C)-hierarchy. We derive some general conditions which guarantee that the W-hierarchy and the W(C)-hierarchy coincide levelwise. Surprisingly, if C contains only the majority connective (i.e., no Boolean connectives), then the first levels coincide. We use this to show that the majority vertex cover problem is W[1]-complete.

    1 Introduction

    Parameterized complexity is a refinement of classical complexity theory, in which one takes into account not only the total input length n, but also other aspects of the problem codified as the parameter k. In doing so, one attempts to confine the exponential running time needed for solving many natural problems strictly to the parameter. For example, the classical Vertex Cover problem can be solved in O(2^k · n) time, when parameterized by the size k of the solution sought [8] (significant improvements to this algorithm are surveyed in [9]).
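    For intuition about the non-Boolean connectives mentioned above, the gate semantics can be spelled out as follows; this is my own toy illustration, not the authors' formalism, and the readings of "not-all-equal" and "unique" are my assumptions.

```python
# Toy illustration of the connectives mentioned above (assumed semantics, illustration only).
def majority(inputs):
    # TRUE iff more than half of the inputs are TRUE (as stated in the abstract).
    return sum(inputs) > len(inputs) / 2

def not_all_equal(inputs):
    # Assumed semantics: TRUE iff the inputs are not all identical.
    return len(set(inputs)) > 1

def unique(inputs):
    # Assumed semantics: TRUE iff exactly one input is TRUE.
    return sum(inputs) == 1

assert majority([True, True, False]) and not majority([True, False])
assert not_all_equal([True, False]) and not not_all_equal([False, False])
assert unique([False, True, False]) and not unique([True, True, False])
```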