Kernelization: New Upper and Lower Bound Techniques


Kernelization: New Upper and Lower Bound Techniques

Hans L. Bodlaender, Department of Information and Computing Sciences, Utrecht University, P.O. Box 80.089, 3508 TB Utrecht, the Netherlands. [email protected]

Abstract. In this survey, we look at kernelization: algorithms that transform in polynomial time an input to a problem to an equivalent input, whose size is bounded by a function of a parameter. Several results of recent research on kernelization are mentioned. This survey looks at some recent results where a general technique shows the existence of kernelization algorithms for large classes of problems, in particular for planar graphs and generalizations of planar graphs, and recent lower bound techniques that give evidence that certain types of kernelization algorithms do not exist.

Keywords: fixed parameter tractability, kernel, kernelization, preprocessing, data reduction, combinatorial problems, algorithms.

1 Introduction

In many cases, combinatorial problems that arise in practical situations are NP-hard. As we teach our students in algorithms class, there are a number of approaches: we can give up optimality and design approximation algorithms or heuristics; we can look at special cases or make assumptions about the input that one or more variables are small; or we can design algorithms that sometimes take exponential time, but are as fast as possible. In the latter case, a common approach is to start the algorithm with preprocessing. So, consider some hard (say, NP-hard) combinatorial problem. We start our algorithm with a preprocessing or data reduction phase, in which we transform the input I to an equivalent input I' that is (hopefully) smaller (but never larger). Then, we solve the smaller input I' optimally, with some (exponential time) algorithm. E.g., in practical settings, we can use an ILP-solver, a branch and bound or branch and reduce algorithm, or a satisfiability-solver. After we have obtained an optimal solution for I', we transform this solution back to an optimal solution for I.

In this overview paper, we want to focus on the following question for given combinatorial problems: suppose the preprocessing phase takes polynomial time; what can we say about the size of the reduced instance, as a function of some parameter of the input? This question is nowadays phrased as: does the problem we consider have a kernel, and if so, how large is the kernel? So, kernelization gives us quantitative insights in what can be achieved by polynomial time preprocessing. In this paper, we aim to first give a general introduction to the field of kernelization, and then survey a number of very recent general techniques from the field.

As a simple example, let us look at the Vertex Cover problem. Here, we are given a graph G = (V, E) and some integer k, and ask if there is a set W ⊆ V of at most k vertices, such that for each edge {v, w} ∈ E at least one endpoint belongs to W (v ∈ W or w ∈ W). We can use the following 'kernelization' algorithm, due to Buss, see [26]: while there is at least one vertex v ∈ V with degree at least k + 1, remove v and its incident edges, and set k to k − 1. This gives equivalent instances: v must belong to an optimal solution, because if we do not take v, we must take all neighbors of v, which are more than k vertices. Also, remove all vertices of degree 0, without changing k. If at some point, k < 0, we can decide no: there clearly is no solution. Now, if we have more than k^2 edges, we decide no: each remaining vertex has degree at most k, so with k vertices, we cannot cover more than k^2 edges. If we did not return no, we end with an equivalent instance with at most k^2 edges (and at most 2k^2 vertices).
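The reduction rules just described are easy to state in code. Below is a minimal Python sketch of Buss's kernelization, assuming an adjacency-set representation of the graph; the function name and the representation are illustrative choices, not taken from the survey.

```python
# A minimal sketch of Buss's kernelization for Vertex Cover, following the
# reduction rules described above. The adjacency-set representation and all
# names are illustrative, not from the survey itself.

def buss_kernel(adj, k):
    """adj: dict mapping each vertex to the set of its neighbors (undirected).
    Returns (reduced_adj, reduced_k), or the string 'no' if there clearly is
    no vertex cover of size at most k."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy

    changed = True
    while changed:
        changed = False
        # Rule 1: a vertex of degree at least k + 1 must be in every solution
        # of size at most k, so remove it and decrease k.
        for v in list(adj):
            if len(adj[v]) > k:
                for w in adj[v]:
                    adj[w].discard(v)
                del adj[v]
                k -= 1
                if k < 0:
                    return "no"
                changed = True
                break
        # Rule 2: vertices of degree 0 never help; remove them, keep k.
        for v in list(adj):
            if not adj[v]:
                del adj[v]

    m = sum(len(nbrs) for nbrs in adj.values()) // 2  # number of remaining edges
    if m > k * k:
        return "no"   # k vertices of degree at most k cover at most k^2 edges
    return adj, k


# Example: a star with 5 leaves and k = 2; the center is forced into the cover.
star = {0: {1, 2, 3, 4, 5}, 1: {0}, 2: {0}, 3: {0}, 4: {0}, 5: {0}}
print(buss_kernel(star, 2))   # -> ({}, 1): an empty, trivially solvable kernel
```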
The simple algorithm given above is not the best (in terms of 'kernel sizes') kernelization algorithm for Vertex Cover: a clever algorithm by Nemhauser and Trotter [72] gives reduced instances with 2k vertices. The algorithm above, however, does give a nice example of a methodology that is used in many kernelization algorithms: we have a set of 'safe reduction rules', i.e., rules that give a smaller, equivalent instance, and we have a mathematical analysis of the size of yes-instances when no rule applies. When our input is larger than this size, we return no; otherwise, we have a small reduced instance.

For the analysis of kernels, we can fortunately make use of a large toolbox from the field of fixed parameter algorithms, which was pioneered by Downey and Fellows. For background in this field, we refer to [44,52,73]. We use a number of definitions from this field in a form that is useful for this exposition. A parameterized problem is a subset of Σ* × N for some fixed alphabet Σ. I.e., we look at decision problems where some specific part of the input, called the parameter, is an integer. The theory of fixed parameter complexity is used to distinguish between the running times of parameterized problems, where we pay attention to how this time depends on the parameter and on the input size. Three important types of behavior can be seen:

- NP-complete: the problem is NP-complete for some fixed values of k. E.g., Graph Coloring is NP-complete, even when the number of colors is 3.
- XP: for every fixed k, there is a polynomial time algorithm, but the exponent of the running time grows with k, i.e., the running time is Θ(n^f(k)) for some function f with lim_{k→∞} f(k) = ∞.
- FPT: there is an algorithm that solves the problem in time O(f(k)·n^c) for some function f on inputs of size n with parameter k, with c a constant. FPT is defined as the class of all parameterized problems that have such an algorithm. FPT is short for fixed parameter tractable.

A kernelization algorithm for a parameterized problem P is an algorithm A that transforms inputs (I, k) of P to inputs (I', k') of P, such that

1. the algorithm uses time polynomial in |I| + k;
2. the algorithm transforms inputs to equivalent inputs: (I, k) ∈ P ⇔ A(I, k) ∈ P;
3. k' ≤ k;
4. |I'| ≤ f(k) for some function f: the value of the new parameter and the size of the new input are bounded by a function of the value of the old parameter.

We say that P has a kernel of size f. Throughout this paper, we focus on the question for given problems P: does it have a kernel, and if so, of what size. Of course, we prefer kernels of small size, and ask ourselves for problems: do they have polynomial kernels. A variant of the definition, giving a slightly different notion of kernelization, has the condition k' ≤ f(k) instead of k' ≤ k.

The topic of kernelization has become a very active area of research. An excellent survey of the field was made by Guo and Niedermeier in 2007 [59]. In the past two years, more work has been done, and this paper aims at focussing on some recent developments, where general methods were obtained.

A little technical remark. In several cases, we can decide the problem directly.
Applying the notion of kernelization would require that we instead transform the input to an equivalent instance. This can be easily resolved as follows: we take a yes-instance and a no-instance, both of small (constant bounded) size, and instead of deciding, we transform the input to the yes- or no-instance of constant bounded size. This little trick also shows that when a problem (seen as decision problem; we take the parameter in unary) belongs to P, then it has a kernel of O(1) size. Also, when a problem has an O(1) kernel, say of size at most c, it belongs to P: we first make a kernel, and then check if this belongs to the set P_c of yes-instances with size at most c. This latter set does not depend on the input, and thus can be hardwired in our algorithm. Thus, when the problem is NP-hard, then it does not have an O(1) kernel, unless P = NP.

The theory of fixed parameter complexity gives us an excellent tool to see if a problem has a kernel (i.e., without considering its size). First, consider the following result, which is nowadays folklore. The main idea of the result is due to Cai et al. [28], while its first statement is due to Neal Koblitz [47] and appeared in [45] in 1997. In a few cases in the literature, the result or its proof are slightly incorrectly stated, see the discussion below. The following proof is given by Flum and Grohe [52] (for the variant of strongly uniformly FPT), see also [73].

Theorem 1. Let P be a parameterized problem. Then P belongs to the class FPT, if and only if P is decidable and P has a kernel.

Proof. If P is decidable and has a kernel, then we can use the following algorithm for P. If we have an input (I, k) of size n, then we can build in p(n + k) time for some polynomial p an equivalent instance (I', k'), with max(|I'|, k') ≤ f(k) for some function f, by using the kernelization algorithm. Then, decide on (I', k') by any algorithm for P; thus, for some function g, this costs in total O(p(n + k) + g(f(k))) time.
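This direction of the proof is just the pipeline "kernelize, then run any exact algorithm on the small instance". The sketch below instantiates that pipeline for Vertex Cover, reusing the buss_kernel sketch from the introduction together with a brute-force decision procedure; all names are illustrative, and the brute-force solver only stands in for "any algorithm for P".

```python
from itertools import combinations

# A concrete instance of the proof's argument for Vertex Cover: kernelize in
# polynomial time (buss_kernel from the sketch above), then decide the kernel,
# which has at most k^2 edges, by brute force. The total running time is
# poly(n + k) plus a term depending on k only, i.e. it has the FPT form
# f(k) * n^c. All names are illustrative.

def brute_force_vertex_cover(adj, k):
    """Exhaustively check whether a vertex cover of size at most k exists."""
    if k < 0:
        return False
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    vertices = list(adj)
    for size in range(min(k, len(vertices)) + 1):
        for candidate in combinations(vertices, size):
            chosen = set(candidate)
            if all(chosen & e for e in edges):
                return True
    return False


def fpt_vertex_cover(adj, k):
    """Kernelize, then run the (exponential) decision algorithm on the kernel."""
    reduced = buss_kernel(adj, k)       # polynomial in the input size
    if reduced == "no":
        return False
    small_adj, small_k = reduced        # size bounded by a function of k only
    return brute_force_vertex_cover(small_adj, small_k)
```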
Recommended publications
  • Introduction to Parameterized Complexity
    Introduction to Parameterized Complexity Ignasi Sau CNRS, LIRMM, Montpellier, France UFMG Belo Horizonte, February 2018. Outline of the talk: 1 Why parameterized complexity? 2 Basic definitions 3 Kernelization 4 Some techniques. Some history of complexity: NP-completeness. Cook-Levin Theorem (1971): the SAT problem is NP-complete. Karp (1972): list of 21 important NP-complete problems. Nowadays, literally thousands of problems are known to be NP-hard: unless P = NP, they cannot be solved in polynomial time. But what does it mean for a problem to be NP-hard? No algorithm solves all instances optimally in polynomial time.
  • Exploiting C-Closure in Kernelization Algorithms for Graph Problems
    Exploiting c-Closure in Kernelization Algorithms for Graph Problems Tomohiro Koana Technische Universität Berlin, Algorithmics and Computational Complexity, Germany [email protected] Christian Komusiewicz Philipps-Universität Marburg, Fachbereich Mathematik und Informatik, Marburg, Germany [email protected] Frank Sommer Philipps-Universität Marburg, Fachbereich Mathematik und Informatik, Marburg, Germany [email protected] Abstract A graph is c-closed if every pair of vertices with at least c common neighbors is adjacent. The c-closure of a graph G is the smallest number such that G is c-closed. Fox et al. [ICALP ’18] defined c-closure and investigated it in the context of clique enumeration. We show that c-closure can be applied in kernelization algorithms for several classic graph problems. We show that Dominating Set admits a kernel of size k^O(c), that Induced Matching admits a kernel with O(c^7 k^8) vertices, and that Irredundant Set admits a kernel with O(c^(5/2) k^3) vertices. Our kernelization exploits the fact that c-closed graphs have polynomially-bounded Ramsey numbers, as we show. 2012 ACM Subject Classification Theory of computation → Parameterized complexity and exact algorithms; Theory of computation → Graph algorithms analysis Keywords and phrases Fixed-parameter tractability, kernelization, c-closure, Dominating Set, Induced Matching, Irredundant Set, Ramsey numbers Funding Frank Sommer: Supported by the Deutsche Forschungsgemeinschaft (DFG), project MAGZ, KO 3669/4-1. 1 Introduction Parameterized complexity [9, 14] aims at understanding which properties of input data can be used in the design of efficient algorithms for problems that are hard in general. The properties of input data are encapsulated in the notion of a parameter, a numerical value that can be attributed to each input instance I.
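The c-closure defined at the start of this abstract can be computed directly from the definition: it is one more than the largest number of common neighbors over all non-adjacent pairs. The brute-force sketch below is only meant to make the definition concrete; representation and names are my own, not from the paper.

```python
from itertools import combinations

# Direct computation of the c-closure from its definition: the smallest c such
# that every pair of non-adjacent vertices has fewer than c common neighbors.

def c_closure(adj):
    """adj: dict mapping each vertex to the set of its neighbors."""
    worst = 0
    for u, v in combinations(adj, 2):
        if v not in adj[u]:                       # only non-adjacent pairs matter
            worst = max(worst, len(adj[u] & adj[v]))
    return worst + 1


# Example: a 4-cycle. Opposite vertices are non-adjacent and share 2 neighbors,
# so the 4-cycle is 3-closed but not 2-closed.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(c_closure(c4))   # -> 3
```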
  • Structural Parameterizations of Undirected Feedback Vertex Set: FPT Algorithms and Kernelization
    Structural Parameterizations of Undirected Feedback Vertex Set: FPT Algorithms and Kernelization Diptapriyo Majumdar · Venkatesh Raman Abstract A feedback vertex set in an undirected graph is a subset of vertices whose removal results in an acyclic graph. We consider the parameterized and kernelization complexity of feedback vertex set where the parameter is the size of some structure in the input. In particular, we consider parameterizations where the parameter is, instead of the solution size, the distance to a class of graphs where the problem is polynomial time solvable, and sometimes smaller than the solution size. Here, by distance to a class of graphs, we mean the minimum number of vertices whose removal results in a graph in the class. Such a set of vertices is also called the ‘deletion set’. In this paper, we show that – FVS is fixed-parameter tractable by an O(2^k · n^O(1)) time algorithm, but is unlikely to have a polynomial kernel when parameterized by the number of vertices of the graph whose degree is at least 4. This answers a question asked in an earlier paper. We also show that an algorithm with running time O((√2 − ε)^k · n^O(1)) is not possible unless SETH fails. – When parameterized by k, the number of vertices whose deletion results in a split graph, we give an O(3.148^k · n^O(1)) time algorithm. – When parameterized by k, the number of vertices whose deletion results in a cluster graph (a disjoint union of cliques), we give an O(5^k · n^O(1)) algorithm.
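To make the definition at the start of this abstract concrete, the following sketch checks whether a given vertex set is a feedback vertex set, i.e., whether deleting it leaves an acyclic graph. The representation and names are illustrative, not from the paper; the check assumes a simple undirected graph whose vertices all appear as keys of the adjacency dictionary.

```python
# Check the definition directly: S is a feedback vertex set of G if the graph
# that remains after deleting S contains no cycle (i.e., it is a forest).

def is_feedback_vertex_set(adj, s):
    """adj: dict vertex -> set of neighbors (undirected); s: candidate FVS."""
    remaining = {v: adj[v] - s for v in adj if v not in s}
    seen = set()
    for root in remaining:
        if root in seen:
            continue
        stack = [(root, None)]
        seen.add(root)
        while stack:
            v, parent = stack.pop()
            for w in remaining[v]:
                if w == parent:
                    continue
                if w in seen:
                    return False          # a non-tree edge: a cycle survives
                seen.add(w)
                stack.append((w, v))
    return True


# Example: deleting one vertex of a 4-cycle leaves a path, which is acyclic.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_feedback_vertex_set(c4, {0}), is_feedback_vertex_set(c4, set()))  # -> True False
```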
  • Experimental Evaluation of Parameterized Algorithms for Feedback Vertex Set
    Experimental Evaluation of Parameterized Algorithms for Feedback Vertex Set Krzysztof Kiljan Institute of Informatics, University of Warsaw, Poland [email protected] Marcin Pilipczuk Institute of Informatics, University of Warsaw, Poland [email protected] Abstract Feedback Vertex Set is a classic combinatorial optimization problem that asks for a minimum set of vertices in a given graph whose deletion makes the graph acyclic. From the point of view of parameterized algorithms and fixed-parameter tractability, Feedback Vertex Set is one of the landmark problems: a long line of study resulted in multiple algorithmic approaches and deep understanding of the combinatorics of the problem. Because of its central role in parameterized complexity, the first edition of the Parameterized Algorithms and Computational Experiments Challenge (PACE) in 2016 featured Feedback Vertex Set as the problem of choice in one of its tracks. The results of PACE 2016 on one hand showed large discrepancy between performance of different classic approaches to the problem, and on the other hand indicated a new approach based on half-integral relaxations of the problem as probably the most efficient approach to the problem. In this paper we provide an exhaustive experimental evaluation of fixed-parameter and branching algorithms for Feedback Vertex Set. 2012 ACM Subject Classification Theory of computation → Parameterized complexity and exact algorithms, Theory of computation → Graph algorithms analysis Keywords and phrases Empirical Evaluation of Algorithms, Feedback Vertex Set Digital Object Identifier 10.4230/LIPIcs.SEA.2018.12 Funding Supported by the “Recent trends in kernelization: theory and experimental evaluation” project, carried out within the Homing programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund.
  • Propagation Via Kernelization: the Vertex Cover Constraint
    Propagation via Kernelization: the Vertex Cover Constraint Clément Carbonnel and Emmanuel Hebrard LAAS-CNRS, Université de Toulouse, CNRS, INP, Toulouse, France Abstract. The technique of kernelization consists in extracting, from an instance of a problem, an essentially equivalent instance whose size is bounded in a parameter k. Besides being the basis for efficient parameterized algorithms, this method also provides a wealth of information to reason about in the context of constraint programming. We study the use of kernelization for designing propagators through the example of the Vertex Cover constraint. Since the classic kernelization rules often correspond to dominance rather than consistency, we introduce the notion of "loss-less" kernel. While our preliminary experimental results show the potential of the approach, they also show some of its limits. In particular, this method is more effective for vertex covers of large and sparse graphs, as they tend to have, relatively, smaller kernels. 1 Introduction The fact that there is virtually no restriction on the algorithms used to reason about each constraint was critical to the success of constraint programming. For instance, efficient algorithms from matching and flow theory [2, 14] were adapted as propagation algorithms [16, 18] and subsequently led to a number of successful applications. NP-hard constraints, however, are often simply decomposed. Doing so may significantly hinder the reasoning made possible by the knowledge on the structure of the problem. For instance, finding a support for the NValue constraint is NP-hard, yet enforcing some incomplete propagation rules for this constraint has been shown to be an effective approach [5, 10], compared to decomposing it, or enforcing bound consistency [3].
  • Linear-Time Kernelization for Feedback Vertex Set∗
    Linear-Time Kernelization for Feedback Vertex Set∗ Yoichi Iwata† National Institute of Informatics, Tokyo, Japan [email protected] Abstract In this paper, we give an algorithm that, given an undirected graph G of m edges and an integer k, computes a graph G' and an integer k' in O(k^4 m) time such that (1) the size of the graph G' is O(k^2), (2) k' ≤ k, and (3) G has a feedback vertex set of size at most k if and only if G' has a feedback vertex set of size at most k'. This is the first linear-time polynomial-size kernel for Feedback Vertex Set. The size of our kernel is 2k^2 + k vertices and 4k^2 edges, which is smaller than the previous best of 4k^2 vertices and 8k^2 edges. Thus, we improve the size and the running time simultaneously. We note that under the assumption of NP ⊈ coNP/poly, Feedback Vertex Set does not admit an O(k^(2−ε))-size kernel for any ε > 0. Our kernel exploits k-submodular relaxation, which is a recently developed technique for obtaining efficient FPT algorithms for various problems. The dual of k-submodular relaxation of Feedback Vertex Set can be seen as a half-integral variant of A-path packing, and to obtain the linear-time complexity, we give an efficient augmenting-path algorithm for this problem. We believe that this combinatorial algorithm is of independent interest. A solver based on the proposed method won first place in the 1st Parameterized Algorithms and Computational Experiments (PACE) challenge. 1998 ACM Subject Classification G.2.2 Graph Theory Keywords and phrases FPT Algorithms, Kernelization, Path Packing, Half-integrality Digital Object Identifier 10.4230/LIPIcs.ICALP.2017.68 1 Introduction In the theory of parameterized complexity, we introduce parameters to problems and analyze the complexity with respect to both the input length n = |x| and the parameter value k.
  • Bidimensionality: New Connections Between FPT Algorithms and Ptass
    Bidimensionality: New Connections between FPT Algorithms and PTASs Erik D. Demaine∗ MohammadTaghi Hajiaghayi∗ Abstract We demonstrate a new connection between fixed-parameter tractability and approximation algorithms for combinatorial optimization problems on planar graphs and their generalizations. Specifically, we extend the theory of so-called “bidimensional” problems to show that essentially all such problems have both subexponential fixed-parameter algorithms and PTASs. Bidimensional problems include e.g. feedback vertex set, vertex cover, minimum maximal matching, face cover, a series of vertex-removal problems, dominating set, edge dominating set, r-dominating set, diameter, connected dominating set, connected edge dominating set, and connected r-dominating set. We obtain PTASs for all of these problems in planar graphs and certain generalizations; of particular interest are our results for the two well-known problems of connected dominating set and general feedback …
    …able whenever the parameter is reasonably small. In several applications, e.g., finding locations to place fire stations, we prefer exact solutions at the cost of running time: we can afford high running time (e.g., several weeks of real time) if the resulting solution builds fewer fire stations (which are extremely expensive). A general result of Cai and Chen [16] says that if an NP optimization problem has an FPTAS, i.e., a PTAS with running time (1/ε)^O(1) n^O(1), then it is fixed-parameter tractable. Others [10, 17] have generalized this result to any problem with an EPTAS, i.e., a PTAS with running time f(1/ε) n^O(1) for any function f. On the other hand, no reverse transformation is possible in general, because for example vertex cover is an NP optimization problem that is fixed-parameter tractable but has no PTAS in general graphs (unless P = NP).
  • A Retrospective on (Meta) Kernelization
    A Retrospective on (Meta) Kernelization Dimitrios M. Thilikos To cite this version: Dimitrios M. Thilikos. A Retrospective on (Meta) Kernelization. Treewidth, Kernels, and Algorithms, 12160, pp. 222-246, 2020, Lecture Notes in Computer Science, 978-3-030-42070-3. 10.1007/978-3-030-42071-0_16. HAL Id: hal-03002701, https://hal.archives-ouvertes.fr/hal-03002701, submitted on 20 Nov 2020. November 18, 2020. «Een mens zijn zin is een mens zijn leven.» ("A person's wish is a person's life.") Abstract. In parameterized complexity, a kernelization algorithm can be seen as a reduction of a parameterized problem to itself, so that the produced equivalent instance has size depending exclusively on the parameter. If this size is polynomial, then we say that the parameterized problem in question admits a polynomial kernelization algorithm. Kernelization can be seen as a formalization of the notion of preprocessing and has occupied a big part of the research on Multi-variate Algorithmics. The first algorithmic meta-theorem on kernelization appeared in [H.
  • Fixed Parameter Algorithms and Kernelization
    Fixed Parameter Algorithms and Kernelization Saket Saurabh The Institute of Mathematical Sciences, India Pre-WorKer 2011 Talk, Vienna, 1st September. Classical complexity, a brief review: We usually aim for polynomial-time algorithms: the running time is O(n^c), where n is the input size. Classical polynomial-time algorithms: shortest path, matching, minimum spanning tree, 2SAT, convex hull, planar drawing, linear programming, etc. It is unlikely that polynomial-time algorithms exist for NP-hard problems. Unfortunately, many problems of interest are NP-hard: Hamiltonian cycle, 3-coloring, 3SAT, etc. We expect that these problems can be solved only in exponential time (i.e., c^n). Can we say anything nontrivial about NP-hard problems? Parameterized complexity. Main idea: Instead of expressing the running time as a function T(n) of n, we express it as a function T(n, k) of the input size n and some parameter k of the input. In other words: we do not want to be efficient on all inputs of size n, only for those where k is small. What can be the parameter k? The size k of the solution we are looking for.
  • A 2k-Kernelization Algorithm for Vertex Cover Based on Crown Decomposition
    A 2k-Kernelization Algorithm for Vertex Cover Based on Crown Decomposition Wenjun Li (Hunan Provincial Key Laboratory of Intelligent Processing of Big Data on Transportation, Changsha University of Science and Technology, Changsha, China) and Binhai Zhu (Gianforte School of Computing, Montana State University, Bozeman, MT 59717-3880, USA; corresponding author). Email addresses: [email protected] (Wenjun Li), [email protected] (Binhai Zhu). Abstract We revisit crown decomposition for the Vertex Cover problem by giving a simple 2k-kernelization algorithm. Previously, a 2k kernel was known but it was computed using both crown decomposition and linear programming; moreover, with crown decomposition alone only a 3k kernel was known. Our refined crown decomposition carries some extra property and could be used for some other related problems. Keywords: Vertex cover, Crown decomposition, Kernelization, FPT algorithms, NP-completeness 1. Introduction Vertex Cover is a classic NP-complete problem which has been used to model conflicts in many applications [8, 9]. Due to its importance, a lot of research has been done on it and in this paper we focus on the parameterized version of the problem. The problem is defined as follows. Vertex Cover. Given: A simple undirected graph G = (V, E), and a positive integer k; Parameter: k; Question: Decide if there is a subset V′ ⊆ V with |V′| ≤ k such that for any edge ⟨u, v⟩ ∈ E at least one of u, v is in V′. For a parameterized problem (Π, k), we say that (Π, k) is Fixed-Parameter Tractable (FPT) if it can be solved in O(f(k)·n^c) = O*(f(k)) time, where n is the input length, c is a fixed constant and f(·) is some computable function.
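To make the O*(f(k)) notion above concrete, here is the textbook bounded-search-tree algorithm for Vertex Cover, which runs in O*(2^k) time by branching on the two endpoints of an uncovered edge. This is not the crown-decomposition kernelization of the paper; the names and representation are illustrative.

```python
# Textbook bounded-search-tree algorithm for Vertex Cover: every cover must
# contain an endpoint of each edge, so branch on the two endpoints of some
# remaining edge. The search tree has at most 2^k leaves, giving an
# O(2^k * n^O(1)) = O*(2^k) running time.

def vc_branch(edges, k):
    """edges: iterable of 2-element frozensets; decide a cover of size <= k."""
    edges = list(edges)
    if not edges:
        return True           # nothing left to cover
    if k == 0:
        return False          # edges remain but no budget
    u, v = tuple(edges[0])
    # Branch: put u in the cover, or put v in the cover.
    without_u = [e for e in edges if u not in e]
    without_v = [e for e in edges if v not in e]
    return vc_branch(without_u, k - 1) or vc_branch(without_v, k - 1)


# Example: a triangle needs 2 vertices to cover all of its edges.
triangle = [frozenset(p) for p in [(0, 1), (1, 2), (0, 2)]]
print(vc_branch(triangle, 1), vc_branch(triangle, 2))   # -> False True
```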
  • Introduction + Bidimensionality Theory
    Introduction + Bidimensionality Theory Ignasi Sau CNRS, LIRMM, Montpellier (France) Many thanks to Dimitrios M. Thilikos!! 14èmes JCALM. October 2013. UPC, Barcelona, Catalunya. Outline of the talk: 1 Introduction, part II: Treewidth; Dynamic programming on tree decompositions; Structure of H-minor-free graphs; Some algorithmic issues; A few words on other containment relations. 2 Bidimensionality: Some ingredients; An illustrative example; Meta-algorithms; Further extensions. Parameterized complexity in one slide. Idea: given an NP-hard problem, fix one parameter of the input to see if the problem gets more “tractable”. Example: the size of a Vertex Cover. Given an (NP-hard) problem with input of size n and a parameter k, a fixed-parameter tractable (FPT) algorithm runs in f(k) · n^O(1), for some function f. Examples: k-Vertex Cover, k-Longest Path. Single-exponential FPT algorithm: 2^O(k) · n^O(1). Subexponential FPT algorithm: 2^o(k) · n^O(1).
  • An Exponential Time Parameterized Algorithm for Planar Disjoint Paths∗
    An Exponential Time Parameterized Algorithm for Planar Disjoint Paths∗ Daniel Lokshtanov† Pranabendu Misra‡ Michał Pilipczuk§ Saket Saurabh¶ Meirav Zehavi‖ April 1, 2021 (arXiv:2103.17041v1 [cs.DS], 31 Mar 2021) Abstract In the Disjoint Paths problem, the input is an undirected graph G on n vertices and a set of k vertex pairs {s_i, t_i}, i = 1, …, k, and the task is to find k pairwise vertex-disjoint paths such that the i-th path connects s_i to t_i. In this paper, we give a parameterized algorithm with running time 2^O(k^2) n^O(1) for Planar Disjoint Paths, the variant of the problem where the input graph is required to be planar. Our algorithm is based on the unique linkage/treewidth reduction theorem for planar graphs by Adler et al. [JCTB 2017], the algebraic cohomology based technique of Schrijver [SICOMP 1994] and one of the key combinatorial insights developed by Cygan et al. [FOCS 2013] in their algorithm for Disjoint Paths on directed planar graphs. To the best of our knowledge our algorithm is the first parameterized algorithm to exploit that the treewidth of the input graph is small in a way completely different from the use of dynamic programming. ∗A preliminary version of this paper appeared in the proceedings of STOC 2020. †University of California, Santa Barbara, USA. [email protected] ‡Max Planck Institute for Informatics, Saarbrücken, Germany. [email protected] §Institute of Informatics, University of Warsaw, Poland. [email protected] ¶The Institute of Mathematical Sciences, HBNI, Chennai, India. [email protected] ‖Ben-Gurion University, Beersheba, Israel.
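As a small illustration of the problem definition above (not of the algorithm in the paper), the following sketch checks whether a proposed family of paths is a valid Disjoint Paths solution: the i-th path must be a simple s_i-t_i path in G, and the paths must be pairwise vertex-disjoint. The representation and names are illustrative.

```python
# Checker for the Disjoint Paths problem definition: each path connects its
# terminal pair along edges of G, is simple, and shares no vertex with any
# other path in the solution.

def is_disjoint_paths_solution(adj, pairs, paths):
    """adj: dict vertex -> set of neighbors; pairs: list of (s_i, t_i);
    paths: list of vertex sequences, one per terminal pair."""
    if len(paths) != len(pairs):
        return False
    used = set()
    for (s, t), path in zip(pairs, paths):
        if not path or path[0] != s or path[-1] != t:
            return False                      # wrong endpoints
        if any(v in used for v in path):
            return False                      # shares a vertex with another path
        if len(set(path)) != len(path):
            return False                      # path revisits a vertex
        if any(path[i + 1] not in adj[path[i]] for i in range(len(path) - 1)):
            return False                      # uses a non-edge
        used.update(path)
    return True


# Example: a 2x3 grid with terminal pairs (1, 3) and (4, 6).
g = {1: {2, 4}, 2: {1, 3, 5}, 3: {2, 6}, 4: {1, 5}, 5: {2, 4, 6}, 6: {3, 5}}
print(is_disjoint_paths_solution(g, [(1, 3), (4, 6)], [[1, 2, 3], [4, 5, 6]]))  # -> True
```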