Approximation Algorithms Using ILP


Subhas C. Nandy ([email protected])
Advanced Computing and Microelectronics Unit
Indian Statistical Institute, Kolkata 700108, India.

Organization
1 Introduction
2 Approximation Algorithm
3 Integer Linear Programming
4 Vertex Cover Problem
5 Set Cover Problem
6 Rectangle Stabbing Problem
7 Uncapacitated Facility Location

Introduction
A problem is said to be tractable if there exists an algorithm for it whose worst-case time complexity is a polynomial function of n (the input size). Otherwise, the problem is said to be intractable.

Complexity Classes
P: the class of decision problems that can be solved in polynomial time.
NP: the class of decision problems whose solutions, for a given input, can be verified in polynomial time.
NP-Completeness
An important open question: is P = NP or P ≠ NP?
NP-Complete class: a class of decision problems in NP such that if any one of them can be solved in polynomial time, then all of them can be solved in polynomial time.

NP-Complete Problems
Circuit satisfiability, 3-SAT (satisfiability where the expression is in CNF and each clause has length at most 3), the clique decision problem, the vertex cover decision problem, the travelling salesman problem, the knapsack problem, bin packing, and the art gallery problem, to name only a few.

Handling NP-Hard Problems
Brute-force algorithm: design a clever enumeration strategy that guarantees the optimality of the solution, but gives no guarantee on the running time.
Heuristic: develop an intuitive algorithm that is guaranteed to run in polynomial time, but gives no guarantee on the quality of the solution.
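The brute-force approach can be illustrated on the vertex cover problem (treated in detail later in these notes). This is a minimal sketch, not from the original slides; the function name and the 4-cycle example are mine, and the search is deliberately exponential in |V|:

```python
from itertools import combinations

def min_vertex_cover_bruteforce(vertices, edges):
    """Exhaustively try subsets of V in increasing size; the first subset
    that covers every edge is a minimum vertex cover.
    Guarantees optimality, but runs in worst-case exponential time."""
    for k in range(len(vertices) + 1):
        for cand in combinations(vertices, k):
            s = set(cand)
            if all(u in s or v in s for (u, v) in edges):
                return s
    return set(vertices)

# A 4-cycle a-b-c-d-a: two opposite vertices suffice to cover all edges.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(min_vertex_cover_bruteforce(["a", "b", "c", "d"], edges))
```

Note that the optimality guarantee comes purely from exhaustiveness; nothing about the enumeration order is clever here, which is exactly why the running time is unbounded by any polynomial.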
Approximation Algorithm
An approximation algorithm is guaranteed to run in polynomial time, and guaranteed to produce a "high quality" solution.

But what is a high-quality solution? Is the solution OPT + α, or α × OPT, or (1 + ε) × OPT, where OPT is the optimum solution value, α is a constant, and ε is a very small constant?

Difficulty: one needs to prove that the solution is close to the optimum without knowing the optimum solution.

Note: designing an approximation algorithm is also relevant for a problem that admits a polynomial-time algorithm, if that algorithm's time complexity is very high.
f(n)-Approximation Algorithm
Let P be a problem of size n. An algorithm A for P is said to be an f(n)-approximation algorithm if and only if, for every instance I of P, |A(I) − OPT| / OPT ≤ f(n). It is assumed that OPT ≥ 0. Note: f(n) may also be a constant.

Performance ratio:
f(n) = A(I) / OPT when P is a minimization problem,
f(n) = OPT / A(I) when P is a maximization problem.

We will show the use of ILP in designing approximation algorithms.
Integer Linear Programming
An example problem:
Maximize 8x1 + 11x2 + 6x3 + 4x4
Subject to:
5x1 + 7x2 + 4x3 + 3x4 ≤ 14
7x1 + 2x3 + x4 ≤ 10
xi ∈ Z+ for all i = 1, 2, 3, 4

Vertex Cover Problem
Input: a graph G = (V, E), where each vertex v is associated with a positive weight wv.
Output: a subset C ⊆ V such that every edge e ∈ E is covered by some vertex in C, and w(C) is minimized.

ILP Formulation
Minimize: Σv∈V wv xv
Subject to: xu + xv ≥ 1 for each edge e = (u, v) ∈ E
xv ∈ {0, 1} for all v ∈ V

LP Relaxation
Minimize: Σv∈V wv xv
Subject to: xu + xv ≥ 1 for each edge e = (u, v) ∈ E
0 ≤ xv ≤ 1 for all v ∈ V

This LP can be solved in polynomial time. Let its optimum value be OPTf.

Finally, round the fractional solution x to an integer solution x*: C = {v | xv ≥ 1/2}.

Observation 1
C is a feasible solution of the vertex cover problem. This follows from the fact that every edge e = (u, v) satisfies xu + xv ≥ 1, so at least one of xu, xv is ≥ 1/2.

Observation 2
w(C) ≤ 2 OPTf ≤ 2 OPT:
w(C) = Σv∈C w(v)
     = Σv∈V, xv≥1/2 w(v)
     ≤ 2 × Σv∈V, xv≥1/2 w(v) xv      (since xv ≥ 1/2 implies 1 ≤ 2xv)
     ≤ 2 × Σv∈V w(v) xv
     = 2 OPTf ≤ 2 OPT.
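The relax-and-round scheme above can be sketched in code. This is a minimal illustration, not part of the original slides: it assumes SciPy's linprog (HiGHS backend) is available, and the function name vertex_cover_lp_round is mine.

```python
import numpy as np
from scipy.optimize import linprog

def vertex_cover_lp_round(weights, edges):
    """2-approximation for weighted vertex cover via LP relaxation + rounding.

    weights: dict vertex -> positive weight
    edges:   list of (u, v) pairs

    Solves  min sum_v w_v x_v  s.t.  x_u + x_v >= 1,  0 <= x_v <= 1,
    then keeps every vertex with x_v >= 1/2.
    """
    vs = sorted(weights)
    idx = {v: i for i, v in enumerate(vs)}
    c = np.array([weights[v] for v in vs], dtype=float)
    # linprog expects A_ub @ x <= b_ub, so encode x_u + x_v >= 1
    # as -x_u - x_v <= -1.
    A = np.zeros((len(edges), len(vs)))
    for r, (u, v) in enumerate(edges):
        A[r, idx[u]] = A[r, idx[v]] = -1.0
    res = linprog(c, A_ub=A, b_ub=-np.ones(len(edges)),
                  bounds=[(0.0, 1.0)] * len(vs), method="highs")
    # Small tolerance so x_v = 0.5 survives floating-point noise.
    cover = {v for v in vs if res.x[idx[v]] >= 0.5 - 1e-9}
    return cover, res.fun  # rounded cover and fractional optimum OPT_f

# Triangle with unit weights: the unique LP optimum sets x_v = 1/2 for all v,
# so OPT_f = 3/2; rounding keeps all 3 vertices, while the true optimum is 2,
# i.e. well within the factor-2 guarantee.
cover, opt_f = vertex_cover_lp_round({"a": 1, "b": 1, "c": 1},
                                     [("a", "b"), ("b", "c"), ("a", "c")])
print(cover, opt_f)
```

Note how the proof of Observation 2 is visible in the code: each kept vertex has xv ≥ 1/2, so the rounded weight can at most double the LP objective.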
Integrality Gap
Given a set of instances I of the problem P, the integrality gap is defined as
sup over I ∈ I of OPT(I) / OPTf(I) for a minimization problem, and
inf over I ∈ I of OPT(I) / OPTf(I) for a maximization problem.
Note: LP-rounding cannot give an approximation factor better than the integrality gap.

The vertex cover problem has integrality gap 2 − 2/n:
Let G be the complete graph on n vertices.
OPTf = n/2 (setting xv = 1/2 for every vertex is feasible and optimal).
However, the optimum solution of the vertex cover problem is n − 1.
Thus, the integrality gap is (n − 1) / (n/2) = 2 − 2/n.
Our approximation bound is tight. Consider
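The complete-graph example can be checked numerically for small n. A sketch under the assumption that exhaustive search is affordable only for tiny graphs; all names here are mine:

```python
from itertools import combinations

def complete_graph_edges(n):
    """All edges of the complete graph K_n on vertices 0..n-1."""
    return list(combinations(range(n), 2))

def min_cover_size(n, edges):
    """Size of a minimum vertex cover, by exhaustive search (tiny n only)."""
    for k in range(n + 1):
        for cand in combinations(range(n), k):
            s = set(cand)
            if all(u in s or v in s for u, v in edges):
                return k
    return n

for n in (3, 4, 5, 6):
    opt = min_cover_size(n, complete_graph_edges(n))  # integer optimum: n - 1
    opt_f = n / 2          # x_v = 1/2 for all v is LP-feasible and optimal
    print(n, opt, opt_f, opt / opt_f, 2 - 2 / n)
```

As n grows, the ratio OPT/OPTf approaches 2 from below, confirming that the factor-2 analysis of the rounding algorithm cannot be improved by a better bound against OPTf.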