Approximation Algorithms


Introduction to Approximation Algorithms
Algorithms and Networks 2016/2017
Johan M. M. van Rooij, Hans L. Bodlaender

Solution quality versus computation time
[Reconstructed from a diagram.] Algorithms can be classified by the quality guarantee they offer against the computation time they need:
• Optimal solution quality: exact algorithms (tree search, dynamic programming, integer linear programming, ...), typically super-polynomial time.
• Bound on solution quality: approximation algorithms (polynomial time); hybrid approaches such as column generation without complete branch-and-price.
• Good quality, no guarantee: construction heuristics (polynomial time); metaheuristics such as local search and genetic algorithms (super-polynomial time and/or no guarantee).

What to do if a problem is NP-complete?
We have already seen many options to deal with NP-complete problems.
• Earlier in this course: special cases (planar graphs), exact exponential-time algorithms.
• Later in this course: FPT and treewidth.
• In other courses: local search, ILP, constraint programming, ...
Approximation algorithms are one of these options. An approximation algorithm is a heuristic with a performance guarantee. We consider polynomial-time approximation algorithms: they produce non-optimal solutions, but with some performance guarantee compared to the optimal solution. They are also useful as a starting point for other approaches: local search, branch and bound.

What is a polynomial-time approximation algorithm?
An algorithm that...
1. 'Solves' (applies to) an optimisation problem, i.e., a minimisation problem or a maximisation problem. E.g., independent set, vertex cover, knapsack, Max SAT. Not: SAT, 3-SAT, etc.
2.
Runs in polynomial time.
3. Has a bound on the quality of the solution, called the approximation ratio. E.g.: (value of algorithm result) / (optimal solution value) ≤ c. We call such an algorithm a c-approximation.

Approximation ratio
For an instance I of an optimisation problem, let:
• OPT(I) be the value of the optimal solution to I,
• ALG(I) be the value computed by the approximation algorithm.
An algorithm for a minimisation problem has approximation ratio c if for all instances I: ALG(I) / OPT(I) ≤ c.
An algorithm for a maximisation problem has approximation ratio c if for all instances I: OPT(I) / ALG(I) ≤ c.

Class of approximation algorithms
The class of problems with approximation algorithms whose approximation ratio is a constant is called APX. It is a subclass of NP: we only consider optimisation problems in NP. Other notions of approximability also exist; we will see some of them next lecture, and one at the end of this lecture. Notions of APX-completeness also exist, analogous to NP-completeness. We will see this at the end of the lectures on approximation algorithms.

This lecture
Approximation algorithms for a series of problems:
• Travelling Salesman Problem.
• Minimum (Weight) Vertex Cover.
• Max Satisfiability.

Approximation Algorithms - Algorithms and Networks: MAXIMUM SATISFIABILITY

Max Satisfiability (decision version)
Maximum Satisfiability (Max SAT):
• Instance: a set of clauses C containing literals of variables from a set X, and an integer k.
• Question: does there exist a truth assignment to the variables in X such that at least k clauses in C are satisfied?
k-Satisfiability (k-SAT):
• Instance: a set of clauses C (a logical formula in CNF) where each clause contains at most k literals.
• Question: does there exist a satisfying assignment to the variables in C, i.e., a truth assignment such that each clause has at least one literal set to true?
Maximum k-Satisfiability (Max k-SAT) is defined similarly.

Approximating Max Satisfiability
Algorithm:
1. Take any truth assignment a to the variables X.
2.
Let ¬a be the assignment a with all variables negated.
3. Return from {a, ¬a} the assignment that satisfies the most clauses.
This is a 2-approximation: ALG ≥ |C|/2 and OPT ≤ |C|. Why? Can you also give approximation algorithms for Max k-Satisfiability?

TRAVELLING SALESMAN PROBLEM

The travelling salesman problem
• Instance: n vertices (cities) and a distance between every pair of vertices.
• Question: find a shortest (simple) cycle that visits every city.
[Figure: two small example instances with numeric edge weights.]
Restriction on distances: distances are non-negative (or positive). Triangle inequality: for all x, y, z: w(x,y) + w(y,z) ≥ w(x,z).

A simple algorithm
Consider the Travelling Salesman Problem with the triangle inequality.
Algorithm:
1. Find a minimum spanning tree (MST).
2. Output the vertices of the tree in preorder (preorder: visit a node before its children), i.e., take a tour following the MST.
This algorithm has approximation ratio 2:
• OPT ≥ MST.
• 2 MST ≥ ALG.
• ALG / OPT ≤ 2 MST / MST = 2.

Can we do better? Yes: Christofides' algorithm
Christofides' Algorithm:
1. Construct a minimum spanning tree T.
2. Set W = {v | v has odd degree in tree T}.
3. Compute a minimum-weight matching M in the graph G[W].
4. Look at the graph T+M. Note that T+M is Eulerian!
5. Compute an Euler tour C' in T+M.
6. Add shortcuts to C' to get a TSP tour.
This is a 3/2-approximation. Proof on the blackboard.

Ratio 1.5
• Total length of the edges in T: at most OPT.
• Total length of the edges in matching M: at most OPT/2.
• So T+M has length at most 3/2 OPT. Use the triangle inequality.

Proving approximation ratios
We just saw two algorithms for TSP with the triangle inequality. We proved approximation ratios in the following way:
1. Use a quantity x that relates to both the optimal solution and the result of the algorithm (a minimum spanning tree in both cases).
2. Prove that OPT ≥ c1 · x.
3. Prove that ALG ≤ c2 · x.
4. Combine both inequalities for a bound on ALG / OPT.
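As an illustration of the MST-based 2-approximation just analysed, here is a minimal Python sketch. The small instance, the function names, and the use of Prim's algorithm plus an iterative DFS for the preorder walk are illustrative choices, not taken from the slides:

```python
# Sketch of the MST-based 2-approximation for metric TSP.
# The example instance and all names are illustrative.
import itertools

def mst_prim(n, w):
    """Return MST edges of the complete graph on n vertices, weights w[u][v] (Prim)."""
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: w[e[0]][e[1]])
        edges.append((u, v))
        in_tree.add(v)
    return edges

def tsp_2_approx(n, w):
    """Visit the vertices in preorder of an MST; shortcuts use the triangle inequality."""
    tree = {u: [] for u in range(n)}
    for u, v in mst_prim(n, w):
        tree[u].append(v)
        tree[v].append(u)
    tour, seen, stack = [], set(), [0]
    while stack:                       # iterative DFS = preorder walk
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        tour.append(u)
        stack.extend(reversed(tree[u]))
    return tour

# A metric instance on 4 points (the distances satisfy the triangle inequality).
w = [[0, 1, 2, 2],
     [1, 0, 2, 3],
     [2, 2, 0, 1],
     [2, 3, 1, 0]]
tour = tsp_2_approx(4, w)
length = sum(w[tour[i]][tour[(i + 1) % 4]] for i in range(4))
opt = min(sum(w[p[i]][p[(i + 1) % 4]] for i in range(4))
          for p in itertools.permutations(range(4)))
```

Skipping already-visited vertices in the walk is exactly the "shortcut" step: by the triangle inequality each shortcut can only shorten the doubled-MST walk, so the tour has length at most 2·MST ≤ 2·OPT.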
Next, we consider some more problems.

MINIMUM (WEIGHT) VERTEX COVER

Approximation for minimum vertex cover
Algorithm:
1. Let E' = E, C = ∅.
2. While E' ≠ ∅:
   a. Let {u,v} be any edge from E'.
   b. C := C ∪ {u,v}.
   c. Remove every edge incident to u or v from E'.
3. Return C.
Is this an approximation algorithm? It runs in polynomial time and returns a vertex cover. But how good is this vertex cover?

2-approximation for vertex cover
Theorem: the algorithm on the previous slide is a 2-approximation.
Proof: let A be the set of edges whose endpoints we picked. The edges in A share no endpoints, so OPT ≥ |A|, because every edge in A must be covered by a distinct vertex. ALG = 2|A| ≤ 2 OPT, hence ALG/OPT ≤ 2.

Minimum weight vertex cover
Minimum weight vertex cover: vertex cover where each vertex has a weight; we look for the vertex cover of minimum total weight. The 2-approximation for vertex cover no longer works: that algorithm may select very heavy vertices. Next, we give a 2-approximation for minimum weight vertex cover using LP rounding.

An ILP formulation of the problem
Consider the following ILP:
  minimise    Σ_{v ∈ V} w(v) x(v)
  subject to  x(u) + x(v) ≥ 1   for each (u,v) ∈ E
              x(v) ∈ {0,1}      for each v ∈ V
Its LP relaxation is this ILP with the last constraint replaced by: 0 ≤ x(v) ≤ 1. Linear programming can be solved in polynomial time. Not by the simplex algorithm!! Ellipsoid method / interior point methods.

2-approximation algorithm for minimum weight vertex cover
Algorithm:
1. Compute the optimal solution to the LP relaxation.
2. Output all v with x(v) ≥ ½.
This algorithm returns a vertex cover: for every edge, the sum of the variables of its incident vertices is at least 1, hence at least one of the two vertex variables is at least ½. Is it an approximation algorithm? It runs in polynomial time and produces a vertex cover. But how good is this vertex cover? Proof of 2-approximation on the blackboard!

Proof of 2-approximation algorithm for minimum weight vertex cover
Let z* be the value of the optimal solution to the LP.
Because any vertex cover is a feasible solution to the LP, we have: z* ≤ OPT.
Also, we can bound ALG in terms of z*: every selected vertex v has x(v) ≥ ½, so ALG = Σ_{v : x(v) ≥ ½} w(v) ≤ 2 Σ_{v ∈ V} w(v) x(v) = 2 z*.
Hence: ALG ≤ 2 z* ≤ 2 OPT. QED

CONCLUSION

Conclusion
We have seen several approximation algorithms for different problems:
• a 2-approximation for Max Satisfiability (and Max k-SAT),
• 1.5- and 2-approximations for TSP,
• 2-approximations for vertex cover and weighted vertex cover.
c-approximations, for a constant c, are called constant-factor approximation algorithms. There are more types of approximation; we will see these after the break.

The Landscape of Approximation Algorithms
Algorithms and Networks 2016/2017
Johan M. M. van Rooij, Hans L. Bodlaender

What is a polynomial-time approximation algorithm?
An algorithm that...
1. Solves an optimisation problem.
2. Runs in polynomial time.
3. Has a bound on the quality of the solution.
Approximation ratio c: ALG/OPT ≤ c for minimisation problems; OPT/ALG ≤ c for maximisation problems. The ratio is always greater than or equal to 1.

Different forms of approximation algorithms (outline of two lectures)
Qualities of polynomial-time approximation algorithms:
1. Absolute constant difference: |OPT − ALG| ≤ c.
2. FPTAS: fully polynomial-time approximation scheme. Approximation ratio 1+ε for any ε > 0, while the algorithm runs in time polynomial in n and 1/ε.
3. PTAS: polynomial-time approximation scheme. Approximation ratio 1+ε for any ε > 0, while the algorithm runs in polynomial time for any fixed ε.
4. APX: constant-factor approximation. Approximation ratio ALG/OPT ≤ c for minimisation problems, OPT/ALG ≤ c for maximisation problems.
5. f(n)-APX: approximation by a factor of f(n), where f(n) depends only on the size of the input.
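To make the matching-based vertex cover 2-approximation from this lecture concrete, here is a minimal Python sketch; the example graph and all names are illustrative, not from the slides:

```python
# Sketch of the matching-based 2-approximation for minimum vertex cover.
# The example graph is illustrative.
def vertex_cover_2_approx(edges):
    """Take both endpoints of each edge of a greedily built maximal matching."""
    cover, remaining = set(), list(edges)
    while remaining:
        u, v = remaining[0]            # any still-uncovered edge
        cover |= {u, v}
        # drop every edge incident to u or v
        remaining = [e for e in remaining if u not in e and v not in e]
    return cover

# A path on 5 vertices: 0-1-2-3-4. Its minimum vertex cover is {1, 3}, size 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
cover = vertex_cover_2_approx(edges)
```

On this path the algorithm picks the matching {(0,1), (2,3)} and returns the four vertices {0, 1, 2, 3}: a valid cover of size 4, exactly twice the optimum, which shows the ratio 2 is tight for this algorithm.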