Computing All-Pairs Shortest Paths by Leveraging Low Treewidth


Computing All-Pairs Shortest Paths by Leveraging Low Treewidth

Proceedings of the Twenty-First International Conference on Automated Planning and Scheduling

Léon Planken and Mathijs de Weerdt (Delft University of Technology, {l.r.planken,m.m.deweerdt}@tudelft.nl) and Roman van der Krogt (Cork Constraint Computation Centre, [email protected])

Copyright © 2011, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Considering directed graphs on n vertices and m edges with real (possibly negative) weights, we present two new, efficient algorithms for computing all-pairs shortest paths (APSP). These algorithms make use of directed path consistency (DPC) along a vertex ordering d. The algorithms run in $O(n^2 w_d)$ time, where $w_d$ is the graph width induced by this vertex ordering. For graphs of constant treewidth, this yields $O(n^2)$ time, which is optimal. On chordal graphs, the algorithms run in $O(nm)$ time. We show empirically that also in many general cases, both constructed and from realistic benchmarks, the algorithms often outperform Johnson's algorithm, which represents the current state of the art with a run time of $O(nm + n^2 \log n)$. These algorithms can be used for temporal and spatial reasoning, e.g. for the Simple Temporal Problem (STP), which underlines their relevance to the planning and scheduling community.

1 Introduction

Finding shortest paths is an important and fundamental problem in communication and transportation networks, circuit design, and graph analysis—e.g. for computing the betweenness (Girvan and Newman 2002)—and is a sub-problem of many combinatorial problems, such as those that can be represented as a network flow problem. In particular, in the context of planning and scheduling, finding shortest paths is important to solve the Simple Temporal Problem (STP) (Dechter, Meiri, and Pearl 1991), which in turn appears as a sub-problem of the NP-hard Temporal Constraint Satisfaction Problem (TCSP) and Disjunctive Temporal Problem (DTP) classes, powerful enough to model job-shop scheduling problems. The shortest path computations in these applications can account for a significant part of the total run time. These topics have received substantial interest in the planning and scheduling community (Satish Kumar 2005; Bresina et al. 2005; Shah and Williams 2008).

Instances of the STP, called Simple Temporal Networks (STNs), have a natural representation as directed graphs with real edge weights. The canonical way of solving an STP instance (Dechter et al.) is by computing all-pairs shortest paths (APSP) on its STN, e.g. with the Floyd–Warshall algorithm. The state of the art for computing APSP is Johnson's algorithm, which runs in $O(n^2 \log n + nm)$ time using a Fibonacci heap (Fredman and Tarjan 1987).

Recently there has been specific interest in STNs stemming from hierarchical task networks (HTNs) (Castillo, Fdez-Olivares, and González 2006; Bui and Yorke-Smith 2010). These graphs have the "sibling-restricted" property: each task, represented by a pair of vertices, is connected only to its sibling tasks, its parent or its children. In these graphs the number of children of a task is restricted by a constant branching factor, and therefore the resulting STNs also have a tree-like structure.

In this paper we present two new algorithms for APSP in Section 3. One algorithm is based on a point-to-point shortest path algorithm by Chleq (1995), and the other is similar to Planken et al.'s (2008) algorithm for enforcing partial path consistency (P3C). These algorithms advance the state of the art in computing APSP. In graphs of constant treewidth, such as the sibling-restricted STNs based on HTNs, the run time is bounded by $O(n^2)$, which is optimal since the output is $\Theta(n^2)$. In addition to these STNs, examples of such graphs are outerplanar graphs, graphs of bounded bandwidth, graphs of bounded cutwidth, and series-parallel graphs.

When the algorithms are applied to chordal graphs, they have a run time of $O(nm)$, which is a strict improvement over the state of the art (Chaudhuri and Zaroliagis 2000, with a run time of $O(nm w_d^2)$; $w_d$ is defined below). Chordal graphs are an important subset of general sparse graphs: interval graphs, trees, k-trees, directed path graphs, and split graphs are all special cases of chordal graphs (Golumbic 2004). Moreover, any graph can be made chordal using a so-called triangulation algorithm. Such an algorithm operates by eliminating vertices one by one, connecting the neighbours of each eliminated vertex and thereby inducing cliques in the graph. The induced width $w_d$ of the elimination ordering d is defined to be equal to the cardinality of the largest set of neighbours thus connected. The upper bound on the run time of both proposed algorithms on these general graphs, then, is $O(n^2 w_d)$; this is better than the bound on Johnson's algorithm if $w_d \in O(\log n)$.

Finding an elimination ordering of minimum induced width is an NP-hard problem. This minimum induced width is the tree-likeness property of the graph mentioned above, i.e. the treewidth, denoted $w^*$. In contrast, the induced width is not a direct measure of the input graph, so the bound of $O(n^2 w_d)$ is not quite proper. Therefore, we experimentally establish the computational efficiency of the proposed algorithms in Section 4 on a wide range of graphs, varying from random scale-free networks and parts of the road network of New York City to STNs generated from HTNs and job-shop scheduling problems. First, however, we introduce the concepts of induced width, treewidth, chordal graphs and triangulation in more detail.
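For concreteness, the Johnson's-algorithm baseline referred to above can be sketched as follows. This is a minimal illustrative sketch, not part of the paper: it assumes the graph is given as a dict mapping each vertex to a dict of successor weights, uses a hypothetical sentinel name `'__s__'` for the added virtual source, and substitutes Python's binary heap for the Fibonacci heap (so the Dijkstra phase costs O(nm log n) rather than the O(nm + n² log n) quoted above).

```python
import heapq

def bellman_ford(graph, source):
    """Distances from `source`; returns None if a negative cycle exists."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0.0
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u].items():
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    for u in graph:
        for v, w in graph[u].items():
            if dist[u] + w < dist[v]:
                return None  # negative cycle reachable from source
    return dist

def dijkstra(graph, source):
    """Non-negative weights only; returns distances to reachable vertices."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def johnson(graph):
    """APSP for directed graphs with real (possibly negative) weights."""
    # Virtual source with 0-weight arcs to every vertex ('__s__' is our
    # sentinel name, assumed not to clash with real vertex names).
    aug = {u: dict(adj) for u, adj in graph.items()}
    aug['__s__'] = {v: 0.0 for v in graph}
    h = bellman_ford(aug, '__s__')
    if h is None:
        return None  # negative cycle: shortest paths are undefined
    # Reweight: w'(u,v) = w(u,v) + h(u) - h(v) is always non-negative.
    rw = {u: {v: w + h[u] - h[v] for v, w in graph[u].items()} for u in graph}
    # One Dijkstra per source, undoing the reweighting afterwards.
    return {u: {v: d - h[u] + h[v] for v, d in dijkstra(rw, u).items()}
            for u in graph}
```

The reweighting $w'(u, v) = w(u, v) + h(u) - h(v)$ is what allows Dijkstra's algorithm to run on a graph that originally had negative weights.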
2 Preliminaries

Both algorithms for all-pairs shortest paths presented in this paper rely on the fact that the graph has already been made directionally path-consistent along a certain vertex ordering. In this section, we therefore briefly discuss the algorithm for directed path consistency (DPC) and how to find a vertex ordering required for DPC.

Directed Path Consistency

Dechter, Meiri, and Pearl (1991) proposed DPC, included here as Algorithm 1, for checking whether an STP instance is consistent (i.e. the graph contains no negative cycles). This algorithm takes as input a weighted directed graph $G = \langle V, E \rangle$ and a vertex ordering d, which is a bijection between V and the natural numbers $\{1, \ldots, n\}$. In this paper, we simply represent the i-th vertex in such an ordering as the natural number i. The weight on the arc from i to j is represented as $w_{i \to j}$; further, our shorthand for the existence of an arc between these vertices, in either direction, is $\{i, j\} \in E$.

Algorithm 1: DPC (Dechter, Meiri, and Pearl 1991)
    Input: A weighted directed graph $G = \langle V, E \rangle$ and a vertex ordering $d : V \to \{1, \ldots, n\}$
    Output: CONSISTENT if DPC could be enforced on G; INCONSISTENT if a negative cycle was found
    1  for $k \leftarrow n$ downto $1$ do
    2      forall $i < j < k$ such that $\{i, k\}, \{j, k\} \in E$ do
    3          $w_{i \to j} \leftarrow \min\{w_{i \to j},\ w_{i \to k} + w_{k \to j}\}$
    4          $w_{j \to i} \leftarrow \min\{w_{j \to i},\ w_{j \to k} + w_{k \to i}\}$
    5          $E \leftarrow E \cup \{\{i, j\}\}$
    6          if $w_{i \to j} + w_{j \to i} < 0$ then
    7              return INCONSISTENT
    8  return CONSISTENT

The induced width is exactly the highest number of neighbours $j < k$ encountered during the DPC algorithm. It is not a property of the graph that forms DPC's input per se; rather, it is dependent on both the graph and the vertex ordering used. With a careful implementation, DPC's time bound is $O(n w_d^2)$ if this ordering is known beforehand.

The edges added by DPC are called fill edges and make the graph chordal (sometimes also called triangulated). Indeed, DPC differs from a triangulation procedure only by its manipulation of the arc weights. In a chordal graph, every cycle of length four or more has an edge joining two vertices that are not adjacent in the cycle. By definition of the induced width, the number of edges in such a chordal graph, denoted by $m_c \ge m$, is $O(n w_d)$.
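A direct Python transcription of Algorithm 1 might look as follows — a minimal sketch, assuming the graph is stored as an n × n weight matrix whose row/column order already follows the ordering d (so vertex 1 of the paper is index 0 here), with `INF` marking absent arcs.

```python
INF = float('inf')

def dpc(w):
    """Algorithm 1 (DPC) on an n x n weight matrix w, updated in place.

    w[i][j] is the weight of arc i -> j, INF if absent; indices 0..n-1
    are assumed to already follow the vertex ordering d, with index
    n-1 playing the role of vertex n. Returns True for CONSISTENT,
    False for INCONSISTENT. Fill edges show up as entries of w that
    become finite.
    """
    n = len(w)
    for k in range(n - 1, -1, -1):              # for k := n downto 1
        # lower-numbered neighbours of k (arc in either direction)
        nbrs = [i for i in range(k) if w[i][k] < INF or w[k][i] < INF]
        for a, i in enumerate(nbrs):
            for j in nbrs[a + 1:]:              # all pairs i < j < k
                w[i][j] = min(w[i][j], w[i][k] + w[k][j])
                w[j][i] = min(w[j][i], w[j][k] + w[k][i])
                if w[i][j] + w[j][i] < 0:
                    return False                # negative cycle found
    return True
```

Note how the inner loop ranges only over pairs of lower-numbered neighbours of k: the size of that neighbour set is exactly what the induced width $w_d$ measures, which is where the $O(n w_d^2)$ bound mentioned above comes from.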
Finding a Vertex Ordering

In principle, DPC can use any vertex ordering to make the graph both chordal and directionally path-consistent. However, since the vertex ordering defines the induced width, it directly influences the run time and the number of edges $m_c$ in the resulting graph. As mentioned above, finding an ordering of minimum induced width and determining the treewidth is an NP-hard problem in general. Still, the class of constant-treewidth graphs can be optimally triangulated in $O(n)$ time (Bodlaender 1993); and if G is already chordal, we can find a perfect elimination ordering (resulting in no fill edges) in $O(m)$ time, using maximum cardinality search (MCS) (Tarjan and Yannakakis 1984).

For general graphs, various heuristics exist that often produce good results. We mention here the minimum degree heuristic (Rose 1972), which in each iteration chooses a vertex of lowest degree; a sketch of this heuristic appears below. Since the ordering produced by this heuristic is not fully known before DPC starts but depends on the fill edges added, an adjacency-list-based implementation will require another $O(\log n)$ factor in DPC's time bound. However, for our purposes in this article, we can afford the comfort of maintaining an adjacency matrix, which yields bounds of $O(n^2 + n w_d^2)$ time and $O(n^2)$ space.

3 All-Pairs Shortest Paths

Even though, to the best of our knowledge, a DPC-based APSP algorithm has not yet been proposed, algorithms for computing single-source shortest paths (SSSP) based on DPC can be obtained from known results in a relatively straightforward manner.
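Returning to the vertex-ordering discussion above: here is a minimal sketch of the minimum degree heuristic (Rose 1972). It assumes an undirected adjacency-set representation, simulates the elimination game, and reports both the elimination ordering and the induced width it achieves; it is a sketch of the heuristic only, not of the paper's DPC integration.

```python
def min_degree_ordering(adj):
    """Minimum degree heuristic (elimination game) on an undirected graph.

    adj maps each vertex to the set of its neighbours (assumed
    symmetric, no self-loops); it is not modified. Returns
    (order, width): `order` lists vertices in elimination order (its
    first entry plays the role of vertex n in Algorithm 1) and `width`
    is the induced width w_d achieved.
    """
    work = {v: set(nb) for v, nb in adj.items()}   # private, mutable copy
    order, width = [], 0
    while work:
        v = min(work, key=lambda u: len(work[u]))  # a lowest-degree vertex
        nbrs = work.pop(v)
        width = max(width, len(nbrs))   # largest neighbour set connected
        for u in nbrs:                  # eliminate v: clique its neighbours
            work[u].discard(v)
            work[u] |= nbrs - {u}
        order.append(v)
    return order, width
```

Feeding DPC the vertices in this elimination order (first eliminated playing the role of vertex n) keeps $w_d$, and hence the $O(n^2 w_d)$ APSP bound, small whenever the graph is tree-like.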
Recommended publications
  • Chapter 9. Coping with NP-Completeness
    Chapter 9 Coping with NP-completeness You are the junior member of a seasoned project team. Your current task is to write code for solving a simple-looking problem involving graphs and numbers. What are you supposed to do? If you are very lucky, your problem will be among the half-dozen problems concerning graphs with weights (shortest path, minimum spanning tree, maximum flow, etc.), that we have solved in this book. Even if this is the case, recognizing such a problem in its natural habitat—grungy and obscured by reality and context—requires practice and skill. It is more likely that you will need to reduce your problem to one of these lucky ones—or to solve it using dynamic programming or linear programming. But chances are that nothing like this will happen. The world of search problems is a bleak landscape. There are a few spots of light—brilliant algorithmic ideas—each illuminating a small area around it (the problems that reduce to it; two of these areas, linear and dynamic programming, are in fact decently large). But the remaining vast expanse is pitch dark: NP-complete. What are you to do? You can start by proving that your problem is actually NP-complete. Often a proof by generalization (recall the discussion on page 270 and Exercise 8.10) is all that you need; and sometimes a simple reduction from 3SAT or ZOE is not too difficult to find. This sounds like a theoretical exercise, but, if carried out successfully, it does bring some tangible rewards: now your status in the team has been elevated, you are no longer the kid who can't do, and you have become the noble knight with the impossible quest.
  • CS 561, Lecture 24: All-Pairs Shortest Paths
    CS 561, Lecture 24. Jared Saia, University of New Mexico. Outline: All-Pairs Shortest Paths; TSP Approximation Algorithm.

    All-Pairs Shortest Paths. For the single-source shortest paths problem, we wanted to find the shortest path from a source vertex s to all the other vertices in the graph. We will now generalize this problem further to that of finding the shortest path from every possible source to every possible destination. In particular, for every pair of vertices u and v, we need to compute the following information: dist(u, v) is the length of the shortest path (if any) from u to v; pred(u, v) is the second-to-last vertex (if any) on the shortest path (if any) from u to v.

    Example. For any vertex v, we have dist(v, v) = 0 and pred(v, v) = NULL. If the shortest path from u to v is only one edge long, then dist(u, v) = w(u → v) and pred(u, v) = u. If there's no shortest path from u to v, then dist(u, v) = ∞ and pred(u, v) = NULL.

    APSP. The output of our shortest path algorithm will be a pair of |V| × |V| arrays encoding all |V|² distances and predecessors. Many maps contain such a distance matrix: to find the distance from (say) Albuquerque to (say) Ruidoso, you look in the row labeled "Albuquerque" and the column labeled "Ruidoso".

    Lots of Single Sources. The most obvious solution to APSP is to just run an SSSP algorithm |V| times, once for every possible source vertex. Specifically, to fill in the subarray dist(s, ∗), we invoke either Dijkstra's or Bellman-Ford starting at the source vertex s. In this class, we'll focus
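    The dist/pred tables described in this excerpt are exactly what the classic Floyd–Warshall recurrence fills in; a minimal sketch, assuming the weights come as a dict of dicts:

```python
def floyd_warshall(vertices, w):
    """dist(u, v) and pred(u, v) tables for all pairs.

    w[u][v] is the weight of edge u -> v (missing keys mean no edge);
    pred[u][v] is the second-to-last vertex on a shortest u-v path.
    """
    INF = float('inf')
    dist = {u: {v: 0.0 if u == v else w.get(u, {}).get(v, INF)
                for v in vertices} for u in vertices}
    pred = {u: {v: u if u != v and dist[u][v] < INF else None
                for v in vertices} for u in vertices}
    for k in vertices:
        for u in vertices:
            for v in vertices:
                if dist[u][k] + dist[k][v] < dist[u][v]:
                    dist[u][v] = dist[u][k] + dist[k][v]
                    pred[u][v] = pred[k][v]  # best path now goes via k
    return dist, pred
```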
  • Approximability Preserving Reduction Giorgio Ausiello, Vangelis Paschos
    Approximability preserving reduction. Giorgio Ausiello, Vangelis Paschos. To cite this version: Giorgio Ausiello, Vangelis Paschos. Approximability preserving reduction. 2005. hal-00958028. HAL Id: hal-00958028, https://hal.archives-ouvertes.fr/hal-00958028. Preprint submitted on 11 Mar 2014. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Laboratoire d'Analyse et Modélisation de Systèmes pour l'Aide à la Décision, CNRS UMR 7024. Cahier du LAMSADE 227, September 2005. Approximability preserving reductions. Giorgio Ausiello (Dipartimento di Informatica e Sistemistica, Università degli Studi di Roma "La Sapienza", [email protected]) and Vangelis Th. Paschos (LAMSADE, CNRS UMR 7024 and Université Paris-Dauphine, [email protected]). September 15, 2005. Abstract: We present in this paper a "tour d'horizon" of the most important approximation-preserving reductions that have strongly influenced research about structure in approximability classes. 1 Introduction. The technique of transforming a problem into another in such a way that the solution of the latter entails, somehow, the solution of the former, is a classical mathematical technique that has found wide application in computer science since the seminal works of Cook [10] and Karp [19], who introduced particular kinds of transformations (called reductions) with the aim of studying the computational complexity of combinatorial decision problems.
  • ADMM for Multiaffine Constrained Optimization. Wenbo Gao, Donald
    ADMM for Multiaffine Constrained Optimization. Wenbo Gao, Donald Goldfarb, and Frank E. Curtis. Abstract: We propose an expansion of the scope of the alternating direction method of multipliers (ADMM). Specifically, we show that ADMM, when employed to solve problems with multiaffine constraints that satisfy certain easily verifiable assumptions, converges to the set of constrained stationary points if the penalty parameter in the augmented Lagrangian is sufficiently large. When the Kurdyka-Łojasiewicz (K-L) property holds, this is strengthened to convergence to a single constrained stationary point. Our analysis applies under assumptions that we have endeavored to make as weak as possible. It applies to problems that involve nonconvex and/or nonsmooth objective terms, in addition to the multiaffine constraints that can involve multiple (three or more) blocks of variables. To illustrate the applicability of our results, we describe examples including nonnegative matrix factorization, sparse learning, risk parity portfolio selection, nonconvex formulations of convex problems, and neural network training. In each case, our ADMM approach encounters only subproblems that have closed-form solutions.

    1. Introduction. The alternating direction method of multipliers (ADMM) is an iterative method that was initially proposed for solving linearly-constrained separable optimization problems having the form

    (P0)    $\inf_{x,y}\ f(x) + g(y)$  subject to  $Ax + By - b = 0$.

    The augmented Lagrangian L of the problem (P0), for some penalty parameter $\rho > 0$, is defined to be

    $L(x, y, w) = f(x) + g(y) + \langle w,\ Ax + By - b \rangle + \frac{\rho}{2}\|Ax + By - b\|^2$.

    In iteration k, with the iterate $(x^k, y^k, w^k)$, ADMM takes the following steps: (1) Minimize $L(x, y^k, w^k)$ with respect to x to obtain $x^{k+1}$.
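    To make the iteration concrete, here is a toy two-block instance — not the multiaffine setting of the paper above — with f(x) = (x − a)², g(y) = (y − c)² and the constraint x − y = 0, for which every ADMM step has a closed form:

```python
def admm_toy(a, c, rho=1.0, iters=100):
    """Two-block ADMM on: minimize (x - a)^2 + (y - c)^2  s.t.  x = y.

    Setting the gradient of the augmented Lagrangian to zero gives
    closed-form x- and y-updates; the optimum is x = y = (a + c) / 2.
    """
    x = y = w = 0.0
    for _ in range(iters):
        x = (2 * a - w + rho * y) / (2 + rho)  # argmin_x L(x, y^k, w^k)
        y = (2 * c + w + rho * x) / (2 + rho)  # argmin_y L(x^{k+1}, y, w^k)
        w = w + rho * (x - y)                  # multiplier (dual) update
    return x, y

print(admm_toy(0.0, 4.0))  # converges to approximately (2.0, 2.0)
```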
  • Approximation Algorithms
    Lecture 21: Approximation Algorithms.

    21.1 Overview. Suppose we are given an NP-complete problem to solve. Even though (assuming P ≠ NP) we can't hope for a polynomial-time algorithm that always gets the best solution, can we develop polynomial-time algorithms that always produce a "pretty good" solution? In this lecture we consider such approximation algorithms, for several important problems. Specific topics in this lecture include:
    • 2-approximation for vertex cover via greedy matchings.
    • 2-approximation for vertex cover via LP rounding.
    • Greedy O(log n) approximation for set-cover.
    • Approximation algorithms for MAX-SAT.

    21.2 Introduction. Suppose we are given a problem for which (perhaps because it is NP-complete) we can't hope for a fast algorithm that always gets the best solution. Can we hope for a fast algorithm that guarantees to get at least a "pretty good" solution? E.g., can we guarantee to find a solution that's within 10% of optimal? If not that, then how about within a factor of 2 of optimal? Or, anything non-trivial? As seen in the last two lectures, the class of NP-complete problems are all equivalent in the sense that a polynomial-time algorithm to solve any one of them would imply a polynomial-time algorithm to solve all of them (and, moreover, to solve any problem in NP). However, the difficulty of getting a good approximation to these problems varies quite a bit. In this lecture we will examine several important NP-complete problems and look at to what extent we can guarantee to get approximately optimal solutions, and by what algorithms.
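    The first topic listed in this excerpt, the 2-approximation for vertex cover via greedy matchings, fits in a few lines; a minimal sketch, assuming the input is a list of edges:

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching; take both endpoints of each matched edge.

    Any optimal cover must contain at least one endpoint per matched
    edge, so the returned cover has size at most 2 * OPT.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge not yet covered
            cover.update((u, v))               # match it; take both ends
    return cover

print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))  # -> {1, 2, 3, 4}
```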
  • Branch and Price for Chance Constrained Bin Packing
    Branch and Price for Chance Constrained Bin Packing Zheng Zhang Department of Industrial Engineering and Management, Shanghai Jiao Tong University, Shanghai 200240, China, [email protected] Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109, [email protected] Brian Denton Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109, [email protected] Xiaolan Xie Department of Industrial Engineering and Management, Shanghai Jiao Tong University, Shanghai 200240, China, [email protected] Center for Biomedical and Healthcare Engineering, École Nationale Supérieure des Mines, Saint Etienne 42023, France, [email protected] This article considers two versions of the stochastic bin packing problem with chance constraints. In the first version, we formulate the problem as a two-stage stochastic integer program that considers item-to-bin allocation decisions in the context of chance constraints on total item size within the bins. Next, we describe a distributionally robust formulation of the problem that assumes the item sizes are ambiguous. We present a branch-and-price approach based on a column generation reformulation that is tailored to these two model formulations. We further enhance this approach using antisymmetry branching and subproblem reformulations of the stochastic integer programming model based on conditional value at risk (CVaR) and probabilistic covers. For the distributionally robust formulation we derive a closed form expression for the chance constraints; furthermore, we show that under mild assumptions the robust model can be reformulated as a mixed integer program with significantly fewer integer variables compared to the stochastic integer program. We implement a series of numerical experiments based on real data in the context of an application to surgery scheduling in hospitals to demonstrate the performance of the methods for practical applications.
  • Distributed Optimization Algorithms for Networked Systems
    Distributed Optimization Algorithms for Networked Systems by Nikolaos Chatzipanagiotis Department of Mechanical Engineering & Materials Science Duke University Date: Approved: Michael M. Zavlanos, Supervisor Wilkins Aquino Henri Gavin Alejandro Ribeiro Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Mechanical Engineering & Materials Science in the Graduate School of Duke University 2015 Abstract Distributed Optimization Algorithms for Networked Systems by Nikolaos Chatzipanagiotis Department of Mechanical Engineering & Materials Science Duke University Date: Approved: Michael M. Zavlanos, Supervisor Wilkins Aquino Henri Gavin Alejandro Ribeiro An abstract of a dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Mechanical Engineering & Materials Science in the Graduate School of Duke University 2015 Copyright © 2015 by Nikolaos Chatzipanagiotis All rights reserved except the rights granted by the Creative Commons Attribution-Noncommercial Licence Abstract Distributed optimization methods allow us to decompose certain optimization problems into smaller, more manageable subproblems that are solved iteratively and in parallel. For this reason, they are widely used to solve large-scale problems arising in areas as diverse as wireless communications, optimal control, machine learning, artificial intelligence, computational biology, finance and statistics, to name a few. Moreover, distributed algorithms avoid the cost and fragility associated with centralized coordination, and provide better privacy for the autonomous decision makers. These are desirable properties, especially in applications involving networked robotics, communication or sensor networks, and power distribution systems. In this thesis we propose the Accelerated Distributed Augmented Lagrangians (ADAL) algorithm, a novel decomposition method for convex optimization problems.
  • Approximation Algorithms for Optimization of Combinatorial Dynamical Systems
    Approximation Algorithms for Optimization of Combinatorial Dynamical Systems Insoon Yang, Samuel A. Burden, Ram Rajagopal, S. Shankar Sastry, and Claire J. Tomlin Abstract This paper considers an optimization problem for a dynamical system whose evolution depends on a collection of binary decision variables. We develop scalable approximation algorithms with provable suboptimality bounds to provide computationally tractable solution methods even when the dimension of the system and the number of the binary variables are large. The proposed method employs a linear approximation of the objective function such that the approximate problem is defined over the feasible space of the binary decision variables, which is a discrete set. To define such a linear approximation, we propose two different variation methods: one uses continuous relaxation of the discrete space and the other uses convex combinations of the vector field and running payoff. The approximate problem is a 0–1 linear program, which can be solved by existing polynomial-time exact or approximation algorithms, and does not require the solution of the dynamical system. Furthermore, we characterize a sufficient condition ensuring the approximate solution has a provable suboptimality bound. We show that this condition can be interpreted as the concavity of the objective function. The performance and utility of the proposed algorithms are demonstrated with the ON/OFF control problems of interdependent refrigeration systems. I. INTRODUCTION The dynamics of critical infrastructures and their system elements—for instance, electric grid infrastructure and their electric load elements—are interdependent, meaning that the state of each infrastructure or its system elements influences and is influenced by the state of the others [1].
  • Scribe Notes 2 from 2018
    Advanced Algorithms, 25 September 2018. Lecture 2: Approximation Algorithms II. Lecturer: Mohsen Ghaffari. Scribe: Davin Choo.

    1 Approximation schemes. Previously, we described simple greedy algorithms that approximate the optimum for minimum set cover, maximal matching and vertex cover. We now formalize the notion of efficient (1 + ε)-approximation algorithms for minimization problems, à la [Vaz13]. Let I be an instance from the problem class of interest (e.g. minimum set cover). Denote |I| as the size of the problem (in bits), and |I_u| as the size of the problem (in unary). For example, if the input is just a number x (of at most n bits), then |I| = log₂(x) = O(n) while |I_u| = O(2ⁿ). This distinction of "size of input" will be important later when we discuss the knapsack problem.

    Definition 1 (Polynomial time approximation scheme (PTAS)). For cost metric c, an algorithm A is a PTAS if for each fixed ε > 0, c(A(I)) ≤ (1 + ε) · c(OPT(I)) and A runs in poly(|I|).

    By definition, the runtime for a PTAS may depend arbitrarily on ε. A stricter related definition is that of fully polynomial time approximation schemes (FPTAS). Assuming P ≠ NP, an FPTAS is the best one can hope for on NP-hard optimization problems.

    Definition 2 (Fully polynomial time approximation scheme (FPTAS)). For cost metric c, an algorithm A is an FPTAS if for each fixed ε > 0, c(A(I)) ≤ (1 + ε) · c(OPT(I)) and A runs in poly(|I|, 1/ε).

    As before, (1 − ε)-approximation, PTAS and FPTAS for maximization problems are defined similarly.
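    The knapsack problem foreshadowed in this excerpt is the standard example separating the two definitions: it admits an FPTAS via profit scaling. A minimal sketch of that classic scheme, assuming positive profits and eps in (0, 1):

```python
def knapsack_fptas(profits, weights, capacity, eps):
    """Profit-scaling FPTAS for 0/1 knapsack, poly(n, 1/eps) time.

    Profits are scaled down by K = eps * max(profits) / n, then an
    exact DP over scaled profit values finds, for each achievable
    profit, the minimum weight needed. The returned value q * K
    lower-bounds the true profit of the selected items and is at
    least (1 - eps) * OPT.
    """
    n = len(profits)
    K = eps * max(profits) / n
    scaled = [int(p / K) for p in profits]
    top = max(1, sum(scaled))
    INF = float('inf')
    min_w = [0.0] + [INF] * top  # min_w[q]: least weight for profit q
    for p, w in zip(scaled, weights):
        for q in range(top, p - 1, -1):  # 0/1 items: iterate downwards
            if min_w[q - p] + w < min_w[q]:
                min_w[q] = min_w[q - p] + w
    best_q = max(q for q in range(top + 1) if min_w[q] <= capacity)
    return best_q * K
```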
  • Approximation Algorithms
    Approximation Algorithms. Frédéric Giroire.

    Motivation. Goal: find "good" solutions for difficult problems (NP-hard), and be able to quantify the "goodness" of the given solution. Presentation of a technique to get approximation algorithms: fractional relaxation of integer linear programs.

    Fractional Relaxation. Reminder: Integer Linear Programs are often hard to solve (NP-hard), while Linear Programs (with real numbers) are easier to solve (polynomial-time algorithms). Idea: (1) relax the integrality constraints; (2) solve the (fractional) linear program; and then (3) round the solution to obtain an integral solution.

    Set Cover. Definition: An approximation algorithm produces, in polynomial time, a feasible solution whose objective function value is close to the optimal OPT; by close we mean within a guaranteed factor of the optimal. Example: a factor 2 approximation algorithm for the cardinality vertex cover problem, i.e. an algorithm that finds a cover of cost ≤ 2 · OPT in time polynomial in |V|.

    Problem: Given a universe U of n elements, a collection of subsets of U, S = {S₁, ..., S_k}, and a cost function c : S → Q⁺, find a minimum cost subcollection of S that covers all elements of U. This models numerous classical problems as special cases of set cover: vertex cover, minimum cost shortest path, and more. Definition: The frequency of an element is the number of sets it is in. The frequency of the most frequent element is denoted by f. Various approximation algorithms for set cover achieve one of the two factors O(log n) or f.
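    The O(log n) factor mentioned at the end of this excerpt is achieved by the classic greedy algorithm, which repeatedly takes the most cost-effective set; a minimal sketch for the unit-cost case:

```python
def greedy_set_cover(universe, sets):
    """Greedy set cover for unit costs: always take the set covering
    the most still-uncovered elements; achieves an H_n = O(log n)
    approximation factor."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("the given sets do not cover the universe")
        chosen.append(best)
        uncovered -= best
    return chosen

print(greedy_set_cover({1, 2, 3, 4, 5},
                       [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))
# -> [{1, 2, 3}, {4, 5}]
```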
  • Hardness of Approximation
    Hardness of Approximation. For approximation algorithms we generally focus on NP-hard problems — these problems do not admit exact polynomial-time algorithms unless P = NP. Despite being very hard to solve exactly, some of these problems are quite easy to approximate (e.g., the 2-approximation for the vertex-cover problem). Formally, an algorithm A for an optimization problem P is an α-approximation algorithm if, for any instance I of P, A(I) < α · OPT(I) when P is a minimization problem, and OPT(I) < α · A(I) if P is a maximization problem¹. Here A(I) is the solution produced by algorithm A and OPT(I) is the optimal solution, both for the instance I. Here α is a function of the size of I, α : |I| → ℝ⁺, and is often called the approximation factor or approximation guarantee for A.

    1 Reduction from NP-Complete Problems. There are many ways to show that for an optimization problem P there is no α-approximation algorithm for P unless P = NP. However, we will only consider the most straightforward approach. Our approach is very similar to the reduction approach for showing that a problem is NP-hard. Let's recall: for proving that a decision problem P is NP-complete, we first find a known NP-complete problem L and show that there is a reduction function f from the instances of L to the instances of P, such that yes instances of L map to the yes instances of P and no instances of L to the no instances of P; see Fig.
  • Handbook of Approximation Algorithms and Metaheuristics
    Handbook of Approximation Algorithms and Metaheuristics. Edited by Teofilo F. Gonzalez, University of California, Santa Barbara, U.S.A. Chapman & Hall/CRC, Taylor & Francis Group, Boca Raton, London, New York.

    Contents. PART I: Basic Methodologies.
    1 Introduction, Overview, and Notation (Teofilo F. Gonzalez)
    2 Basic Methodologies and Applications (Teofilo F. Gonzalez)
    3 Restriction Methods (Teofilo F. Gonzalez)
    4 Greedy Methods (Samir Khuller, Balaji Raghavachari, and Neal E. Young)
    5 Recursive Greedy Methods (Guy Even)
    6 Linear Programming (Yuval Rabani)
    7 LP Rounding and Extensions (Daya Ram Gaur and Ramesh Krishnamurti)
    8 On Analyzing Semidefinite Programming Relaxations of Complex Quadratic Optimization Problems (Anthony Man-Cho So, Yinyu Ye, and Jiawei Zhang)
    9 Polynomial-Time Approximation Schemes (Hadas Shachnai and Tami Tamir)
    10 Rounding, Interval Partitioning, and Separation (Sartaj Sahni)
    11 Asymptotic Polynomial-Time Approximation Schemes (Rajeev Motwani, Liadan O'Callaghan, and An Zhu)
    12 Randomized Approximation Techniques (Sotiris Nikoletseas and Paul Spirakis)
    13 Distributed Approximation Algorithms via LP-Duality and Randomization (Devdatt Dubhashi, Fabrizio Grandoni, and Alessandro Panconesi)
    14 Empirical Analysis of Randomized Algorithms (Holger H. Hoos and Thomas Stützle)
    15 Reductions That Preserve Approximability (Giorgio Ausiello and Vangelis Th. Paschos)
    16 Differential Ratio Approximation (Giorgio Ausiello and Vangelis Th. Paschos)
    17 Hardness of Approximation (Mario Szegedy)
    PART II: Local Search, Neural Networks, and Metaheuristics.
    18 Local Search (Roberto Solis-Oba)
    19 Stochastic Local Search (Holger H.