Lecture Notes Fall 2020

Total Pages: 16

File Type: PDF, Size: 1020 KB

Lecture Notes for IEOR 266: Graph Algorithms and Network Flows
Professor Dorit S. Hochbaum

Contents

1 Introduction
  1.1 Assignment problem
  1.2 Basic graph definitions
2 Totally unimodular matrices
3 Minimum cost network flow problem
  3.1 Transportation problem
  3.2 The maximum flow problem
  3.3 The shortest path problem
  3.4 The single source shortest paths
  3.5 The bipartite matching problem
  3.6 Summary of classes of network flow problems
  3.7 Negative cost cycles in graphs
4 Other network problems
  4.1 Eulerian tour
    4.1.1 Eulerian walk
  4.2 Chinese postman problem
    4.2.1 Undirected Chinese postman problem
    4.2.2 Directed Chinese postman problem
    4.2.3 Mixed Chinese postman problem
  4.3 Hamiltonian tour problem
  4.4 Traveling salesman problem (TSP)
  4.5 Vertex packing problems (Independent Set)
  4.6 Maximum clique problem
  4.7 Vertex cover problem
  4.8 Edge cover problem
  4.9 b-factor (b-matching) problem
  4.10 Graph colorability problem
  4.11 Minimum spanning tree problem
5 Complexity analysis
  5.1 Measuring quality of an algorithm
    5.1.1 Examples
  5.2 Growth of functions
  5.3 Definitions for asymptotic comparisons of functions
  5.4 Properties of asymptotic notation
  5.5 Caveats of complexity analysis
  5.6 A sketch of the ellipsoid method
6 Graph representations
  6.1 Node-arc adjacency matrix
  6.2 Node-node adjacency matrix
  6.3 Node-adjacency list
  6.4 Comparison of the graph representations
    6.4.1 Storage efficiency comparison
    6.4.2 Advantages and disadvantages comparison
7 Graph search algorithms
  7.1 Generic search algorithm
  7.2 Breadth first search (BFS)
  7.3 Depth first search (DFS)
  7.4 Applications of BFS and DFS
    7.4.1 Checking if a graph is strongly connected
    7.4.2 Checking if a graph is acyclic
    7.4.3 Checking if a graph is bipartite
8 Shortest paths
  8.1 Introduction
  8.2 Properties of DAGs
    8.2.1 Topological sort and directed acyclic graphs
  8.3 Properties of shortest paths
  8.4 Alternative formulation for SP from s to t
  8.5 Shortest paths on a directed acyclic graph (DAG)
  8.6 Applications of the shortest/longest path problem on a DAG
  8.7 Dijkstra's algorithm
  8.8 Bellman-Ford algorithm for single source shortest paths
  8.9 Floyd-Warshall algorithm for all pairs shortest paths
  8.10 D.B. Johnson's algorithm for all pairs shortest paths
  8.11 Matrix multiplication algorithm for all pairs shortest paths
  8.12 Why finding shortest paths in the presence of negative cost cycles is difficult
9 Maximum flow problem
  9.1 Introduction
  9.2 Linear programming duality and max flow min cut
  9.3 Applications
    9.3.1 Hall's theorem
    9.3.2 The selection problem
    9.3.3 Vertex cover on bipartite graphs
    9.3.4 Forest clearing
    9.3.5 Producing memory chips (VLSI layout)
  9.4 Flow Decomposition
  9.5 Algorithms
    9.5.1 Ford-Fulkerson algorithm
    9.5.2 Maximum capacity augmenting path algorithm
    9.5.3 Capacity scaling algorithm
    9.5.4 Dinic's algorithm for maximum flow
10 Goldberg's Algorithm - the Push/Relabel Algorithm
  10.1 Overview
  10.2 The Generic Algorithm
  10.3 Variants of Push/Relabel Algorithm
  10.4 Wave Implementation of Goldberg's Algorithm (Lift-to-front)
11 The pseudoflow algorithm (Hochbaum's PseudoFlow HPF)
  11.1 Initialization
  11.2 A labeling pseudoflow algorithm
  11.3 The monotone pseudoflow algorithm
  11.4 Complexity summary
12 The minimum spanning tree problem (MST)
  12.1 IP formulation
  12.2 Properties of MST
    12.2.1 Cut Optimality Condition
    12.2.2 Path Optimality Condition
  12.3 Algorithms of MST
    12.3.1 Prim's algorithm
    12.3.2 Kruskal's algorithm
  12.4 Maximum spanning tree
13 Complexity classes and NP-completeness
  13.1 Search vs. Decision
  13.2 The class NP
    13.2.1 Some Problems in NP
  13.3 co-NP
    13.3.1 Some Problems in co-NP
  13.4 NP and co-NP
  13.5 NP-completeness and reductions
    13.5.1 Reducibility
    13.5.2 NP-Completeness
14 Approximation algorithms
  14.1 Traveling salesperson problem (TSP)
15 Integer programs with two variables per inequality (IP2)
  15.1 A practical example: open-pit mining
  15.2 The maximum closure problem
  15.3 Monotone IP2
  15.4 Non-monotone IP2
  15.5 2-approximations for integer programs with two variables per inequality
    15.5.1 The quality of the bound provided by the monotonized system
    15.5.2 Computing an approximate solution
Recommended publications
  • Oracles, Ellipsoid Method and Their Uses in Convex Optimization
    princeton univ. F'17 cos 521: Advanced Algorithm Design. Lecture 17: Oracles, Ellipsoid method and their uses in convex optimization. Lecturer: Matt Weinberg. Scribe: Sanjeev Arora. Oracle: A person or agency considered to give wise counsel or prophetic predictions or precognition of the future, inspired by the gods. Recall that Linear Programming is the following problem: maximize cᵀx subject to Ax ≤ b, x ≥ 0, where A is an m × n real constraint matrix and x, c ∈ ℝⁿ. Recall that if the number of bits to represent the input is L, a polynomial time solution to the problem is allowed to have a running time of poly(n, m, L). The Ellipsoid algorithm for linear programming is a specific application of the ellipsoid method developed by the Soviet mathematicians Shor (1970) and Yudin and Nemirovskii (1975). Khachiyan (1979) applied the ellipsoid method to derive the first polynomial time algorithm for linear programming. Although the algorithm is theoretically better than the Simplex algorithm, which has an exponential running time in the worst case, it is very slow practically and not competitive with Simplex. Nevertheless, it is a very important theoretical tool for developing polynomial time algorithms for a large class of convex optimization problems, which are much more general than linear programming. In fact we can use it to solve convex optimization problems that are even too large to write down. 1 Linear programs too big to write down. Often we want to solve linear programs that are too large to even write down (or, for that matter, too big to fit into all the hard drives of the world).
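    To make the iteration concrete, here is a minimal numpy sketch of one central-cut ellipsoid step, using the standard update formulas rather than anything specific to these lecture notes; the separation oracle that supplies the violated-constraint normal a is assumed.

```python
import numpy as np

def ellipsoid_step(x, P, a):
    """One central-cut ellipsoid update.

    E = {z : (z - x)^T P^{-1} (z - x) <= 1} is the current ellipsoid and
    a is the normal of a violated constraint, so we keep the half-ellipsoid
    {z in E : a^T z <= a^T x} and return the smallest ellipsoid containing it.
    """
    n = len(x)
    g = P @ a / np.sqrt(a @ P @ a)                  # scaled cut direction
    x_new = x - g / (n + 1)                         # shifted center
    P_new = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(g, g))
    return x_new, P_new

# One step on the unit ball in R^2, cutting with a = (1, 0):
x, P = ellipsoid_step(np.zeros(2), np.eye(2), np.array([1.0, 0.0]))
print(x, P)  # center shifts to (-1/3, 0); volume shrinks by a constant factor
```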
  • On Computing Longest Paths in Small Graph Classes
    On Computing Longest Paths in Small Graph Classes. Ryuhei Uehara∗ Yushi Uno† July 28, 2005. Abstract: The longest path problem is to find a longest path in a given graph. While the graph classes in which the Hamiltonian path problem can be solved efficiently are widely investigated, few graph classes are known in which the longest path problem can be solved efficiently. For a tree, a simple linear time algorithm for the longest path problem is known. We first generalize the algorithm, and show that the longest path problem can be solved efficiently for weighted trees, block graphs, and cacti. We next show that the longest path problem can be solved efficiently on some graph classes that have natural interval representations. Keywords: efficient algorithms, graph classes, longest path problem. 1 Introduction. The Hamiltonian path problem is one of the most well known NP-hard problems, and there are numerous applications of the problem [17]. For such an intractable problem, there are two major approaches: approximation algorithms [20, 2, 35] and algorithms with parameterized complexity analyses [15]. In both approaches, we have to change the decision problem to the optimization problem. Therefore the longest path problem is one of the basic problems from the viewpoint of combinatorial optimization. From the practical point of view, it is also a very natural approach to try to find a longest path in a given graph, even if it does not have a Hamiltonian path. However, finding a longest path seems to be more difficult than determining whether the given graph has a Hamiltonian path or not.
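    The simple linear-time tree algorithm mentioned above is commonly realized as two BFS passes (first to a farthest vertex, then from it); below is a sketch under that assumption, for unweighted trees given as adjacency dictionaries.

```python
from collections import deque

def farthest(adj, s):
    """BFS from s; return the farthest vertex and its distance (in edges)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    far = max(dist, key=dist.get)
    return far, dist[far]

def tree_longest_path(adj):
    """Length of a longest path in a tree: BFS to a farthest vertex u,
    then BFS from u; u is an endpoint of some longest path."""
    u, _ = farthest(adj, next(iter(adj)))
    _, d = farthest(adj, u)
    return d

# Path 0-1-2 with an extra leaf 3 on vertex 1: longest path has 2 edges.
print(tree_longest_path({0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}))  # 2
```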
  • Approximating the Maximum Clique Minor and Some Subgraph Homeomorphism Problems
    Approximating the maximum clique minor and some subgraph homeomorphism problems. Noga Alon1, Andrzej Lingas2, and Martin Wahlen2. 1 Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv. [email protected] 2 Department of Computer Science, Lund University, 22100 Lund. [email protected], [email protected], Fax +46 46 13 10 21. Abstract: We consider the "minor" and "homeomorphic" analogues of the maximum clique problem, i.e., the problems of determining the largest h such that the input graph (on n vertices) has a minor isomorphic to K_h or a subgraph homeomorphic to K_h, respectively, as well as the problem of finding the corresponding subgraphs. We term them the maximum clique minor problem and the maximum homeomorphic clique problem, respectively. We observe that a known result of Kostochka and Thomason supplies an O(√n) bound on the approximation factor for the maximum clique minor problem achievable in polynomial time. We also provide an independent proof of nearly the same approximation factor with explicit polynomial-time estimation, by exploiting the minor separator theorem of Plotkin et al. Next, we show that another known result of Bollobás and Thomason and of Komlós and Szemerédi provides an O(√n) bound on the approximation factor for the maximum homeomorphic clique achievable in polynomial time. On the other hand, we show an Ω(n^(1/2−O(1/(log n)^γ))) lower bound (for some constant γ, unless NP ⊆ ZPTIME(2^((log n)^O(1)))) on the best approximation factor achievable efficiently for the maximum homeomorphic clique problem, nearly matching our upper bound.
  • Linear Programming Notes
    Linear Programming Notes. Lecturer: David Williamson, Cornell ORIE. Scribe: Kevin Kircher, Cornell MAE. These notes summarize the central definitions and results of the theory of linear programming, as taught by David Williamson in ORIE 6300 at Cornell University in the fall of 2014. Proofs and discussion are mostly omitted. These notes also draw on Convex Optimization by Stephen Boyd and Lieven Vandenberghe, and on Stephen Boyd's notes on ellipsoid methods. Prof. Williamson's full lecture notes can be found here.
    Contents:
    1 The linear programming problem
    2 Duality
    3 Geometry
    4 Optimality conditions
      4.1 Answer 1: cᵀx = bᵀy for a y ∈ F(D)
      4.2 Answer 2: complementary slackness holds for a y ∈ F(D)
      4.3 Answer 3: the verifying y is in F(D)
    5 The simplex method
      5.1 Reformulation
      5.2 Algorithm
      5.3 Convergence
      5.4 Finding an initial basic feasible solution
      5.5 Complexity of a pivot
      5.6 Pivot rules
      5.7 Degeneracy and cycling
      5.8 Number of pivots
      5.9 Varieties of the simplex method
      5.10 Sensitivity analysis
      5.11 Solving large-scale linear programs
    6 Good algorithms
    7 Ellipsoid methods
      7.1 Ellipsoid geometry
      7.2 The basic ellipsoid method
      7.3 The ellipsoid method with objective function cuts
      7.4 Separation oracles
    8 Interior-point methods
      8.1 Finding a descent direction that preserves feasibility
      8.2 The affine-scaling direction
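    As a quick illustration of the duality facts behind "Answer 1" above (the data below is made up for the example, not taken from the notes): for the primal max{cᵀx : Ax ≤ b, x ≥ 0} and dual min{bᵀy : Aᵀy ≥ c, y ≥ 0}, weak duality gives cᵀx ≤ bᵀy for every feasible pair, so equal objective values certify optimality of both.

```python
import numpy as np

A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])

x = np.array([2.0, 2.0])   # primal feasible: Ax = [4, 6] <= b and x >= 0
y = np.array([1.0, 1.0])   # dual feasible: A^T y = [3, 2] >= c and y >= 0

assert np.all(A @ x <= b + 1e-9) and np.all(x >= 0)
assert np.all(A.T @ y >= c - 1e-9) and np.all(y >= 0)
print(c @ x, b @ y)        # 10.0 10.0 -> equal objectives, so both are optimal
```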
  • Finding Supported Paths in Heterogeneous Networks
    Algorithms 2015, 8, 810-831; doi:10.3390/a8040810. OPEN ACCESS. algorithms, ISSN 1999-4893, www.mdpi.com/journal/algorithms. Article: Finding Supported Paths in Heterogeneous Networks †. Guillaume Fertin 1,*, Christian Komusiewicz 2, Hafedh Mohamed-Babou 1 and Irena Rusu 1. 1 LINA, UMR CNRS 6241, Université de Nantes, Nantes 44322, France; E-Mails: [email protected] (H.M.-B.); [email protected] (I.R.) 2 Institut für Softwaretechnik und Theoretische Informatik, Technische Universität Berlin, Berlin D-10587, Germany; E-Mail: [email protected]. † This paper is an extended version of our paper published in the Proceedings of the 11th International Symposium on Experimental Algorithms (SEA 2012), Bordeaux, France, 7-9 June 2012, originally entitled "Algorithms for Subnetwork Mining in Heterogeneous Networks". * Author to whom correspondence should be addressed; E-Mail: [email protected]; Tel.: +33-2-5112-5824. Academic Editor: Giuseppe Lancia. Received: 17 June 2015 / Accepted: 29 September 2015 / Published: 9 October 2015. Abstract: Subnetwork mining is an essential issue in the analysis of biological, social and communication networks. Recent applications require the simultaneous mining of several networks on the same or a similar vertex set. That is, one searches for subnetworks fulfilling different properties in each input network. We study the case in which the input consists of a directed graph D and an undirected graph G on the same vertex set, and the sought pattern is a path P in D whose vertex set induces a connected subgraph of G. In this context, three concrete problems arise, depending on whether the existence of P is questioned or whether the length of P is to be optimized: in that case, one can search for a longest path or (maybe less intuitively) a shortest one.
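    The feasibility check at the core of this setting (does the vertex set of a given path in D induce a connected subgraph of G?) reduces to a BFS restricted to the path's vertices; here is a plain-Python sketch (adjacency structures assumed, not the authors' code).

```python
from collections import deque

def induces_connected(G, path_vertices):
    """G: dict mapping each vertex to a set of undirected neighbors.
    Return True iff the path's vertices induce a connected subgraph of G."""
    S = set(path_vertices)
    if not S:
        return True
    start = next(iter(S))
    seen = {start}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in G[u]:
            if v in S and v not in seen:   # only walk inside the induced subgraph
                seen.add(v)
                q.append(v)
    return seen == S

G = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2}}
print(induces_connected(G, [0, 1, 2]))  # True: {0,1,2} is connected in G
print(induces_connected(G, [1, 3]))     # False: 1 and 3 are not adjacent in G
```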
  • On Digraph Width Measures in Parameterized Algorithmics (Extended Abstract)
    On Digraph Width Measures in Parameterized Algorithmics (extended abstract). Robert Ganian1, Petr Hliněný1, Joachim Kneis2, Alexander Langer2, Jan Obdržálek1, and Peter Rossmanith2. 1 Faculty of Informatics, Masaryk University, Brno, Czech Republic {xganian1,hlineny,obdrzalek}@fi.muni.cz 2 Theoretical Computer Science, RWTH Aachen University, Germany {kneis,langer,rossmani}@cs.rwth-aachen.de. Abstract: In contrast to undirected width measures (such as tree-width or clique-width), which have provided many important algorithmic applications, analogous measures for digraphs such as DAG-width or Kelly-width do not seem so successful. Several recent papers, e.g. those of Kreutzer-Ordyniak, Dankelmann-Gutin-Kim, or Lampis-Kaouri-Mitsou, have given some evidence for this. We support this direction by showing that many quite different problems remain hard even on graph classes that are restricted well beyond simply having small DAG-width. To this end, we introduce the new measures K-width and DAG-depth. On the positive side, we also note that taking Kanté's directed generalization of rank-width as a parameter makes many problems fixed parameter tractable. 1 Introduction. The very successful concept of graph tree-width was introduced in the context of the Graph Minors project by Robertson and Seymour [RS86, RS91], and it turned out to be very useful for efficiently solving graph problems. Tree-width is a property of undirected graphs. In this paper we will be interested in directed graphs or digraphs. Naturally, a width measure specifically tailored to digraphs with all the nice properties of tree-width would be tremendously useful. The properties of such a measure should include at least the following: i) The width measure is small on many interesting instances.
  • Fixed-Parameter Algorithms for Longest Heapable Subsequence and Maximum Binary Tree
    Fixed-Parameter Algorithms for Longest Heapable Subsequence and Maximum Binary Tree. Karthekeyan Chandrasekaran∗, Elena Grigorescu†, Gabriel Istrate‡, Shubhang Kulkarni†, Young-San Lin†, Minshen Zhu†. November 23, 2020. Abstract: A heapable sequence is a sequence of numbers that can be arranged in a min-heap data structure. Finding a longest heapable subsequence of a given sequence was proposed by Byers, Heeringa, Mitzenmacher, and Zervas (ANALCO 2011) as a generalization of the well-studied longest increasing subsequence problem, and its complexity still remains open. An equivalent formulation of the longest heapable subsequence problem is that of finding a maximum-sized binary tree in a given permutation directed acyclic graph (permutation DAG). In this work, we study parameterized algorithms for both longest heapable subsequence as well as maximum-sized binary tree. We show the following results:
    1. The longest heapable subsequence problem can be solved in k^O(log k)·n time, where k is the number of distinct values in the input sequence.
    2. We introduce the alphabet size as a new parameter in the study of computational problems in permutation DAGs. Our result on longest heapable subsequence implies that the maximum-sized binary tree problem in a given permutation DAG is fixed-parameter tractable when parameterized by the alphabet size.
    3. We show that the alphabet size with respect to a fixed topological ordering can be computed in polynomial time, admits a min-max relation, and has a polyhedral description.
    4. We design a fixed-parameter algorithm with run-time w^O(w)·n for the maximum-sized binary tree problem in undirected graphs when parameterized by treewidth w.
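    For orientation, the classical special case being generalized here, longest increasing subsequence, has the well-known O(n log n) patience-sorting solution sketched below; this is only the baseline, not the paper's k^O(log k)·n algorithm.

```python
import bisect

def lis_length(seq):
    """Length of a longest strictly increasing subsequence.
    tails[i] = smallest possible tail of an increasing subsequence
    of length i + 1 seen so far."""
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)    # x extends the longest subsequence found so far
        else:
            tails[i] = x       # x gives a smaller tail for length i + 1
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4 (e.g., 1, 4, 5, 9)
```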
  • 1979 Khachiyan's Ellipsoid Method
    Convex Optimization CMU-10725: Ellipsoid Methods. Barnabás Póczos & Ryan Tibshirani. Outline: linear programs; the simplex algorithm; running time: polynomial or exponential?; cutting planes & ellipsoid methods for LP; cutting planes & ellipsoid methods for unconstrained minimization. Books to read: David G. Luenberger, Yinyu Ye: Linear and Nonlinear Programming; Boyd and Vandenberghe: Convex Optimization. Back to linear programs: inequality form of LPs using matrix notation; standard form of LPs. We already know: any LP can be rewritten as an equivalent standard LP. Motivation: linear programs can be viewed in two somewhat complementary ways: as continuous optimization (continuous variables, convex feasible region, continuous objective function) and as combinatorial problems (solutions can be found among the vertices of the convex polyhedron defined by the constraints). An issue with combinatorial search methods: the number of vertices may be exponentially large, making direct search impossible for even modest size problems. Simplex method: jumping from one vertex to another, it improves the value of the objective until the process reaches an optimal point. It performs well in practice, visiting only a small fraction of the total number of vertices. Running time: polynomial or exponential? The simplex method is not polynomial time. Dantzig observed that for problems with m ≤ 50 and n ≤ 200 the number of iterations is ordinarily less than 1.5m. At that time many researchers believed (and tried to prove) that the simplex algorithm is polynomial in the size of the problem (n, m). In 1972, Klee and Minty showed by examples that for certain linear programs the simplex method will examine every vertex. These examples proved that in the worst case, the simplex method requires a number of steps that is exponential in the size of the problem.
  • Convex Optimization: Algorithms and Complexity
    Foundations and Trends® in Machine Learning, Vol. 8, No. 3-4 (2015), 231-357. © 2015 S. Bubeck. DOI: 10.1561/2200000050. Convex Optimization: Algorithms and Complexity. Sébastien Bubeck, Theory Group, Microsoft Research. [email protected]
    Contents:
    1 Introduction
      1.1 Some convex optimization problems in machine learning
      1.2 Basic properties of convexity
      1.3 Why convexity?
      1.4 Black-box model
      1.5 Structured optimization
      1.6 Overview of the results and disclaimer
    2 Convex optimization in finite dimension
      2.1 The center of gravity method
      2.2 The ellipsoid method
      2.3 Vaidya's cutting plane method
      2.4 Conjugate gradient
    3 Dimension-free convex optimization
      3.1 Projected subgradient descent for Lipschitz functions
      3.2 Gradient descent for smooth functions
      3.3 Conditional gradient descent, aka Frank-Wolfe
      3.4 Strong convexity
      3.5 Lower bounds
      3.6 Geometric descent
      3.7 Nesterov's accelerated gradient descent
    4 Almost dimension-free convex optimization in non-Euclidean spaces
      4.1 Mirror maps
      4.2 Mirror descent
      4.3 Standard setups for mirror descent
      4.4 Lazy mirror descent, aka Nesterov's dual averaging
      4.5 Mirror prox
      4.6 The vector field point of view on MD, DA, and MP
    5 Beyond the black-box model
      5.1 Sum of a smooth and a simple non-smooth term
      5.2 Smooth saddle-point representation of a non-smooth function
      5.3 Interior point methods
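    As a taste of the monograph's dimension-free methods, here is a minimal sketch of projected subgradient descent, x_{t+1} = Π_X(x_t − η g_t), with Euclidean projection onto a ball; the step size, radius, and example objective are illustrative choices, not Bubeck's.

```python
import numpy as np

def project_ball(x, R=1.0):
    """Euclidean projection onto the ball of radius R centered at the origin."""
    n = np.linalg.norm(x)
    return x if n <= R else (R / n) * x

def projected_subgradient(grad, x0, eta, steps, R=1.0):
    """Minimize a Lipschitz convex function over the R-ball.
    grad(x) returns a subgradient at x; the running average of the
    iterates is returned, as in the standard O(1/sqrt(t)) guarantee."""
    x = x0.copy()
    avg = np.zeros_like(x0)
    for t in range(1, steps + 1):
        x = project_ball(x - eta * grad(x), R)
        avg += (x - avg) / t          # incremental mean of the iterates
    return avg

# Example: f(x) = ||x - c||_1, with subgradient sign(x - c); minimizer is c.
c = np.array([0.3, -0.2])
sol = projected_subgradient(lambda x: np.sign(x - c), np.zeros(2), eta=0.01, steps=5000)
print(sol)  # close to c, which lies inside the unit ball
```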
  • On Some Polynomial-Time Algorithms for Solving Linear Programming Problems
    Available online at www.pelagiaresearchlibrary.com. Pelagia Research Library. Advances in Applied Science Research, 2012, 3(5): 3367-3373. ISSN: 0976-8610. CODEN (USA): AASRFC. On Some Polynomial-time Algorithms for Solving Linear Programming Problems. B. O. Adejo and H. S. Adaji, Department of Mathematical Sciences, Kogi State University, Anyigba. ABSTRACT: In this article we survey some polynomial-time algorithms for solving linear programming problems, namely: the ellipsoid method, Karmarkar's algorithm and the affine scaling algorithm. Finally, we consider a test problem which we solved with the methods where applicable, and conclusions are drawn from the results so obtained. INTRODUCTION: An algorithm A is said to be a polynomial-time algorithm for a problem P if the number of steps (i.e., iterations) required to solve P on applying A is bounded by a polynomial function O(m, n, L) of the dimension and input length of the problem. The ellipsoid method is a specific algorithm developed by the Soviet mathematicians Shor [5] and Yudin and Nemirovskii [7]. Khachian [4] adapted the ellipsoid method to derive the first polynomial-time algorithm for linear programming. Although the algorithm is theoretically better than the simplex algorithm, which has an exponential running time in the worst case, it is very slow practically and not competitive with the simplex method. Nevertheless, it is a very important theoretical tool for developing polynomial-time algorithms for a large class of convex optimization problems. Narendra K. Karmarkar is an Indian mathematician, renowned for developing Karmarkar's algorithm. He is listed as an ISI highly cited researcher.
  • Solving Ellipsoidal Inclusion and Optimal Experimental Design Problems: Theory and Algorithms
    SOLVING ELLIPSOIDAL INCLUSION AND OPTIMAL EXPERIMENTAL DESIGN PROBLEMS: THEORY AND ALGORITHMS. A Dissertation Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, by Selin Damla Ahipaşaoğlu, August 2009. © 2009 Selin Damla Ahipaşaoğlu. ALL RIGHTS RESERVED. Selin Damla Ahipaşaoğlu, Ph.D., Cornell University 2009. This thesis is concerned with the development and analysis of Frank-Wolfe type algorithms for two problems, namely the ellipsoidal inclusion problem of optimization and the optimal experimental design problem of statistics. These two problems are closely related to each other and can be solved simultaneously, as discussed in Chapter 1 of this thesis. Chapter 1 introduces the problems in parametric forms. The weak and strong duality relations between them are established and the optimality criteria are derived. Based on this discussion, we define ε-primal feasible and ε-approximate optimal solutions: these solutions do not necessarily satisfy the optimality criteria, but the violation is controlled by the error parameter ε and can be arbitrarily small. Chapter 2 deals with the most well-known special case of the optimal experimental design problems: the D-optimal design problem and its dual, the Minimum-Volume Enclosing Ellipsoid (MVEE) problem. Chapter 3 focuses on another special case, the A-optimal design problem. In Chapter 4 we focus on a generalization of the optimal experimental design problem in which a subset, but not all, of the parameters is being estimated. We focus on the following two problems: the D_k-optimal design problem and the A_k-optimal design problem, generalizations of the D-optimal and the A-optimal design problems, respectively.
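    The best-known instance of the Frank-Wolfe scheme studied in such theses is the Khachiyan-type coordinate-ascent algorithm for the MVEE problem; below is a compact numpy sketch using the standard lifted formulation and exact line search (tolerances and names are illustrative, and this is not the thesis code).

```python
import numpy as np

def mvee_weights(P, tol=1e-6, max_iter=10000):
    """P: d x m array of m points. Returns weights u, plus the center and
    shape matrix of an approximate minimum-volume enclosing ellipsoid
    {x : (x - center)^T shape (x - center) <= 1}."""
    d, m = P.shape
    Q = np.vstack([P, np.ones(m)])              # lift points to dimension d + 1
    u = np.full(m, 1.0 / m)                     # start from uniform weights
    for _ in range(max_iter):
        M = Q @ np.diag(u) @ Q.T                # weighted moment matrix
        kappa = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(M), Q)  # q_i^T M^-1 q_i
        j = np.argmax(kappa)
        if kappa[j] / (d + 1) - 1.0 <= tol:     # duality-gap style stopping test
            break
        beta = (kappa[j] - d - 1) / ((d + 1) * (kappa[j] - 1))    # exact line search
        u = (1 - beta) * u                      # Frank-Wolfe step toward vertex e_j
        u[j] += beta
    center = P @ u
    shape = np.linalg.inv(P @ np.diag(u) @ P.T - np.outer(center, center)) / d
    return u, center, shape
```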
  • Optimal Longest Paths by Dynamic Programming∗
    Optimal Longest Paths by Dynamic Programming∗. Tomáš Balyo1, Kai Fieger1, and Christian Schulz1,2. 1 Karlsruhe Institute of Technology, Karlsruhe, Germany. {tomas.balyo, christian.schulz}@kit.edu, fi[email protected] 2 University of Vienna, Vienna, Austria. [email protected]. Abstract: We propose an optimal algorithm for solving the longest path problem in undirected weighted graphs. By using graph partitioning and dynamic programming, we obtain an algorithm that is significantly faster than other state-of-the-art methods. This enables us to solve instances that have previously been unsolved. 1998 ACM Subject Classification: G.2.2 Graph Theory. Keywords and phrases: Longest Path, Graph Partitioning. Digital Object Identifier: 10.4230/LIPIcs.SEA.2017. 1 Introduction. The longest path problem (LP) is to find a simple path of maximum length between two given vertices of a graph, where length is defined as the number of edges or the total weight of the edges in the path. The problem is known to be NP-complete [5] and has several applications, such as designing circuit boards [8, 7], project planning [2], information retrieval [15] and patrolling algorithms for multiple robots in graphs [9]. For example, when designing circuit boards where the length difference between wires has to be kept small [8, 7], the longest path problem manifests itself when the length of shorter wires is supposed to be increased. Another example application is project planning/scheduling, where the problem can be used to determine the least amount of time in which a project could be completed [2]. We organize the rest of the paper as follows.
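    For contrast with the partitioning-based exact algorithm proposed in the paper, the textbook exact baseline is a Held-Karp-style dynamic program over vertex subsets, feasible only for small n; a sketch for nonnegative edge weights follows (this is not the authors' method).

```python
def longest_simple_path(n, w):
    """w[u][v]: edge weight or None if no edge. Returns the maximum total
    weight of a simple path between any pair of vertices, via DP over
    states (set of visited vertices, last vertex). O(2^n * n^2) time."""
    dp = {(1 << v, v): 0.0 for v in range(n)}   # single-vertex paths
    best = 0.0
    for mask in range(1, 1 << n):               # subsets in increasing order
        for v in range(n):
            if (mask, v) not in dp:
                continue
            best = max(best, dp[(mask, v)])
            for u in range(n):                  # extend the path by edge (v, u)
                if w[v][u] is not None and not (mask >> u) & 1:
                    key = (mask | (1 << u), u)
                    cand = dp[(mask, v)] + w[v][u]
                    if cand > dp.get(key, float('-inf')):
                        dp[key] = cand
    return best

# Triangle 0-1-2 with weights 1, 2, 3: longest simple path uses two edges.
W = [[None, 1.0, 3.0], [1.0, None, 2.0], [3.0, 2.0, None]]
print(longest_simple_path(3, W))  # 5.0 (path 1 - 0 - 2 ... i.e., edges 1+... = 2+3)
```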